---
title: kubectl config set-credentials
content_type: tool-reference
weight: 30
auto_generated: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
## Synopsis
Set a user entry in kubeconfig.
Specifying a name that already exists will merge new fields on top of existing values.
Client-certificate flags:
--client-certificate=certfile --client-key=keyfile
Bearer token flags:
--token=bearer_token
Basic auth flags:
--username=basic_user --password=basic_password
Bearer token and basic auth are mutually exclusive.
```
kubectl config set-credentials NAME [--client-certificate=path/to/certfile] [--client-key=path/to/keyfile] [--token=bearer_token] [--username=basic_user] [--password=basic_password] [--auth-provider=provider_name] [--auth-provider-arg=key=value] [--exec-command=exec_command] [--exec-api-version=exec_api_version] [--exec-arg=arg] [--exec-env=key=value]
```
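The merge behaviour described above means a user entry can be built up across several invocations. A minimal illustrative sketch, using a hypothetical "demo-user" entry:

```
# Illustrative only: the second command adds the client key while keeping the
# client certificate that the first command set on the same entry
kubectl config set-credentials demo-user --client-certificate=~/.kube/demo.crt
kubectl config set-credentials demo-user --client-key=~/.kube/demo.key

# Inspect the merged entry
kubectl config view -o jsonpath='{.users[?(@.name=="demo-user")].user}'
```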
## Examples
```
# Set only the "client-key" field on the "cluster-admin"
# entry, without touching other values
kubectl config set-credentials cluster-admin --client-key=~/.kube/admin.key
# Set basic auth for the "cluster-admin" entry
kubectl config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif
# Embed client certificate data in the "cluster-admin" entry
kubectl config set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --embed-certs=true
# Enable the Google Cloud Platform auth provider for the "cluster-admin" entry
kubectl config set-credentials cluster-admin --auth-provider=gcp
# Enable the OpenID Connect auth provider for the "cluster-admin" entry with additional arguments
kubectl config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar
# Remove the "client-secret" config value for the OpenID Connect auth provider for the "cluster-admin" entry
kubectl config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-secret-
# Enable new exec auth plugin for the "cluster-admin" entry
kubectl config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1
# Enable new exec auth plugin for the "cluster-admin" entry with interactive mode
kubectl config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1 --exec-interactive-mode=Never
# Define new exec auth plugin arguments for the "cluster-admin" entry
kubectl config set-credentials cluster-admin --exec-arg=arg1 --exec-arg=arg2
# Create or update exec auth plugin environment variables for the "cluster-admin" entry
kubectl config set-credentials cluster-admin --exec-env=key1=val1 --exec-env=key2=val2
# Remove exec auth plugin environment variables for the "cluster-admin" entry
kubectl config set-credentials cluster-admin --exec-env=var-to-remove-
```
## Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--auth-provider string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Auth provider for the user entry in kubeconfig</p></td>
</tr>
<tr>
<td colspan="2">--auth-provider-arg strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>'key=value' arguments for the auth provider</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to client-certificate file for the user entry in kubeconfig</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to client-key file for the user entry in kubeconfig</p></td>
</tr>
<tr>
<td colspan="2">--embed-certs tristate[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Embed client cert/key for the user entry in kubeconfig</p></td>
</tr>
<tr>
<td colspan="2">--exec-api-version string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>API version of the exec credential plugin for the user entry in kubeconfig</p></td>
</tr>
<tr>
<td colspan="2">--exec-arg strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>New arguments for the exec credential plugin command for the user entry in kubeconfig</p></td>
</tr>
<tr>
<td colspan="2">--exec-command string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Command for the exec credential plugin for the user entry in kubeconfig</p></td>
</tr>
<tr>
<td colspan="2">--exec-env strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>'key=value' environment values for the exec credential plugin</p></td>
</tr>
<tr>
<td colspan="2">--exec-interactive-mode string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>InteractiveMode of the exec credentials plugin for the user entry in kubeconfig</p></td>
</tr>
<tr>
<td colspan="2">--exec-provide-cluster-info tristate[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>ProvideClusterInfo of the exec credentials plugin for the user entry in kubeconfig</p></td>
</tr>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for set-credentials</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>password for the user entry in kubeconfig</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>token for the user entry in kubeconfig</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>username for the user entry in kubeconfig</p></td>
</tr>
</tbody>
</table>
## Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use a particular kubeconfig file</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
## See Also
* [kubectl config](../) - Modify kubeconfig files
---
title: kubectl config set-cluster
content_type: tool-reference
weight: 30
auto_generated: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
## Synopsis
Set a cluster entry in kubeconfig.
Specifying a name that already exists will merge new fields on top of existing values for those fields.
```
kubectl config set-cluster NAME [--server=server] [--certificate-authority=path/to/certificate/authority] [--insecure-skip-tls-verify=true] [--tls-server-name=example.com]
```
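As with set-credentials, repeating the command against the same cluster name merges the new fields into the existing entry. A minimal illustrative sketch, using a hypothetical "demo" cluster:

```
# Illustrative only: the second command adds a certificate authority without
# overwriting the server address set by the first command
kubectl config set-cluster demo --server=https://10.0.0.1:6443
kubectl config set-cluster demo --certificate-authority=~/.kube/demo-ca.crt
```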
## Examples
```
# Set only the server field on the e2e cluster entry without touching other values
kubectl config set-cluster e2e --server=https://1.2.3.4
# Embed certificate authority data for the e2e cluster entry
kubectl config set-cluster e2e --embed-certs --certificate-authority=~/.kube/e2e/kubernetes.ca.crt
# Disable cert checking for the e2e cluster entry
kubectl config set-cluster e2e --insecure-skip-tls-verify=true
# Set the custom TLS server name to use for validation for the e2e cluster entry
kubectl config set-cluster e2e --tls-server-name=my-cluster-name
# Set the proxy URL for the e2e cluster entry
kubectl config set-cluster e2e --proxy-url=https://1.2.3.4
```
## Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to certificate-authority file for the cluster entry in kubeconfig</p></td>
</tr>
<tr>
<td colspan="2">--embed-certs tristate[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>embed-certs for the cluster entry in kubeconfig</p></td>
</tr>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for set-cluster</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify tristate[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>insecure-skip-tls-verify for the cluster entry in kubeconfig</p></td>
</tr>
<tr>
<td colspan="2">--proxy-url string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>proxy-url for the cluster entry in kubeconfig</p></td>
</tr>
<tr>
<td colspan="2">--server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>server for the cluster entry in kubeconfig</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>tls-server-name for the cluster entry in kubeconfig</p></td>
</tr>
</tbody>
</table>
## Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use a particular kubeconfig file</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
## See Also
* [kubectl config](../) - Modify kubeconfig files
---
title: kubectl config set-context
content_type: tool-reference
weight: 30
auto_generated: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
## Synopsis
Set a context entry in kubeconfig.
Specifying a name that already exists will merge new fields on top of existing values for those fields.
```
kubectl config set-context [NAME | --current] [--cluster=cluster_nickname] [--user=user_nickname] [--namespace=namespace]
```
## Examples
```
# Set the user field on the gce context entry without touching other values
kubectl config set-context gce --user=cluster-admin
```
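The usage line above also accepts --current in place of a context name, which modifies whichever context is currently active. A common illustrative use (the namespace name here is hypothetical) is pinning a default namespace:

```
# Set the namespace field on the currently active context
kubectl config set-context --current --namespace=dev
```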
## Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>cluster for the context entry in kubeconfig</p></td>
</tr>
<tr>
<td colspan="2">--current</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Modify the current context</p></td>
</tr>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for set-context</p></td>
</tr>
<tr>
<td colspan="2">--namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>namespace for the context entry in kubeconfig</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>user for the context entry in kubeconfig</p></td>
</tr>
</tbody>
</table>
## Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use a particular kubeconfig file</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
## See Also
* [kubectl config](../) - Modify kubeconfig files
---
title: kubectl config set
content_type: tool-reference
weight: 30
auto_generated: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
##
Set an individual value in a kubeconfig file.
PROPERTY_NAME is a dot delimited name where each token represents either an attribute name or a map key. Map keys may not contain dots.
PROPERTY_VALUE is the new value you want to set. Binary fields such as 'certificate-authority-data' expect a base64 encoded string unless the --set-raw-bytes flag is used.
Specifying an attribute name that already exists will merge new fields on top of existing values.
```
kubectl config set PROPERTY_NAME PROPERTY_VALUE
```
##
```
# Set the server field on the my-cluster cluster to https://1.2.3.4
kubectl config set clusters.my-cluster.server https://1.2.3.4
# Set the certificate-authority-data field on the my-cluster cluster
kubectl config set clusters.my-cluster.certificate-authority-data $(echo "cert_data_here" | base64 -i -)
# Set the cluster field in the my-context context to my-cluster
kubectl config set contexts.my-context.cluster my-cluster
# Set the client-key-data field in the cluster-admin user using --set-raw-bytes option
kubectl config set users.cluster-admin.client-key-data cert_data_here --set-raw-bytes=true
```
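As a further illustration (using a hypothetical context named `dev` that is assumed to already exist in your kubeconfig), dot-delimited property names map directly onto the kubeconfig structure, and the written value can be checked afterwards with `kubectl config view`:
```
# Set the namespace field of the hypothetical "dev" context entry
kubectl config set contexts.dev.namespace staging

# Inspect the kubeconfig to confirm the value was written
kubectl config view
```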
##
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for set</p></td>
</tr>
<tr>
<td colspan="2">--set-raw-bytes tristate[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>When writing a []byte PROPERTY_VALUE, write the given string directly without base64 decoding.</p></td>
</tr>
</tbody>
</table>
##
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use a particular kubeconfig file</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
##
* [kubectl config](../) - Modify kubeconfig files
---
title: kubectl config use-context
content_type: tool-reference
weight: 30
auto_generated: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
##
Set the current-context in a kubeconfig file.
```
kubectl config use-context CONTEXT_NAME
```
##
```
# Use the context for the minikube cluster
kubectl config use-context minikube
```
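As an additional, illustrative follow-up (assuming the minikube context from the example above exists in your kubeconfig), the active context can be confirmed with `kubectl config current-context`:
```
# Switch to the minikube context and confirm it is now the current-context
kubectl config use-context minikube
kubectl config current-context
```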
##
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for use-context</p></td>
</tr>
</tbody>
</table>
##
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use a particular kubeconfig file</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
##
* [kubectl config](../) - Modify kubeconfig files
---
title: kubectl config rename-context
content_type: tool-reference
weight: 30
auto_generated: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
##
Renames a context from the kubeconfig file.
CONTEXT_NAME is the context name that you want to change.
NEW_NAME is the new name you want to set.
Note: If the context being renamed is the 'current-context', this field will also be updated.
```
kubectl config rename-context CONTEXT_NAME NEW_NAME
```
##
```
# Rename the context 'old-name' to 'new-name' in your kubeconfig file
kubectl config rename-context old-name new-name
```
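As an illustrative sketch (assuming 'old-name' is currently the active context), renaming the active context also updates the current-context field, which can be verified before and after the rename:
```
# Hypothetical walkthrough: "old-name" is assumed to be the active context
kubectl config current-context          # prints: old-name
kubectl config rename-context old-name new-name
kubectl config current-context          # prints: new-name
```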
##
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for rename-context</p></td>
</tr>
</tbody>
</table>
##
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use a particular kubeconfig file</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
##
* [kubectl config](../) - Modify kubeconfig files
---
title: kubectl config get-clusters
content_type: tool-reference
weight: 30
auto_generated: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
##
Display clusters defined in the kubeconfig.
```
kubectl config get-clusters [flags]
```
##
```
# List the clusters that kubectl knows about
kubectl config get-clusters
```
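For illustration (the file path below is hypothetical), the `--kubeconfig` flag documented under Options can be combined with this command to list the clusters defined in a kubeconfig file other than the default one:
```
# List clusters defined in a specific (hypothetical) kubeconfig file
kubectl config get-clusters --kubeconfig=/path/to/other/kubeconfig
```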
##
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for get-clusters</p></td>
</tr>
</tbody>
</table>
##
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use a particular kubeconfig file</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
## See Also
* [kubectl config](../) - Modify kubeconfig files
---
title: kubectl config
content_type: tool-reference
weight: 30
auto_generated: true
no_list: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
## Synopsis
Modify kubeconfig files using subcommands like "kubectl config set current-context my-context".
The loading order follows these rules:
1. If the --kubeconfig flag is set, then only that file is loaded. The flag may only be set once and no merging takes place.
2. If $KUBECONFIG environment variable is set, then it is used as a list of paths (normal path delimiting rules for your system). These paths are merged. When a value is modified, it is modified in the file that defines the stanza. When a value is created, it is created in the first file that exists. If no files in the chain exist, then it creates the last file in the list.
3. Otherwise, ${HOME}/.kube/config is used and no merging takes place.
```
kubectl config SUBCOMMAND
```
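As an illustrative sketch of the merge rules above (the file paths and context name here are assumptions, not part of the reference):

```
# Two kubeconfig files are merged; for keys that conflict, the value
# from the file listed first in $KUBECONFIG wins
export KUBECONFIG="$HOME/.kube/config:$HOME/.kube/dev-config"

# Display the merged result of both files
kubectl config view

# A change is written to the file that already defines the modified
# stanza, otherwise to the first file in the list that exists
kubectl config set-context dev-context --namespace=dev
```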
## Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for config</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use a particular kubeconfig file</p></td>
</tr>
</tbody>
</table>
## Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
## See Also
* [kubectl](../kubectl/) - kubectl controls the Kubernetes cluster manager
* [kubectl config current-context](kubectl_config_current-context/) - Display the current-context
* [kubectl config delete-cluster](kubectl_config_delete-cluster/) - Delete the specified cluster from the kubeconfig
* [kubectl config delete-context](kubectl_config_delete-context/) - Delete the specified context from the kubeconfig
* [kubectl config delete-user](kubectl_config_delete-user/) - Delete the specified user from the kubeconfig
* [kubectl config get-clusters](kubectl_config_get-clusters/) - Display clusters defined in the kubeconfig
* [kubectl config get-contexts](kubectl_config_get-contexts/) - Describe one or many contexts
* [kubectl config get-users](kubectl_config_get-users/) - Display users defined in the kubeconfig
* [kubectl config rename-context](kubectl_config_rename-context/) - Rename a context from the kubeconfig file
* [kubectl config set](kubectl_config_set/) - Set an individual value in a kubeconfig file
* [kubectl config set-cluster](kubectl_config_set-cluster/) - Set a cluster entry in kubeconfig
* [kubectl config set-context](kubectl_config_set-context/) - Set a context entry in kubeconfig
* [kubectl config set-credentials](kubectl_config_set-credentials/) - Set a user entry in kubeconfig
* [kubectl config unset](kubectl_config_unset/) - Unset an individual value in a kubeconfig file
* [kubectl config use-context](kubectl_config_use-context/) - Set the current-context in a kubeconfig file
* [kubectl config view](kubectl_config_view/) - Display merged kubeconfig settings or a specified kubeconfig file
---
title: kubectl config view
content_type: tool-reference
weight: 30
auto_generated: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
## Synopsis
Display merged kubeconfig settings or a specified kubeconfig file.
You can use --output jsonpath={...} to extract specific values using a jsonpath expression.
```
kubectl config view [flags]
```
## Examples
```
# Show merged kubeconfig settings
kubectl config view
# Show merged kubeconfig settings, raw certificate data, and exposed secrets
kubectl config view --raw
# Get the password for the e2e user
kubectl config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}'
```
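As a further illustrative use of jsonpath (the cluster name "my-cluster" is an assumption; list real names with `kubectl config get-clusters`):

```
# Print the API server URL recorded for the "my-cluster" entry
kubectl config view -o jsonpath='{.clusters[?(@.name == "my-cluster")].cluster.server}'
```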
## Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--allow-missing-template-keys Default: true</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.</p></td>
</tr>
<tr>
<td colspan="2">--flatten</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Flatten the resulting kubeconfig file into self-contained output (useful for creating portable kubeconfig files)</p></td>
</tr>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for view</p></td>
</tr>
<tr>
<td colspan="2">--merge tristate[=true] Default: true</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Merge the full hierarchy of kubeconfig files</p></td>
</tr>
<tr>
<td colspan="2">--minify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Remove all information not used by current-context from the output</p></td>
</tr>
<tr>
<td colspan="2">-o, --output string Default: "yaml"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).</p></td>
</tr>
<tr>
<td colspan="2">--raw</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Display raw byte data and sensitive data</p></td>
</tr>
<tr>
<td colspan="2">--show-managed-fields</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, keep the managedFields when printing objects in JSON or YAML format.</p></td>
</tr>
<tr>
<td colspan="2">--template string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].</p></td>
</tr>
</tbody>
</table>
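The --minify, --flatten, and --raw flags above are commonly combined to export a self-contained kubeconfig for the current context; the output filename below is illustrative, and the file contains credentials, so protect it accordingly:

```
# Write a portable kubeconfig containing only the current context,
# with certificate and token data embedded rather than referenced by path
kubectl config view --minify --flatten --raw > portable-kubeconfig.yaml
```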
## Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use a particular kubeconfig file</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
## See Also
* [kubectl config](../) - Modify kubeconfig files
---
title: kubectl uncordon
content_type: tool-reference
weight: 30
auto_generated: true
no_list: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
## Synopsis
Mark node as schedulable.
```
kubectl uncordon NODE
```
## Examples
```
# Mark node "foo" as schedulable
kubectl uncordon foo
```
## Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--dry-run string[="unchanged"] Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.</p></td>
</tr>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for uncordon</p></td>
</tr>
<tr>
<td colspan="2">-l, --selector string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Selector (label query) to filter on, supports '=', '==', and '!=' (e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.</p></td>
</tr>
</tbody>
</table>
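The --selector and --dry-run flags above can be combined to preview a bulk uncordon; the label used here is an assumption and should be replaced with one present on your nodes:

```
# Show which labelled nodes would be marked schedulable, without changing them
kubectl uncordon -l node-role.kubernetes.io/worker= --dry-run=client
```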
## Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to the kubeconfig file to use for CLI requests.</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
## See Also
* [kubectl](../kubectl/) - kubectl controls the Kubernetes cluster manager
---
title: kubectl proxy
content_type: tool-reference
weight: 30
auto_generated: true
no_list: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
## Synopsis
Creates a proxy server or application-level gateway between localhost and the Kubernetes API server. It also allows serving static content over a specified HTTP path. All incoming data enters through one port and gets forwarded to the remote Kubernetes API server port, except for the path matching the static content path.
```
kubectl proxy [--port=PORT] [--www=static-dir] [--www-prefix=prefix] [--api-prefix=prefix]
```
## Examples
```
# To proxy all of the Kubernetes API and nothing else
kubectl proxy --api-prefix=/
# To proxy only part of the Kubernetes API and also some static files
# You can get pods info with 'curl localhost:8001/api/v1/pods'
kubectl proxy --www=/my/files --www-prefix=/static/ --api-prefix=/api/
# To proxy the entire Kubernetes API at a different root
# You can get pods info with 'curl localhost:8001/custom/api/v1/pods'
kubectl proxy --api-prefix=/custom/
# Run a proxy to the Kubernetes API server on port 8011, serving static content from ./local/www/
kubectl proxy --port=8011 --www=./local/www/
# Run a proxy to the Kubernetes API server on an arbitrary local port
# The chosen port for the server will be output to stdout
kubectl proxy --port=0
# Run a proxy to the Kubernetes API server, changing the API prefix to k8s-api
# This makes e.g. the pods API available at localhost:8001/k8s-api/v1/pods/
kubectl proxy --api-prefix=/k8s-api
```
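While the proxy is running, the API is reachable with any plain HTTP client, because the proxy injects the credentials from your kubeconfig. For example, assuming the default port of 8001:
```
# List namespaces through the proxy; no bearer token is needed on the curl side
curl http://localhost:8001/api/v1/namespaces

# Check the version of the API server the proxy forwards to
curl http://localhost:8001/version
```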
## Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--accept-hosts string Default: "^localhost$,^127\.0\.0\.1$,^\[::1\]$"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Regular expression for hosts that the proxy should accept.</p></td>
</tr>
<tr>
<td colspan="2">--accept-paths string Default: "^.*"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Regular expression for paths that the proxy should accept.</p></td>
</tr>
<tr>
<td colspan="2">--address string Default: "127.0.0.1"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The IP address on which to serve.</p></td>
</tr>
<tr>
<td colspan="2">--api-prefix string Default: "/"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Prefix to serve the proxied API under.</p></td>
</tr>
<tr>
<td colspan="2">--append-server-path</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, enables automatic path appending of the kube context server path to each request.</p></td>
</tr>
<tr>
<td colspan="2">--disable-filter</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, disable request filtering in the proxy. This is dangerous, and can leave you vulnerable to XSRF attacks, when used with an accessible port.</p></td>
</tr>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for proxy</p></td>
</tr>
<tr>
<td colspan="2">--keepalive duration</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>keepalive specifies the keep-alive period for an active network connection. Set to 0 to disable keepalive.</p></td>
</tr>
<tr>
<td colspan="2">-p, --port int Default: 8001</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The port on which to run the proxy. Set to 0 to pick a random port.</p></td>
</tr>
<tr>
<td colspan="2">--reject-methods string Default: "^$"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Regular expression for HTTP methods that the proxy should reject (example --reject-methods='POST,PUT,PATCH').</p></td>
</tr>
<tr>
<td colspan="2">--reject-paths string Default: "^/api/.*/pods/.*/exec,<br />^/api/.*/pods/.*/attach"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Regular expression for paths that the proxy should reject. Paths specified here will be rejected even if accepted by --accept-paths.</p></td>
</tr>
<tr>
<td colspan="2">-u, --unix-socket string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Unix socket on which to run the proxy.</p></td>
</tr>
<tr>
<td colspan="2">-w, --www string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Also serve static files from the given directory under the specified prefix.</p></td>
</tr>
<tr>
<td colspan="2">-P, --www-prefix string Default: "/static/"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Prefix to serve static files under, if static file directory is specified.</p></td>
</tr>
</tbody>
</table>
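The `--unix-socket` flag is an alternative to binding a TCP port when the proxy should only be reachable by local processes. A sketch, with the socket path chosen purely for illustration (the host in the curl URL is a placeholder that satisfies the default `--accept-hosts` filter):
```
# Serve the proxy on a Unix domain socket instead of a TCP port
kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &

# Talk to the API through the socket
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/v1/pods
```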
## Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to the kubeconfig file to use for CLI requests.</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
## See Also
* [kubectl](../kubectl/) - kubectl controls the Kubernetes cluster manager
---
title: kubectl top pod
content_type: tool-reference
weight: 30
auto_generated: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
## Synopsis
Display resource (CPU/memory) usage of pods.
The 'top pod' command allows you to see the resource consumption of pods.
Due to the metrics pipeline delay, metrics may be unavailable for a few minutes after pod creation.
```
kubectl top pod [NAME | -l label]
```
## Examples
```
# Show metrics for all pods in the default namespace
kubectl top pod
# Show metrics for all pods in the given namespace
kubectl top pod --namespace=NAMESPACE
# Show metrics for a given pod and its containers
kubectl top pod POD_NAME --containers
# Show metrics for the pods defined by label name=myLabel
kubectl top pod -l name=myLabel
```
## Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">-A, --all-namespaces</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.</p></td>
</tr>
<tr>
<td colspan="2">--containers</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, print usage of containers within a pod.</p></td>
</tr>
<tr>
<td colspan="2">--field-selector string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Selector (field query) to filter on, supports '=', '==', and '!='. (e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type.</p></td>
</tr>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for pod</p></td>
</tr>
<tr>
<td colspan="2">--no-headers</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, print output without headers.</p></td>
</tr>
<tr>
<td colspan="2">-l, --selector string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Selector (label query) to filter on, supports '=', '==', and '!='. (e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.</p></td>
</tr>
<tr>
<td colspan="2">--sort-by string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If non-empty, sort pods list using specified field. The field can be either 'cpu' or 'memory'.</p></td>
</tr>
<tr>
<td colspan="2">--sum</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Print the sum of the resource usage</p></td>
</tr>
<tr>
<td colspan="2">--use-protocol-buffers Default: true</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Enables using protocol-buffers to access Metrics API.</p></td>
</tr>
</tbody>
</table>
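The sorting and aggregation flags above compose with the namespace selection flags. A brief sketch (the `kube-system` namespace is just an example target):
```
# Heaviest memory consumers across all namespaces, with a grand total appended
kubectl top pod -A --sort-by=memory --sum

# Per-container breakdown of one namespace, highest CPU first
kubectl top pod --namespace=kube-system --containers --sort-by=cpu
```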
## Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to the kubeconfig file to use for CLI requests.</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
## See Also
* [kubectl top](../) - Display resource (CPU/memory) usage
---
title: kubectl top node
content_type: tool-reference
weight: 30
auto_generated: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
## Synopsis
Display resource (CPU/memory) usage of nodes.
The top-node command allows you to see the resource consumption of nodes.
```
kubectl top node [NAME | -l label]
```
## Examples
```
# Show metrics for all nodes
kubectl top node
# Show metrics for a given node
kubectl top node NODE_NAME
```
## Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for node</p></td>
</tr>
<tr>
<td colspan="2">--no-headers</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, print output without headers</p></td>
</tr>
<tr>
<td colspan="2">-l, --selector string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Selector (label query) to filter on, supports '=', '==', and '!='. (e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.</p></td>
</tr>
<tr>
<td colspan="2">--show-capacity</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Print node resources based on the nodes' Capacity instead of Allocatable (the default).</p></td>
</tr>
<tr>
<td colspan="2">--sort-by string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If non-empty, sort nodes list using specified field. The field can be either 'cpu' or 'memory'.</p></td>
</tr>
<tr>
<td colspan="2">--use-protocol-buffers Default: true</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Enables using protocol-buffers to access Metrics API.</p></td>
</tr>
</tbody>
</table>
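The flags above can be combined. A brief sketch, where the label key/value is a placeholder and a working Metrics Server is assumed:
```
# Show metrics only for nodes carrying a hypothetical label, sorted by memory usage
kubectl top node -l disktype=ssd --sort-by=memory

# Report usage against node Capacity instead of Allocatable, without headers
kubectl top node --show-capacity --no-headers
```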
## Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to the kubeconfig file to use for CLI requests.</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non-memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
## See Also
* [kubectl top](../) - Display resource (CPU/memory) usage
---
title: kubectl top
content_type: tool-reference
weight: 30
auto_generated: true
no_list: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
## Synopsis
Display resource (CPU/memory) usage.
The top command allows you to see the resource consumption for nodes or pods.
This command requires Metrics Server to be correctly configured and working on the server.
```
kubectl top [flags]
```
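Because the command reads from the Metrics API, a quick way to check that the pipeline is in place is to query the metrics.k8s.io group directly; the Deployment name and namespace below assume the usual Metrics Server installation and may differ in your cluster:
```
# Check whether the metrics.k8s.io API is being served
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes

# Confirm the (assumed) metrics-server Deployment in kube-system is available
kubectl -n kube-system get deployment metrics-server
```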
## Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for top</p></td>
</tr>
</tbody>
</table>
## Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to the kubeconfig file to use for CLI requests.</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non-memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
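These inherited flags apply equally to the node and pod subcommands. A small sketch combining the context, namespace, and impersonation flags, with all names being placeholders:
```
# Query pod metrics in a specific namespace of a non-default context,
# impersonating a service account (context, namespace, and account are placeholders)
kubectl top pod -n monitoring --context staging-cluster \
  --as=system:serviceaccount:monitoring:metrics-reader
```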
## See Also
* [kubectl](../kubectl/) - kubectl controls the Kubernetes cluster manager
* [kubectl top node](kubectl_top_node/) - Display resource (CPU/memory) usage of nodes
* [kubectl top pod](kubectl_top_pod/) - Display resource (CPU/memory) usage of pods
---
title: kubectl scale
content_type: tool-reference
weight: 30
auto_generated: true
no_list: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
## Synopsis
Set a new size for a deployment, replica set, replication controller, or stateful set.
Scale also allows users to specify one or more preconditions for the scale action.
If --current-replicas or --resource-version is specified, it is validated before the scale is attempted, and it is guaranteed that the precondition holds true when the scale is sent to the server.
```
kubectl scale [--resource-version=version] [--current-replicas=count] --replicas=COUNT (-f FILENAME | TYPE NAME)
```
## Examples
```
# Scale a replica set named 'foo' to 3
kubectl scale --replicas=3 rs/foo
# Scale a resource identified by type and name specified in "foo.yaml" to 3
kubectl scale --replicas=3 -f foo.yaml
# If the deployment named mysql's current size is 2, scale mysql to 3
kubectl scale --current-replicas=2 --replicas=3 deployment/mysql
# Scale multiple replication controllers
kubectl scale --replicas=5 rc/example1 rc/example2 rc/example3
# Scale stateful set named 'web' to 3
kubectl scale --replicas=3 statefulset/web
```
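The selector and dry-run flags documented below can be combined with the patterns above; the label and resource names here are placeholders:
```
# Preview the change without applying it (client-side dry run)
kubectl scale --replicas=3 deployment/mysql --dry-run=client -o yaml

# Scale every deployment matching a hypothetical label to zero replicas
kubectl scale deployment -l app=demo --replicas=0
```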
## Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--all</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Select all resources in the namespace of the specified resource types</p></td>
</tr>
<tr>
<td colspan="2">--allow-missing-template-keys Default: true</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.</p></td>
</tr>
<tr>
<td colspan="2">--current-replicas int Default: -1</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Precondition for current size. Requires that the current size of the resource match this value in order to scale. -1 (default) for no condition.</p></td>
</tr>
<tr>
<td colspan="2">--dry-run string[="unchanged"] Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.</p></td>
</tr>
<tr>
<td colspan="2">-f, --filename strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Filename, directory, or URL to files identifying the resource to set a new size</p></td>
</tr>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for scale</p></td>
</tr>
<tr>
<td colspan="2">-k, --kustomize string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Process the kustomization directory. This flag can't be used together with -f or -R.</p></td>
</tr>
<tr>
<td colspan="2">-o, --output string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).</p></td>
</tr>
<tr>
<td colspan="2">-R, --recursive</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.</p></td>
</tr>
<tr>
<td colspan="2">--replicas int</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The new desired number of replicas. Required.</p></td>
</tr>
<tr>
<td colspan="2">--resource-version string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Precondition for resource version. Requires that the current resource version match this value in order to scale.</p></td>
</tr>
<tr>
<td colspan="2">-l, --selector string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Selector (label query) to filter on, supports '=', '==', and '!='. (e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.</p></td>
</tr>
<tr>
<td colspan="2">--show-managed-fields</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, keep the managedFields when printing objects in JSON or YAML format.</p></td>
</tr>
<tr>
<td colspan="2">--template string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].</p></td>
</tr>
<tr>
<td colspan="2">--timeout duration</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a scale operation, zero means don't wait. Any other values should contain a corresponding time unit (e.g. 1s, 2m, 3h).</p></td>
</tr>
</tbody>
</table>
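As a sketch of the --resource-version precondition, the current resourceVersion can be read with a JSONPath query and handed back to scale, so the operation only proceeds if the object has not changed in the meantime; the deployment name is a placeholder:
```
# Capture the current resourceVersion of a hypothetical deployment
RV=$(kubectl get deployment/mysql -o jsonpath='{.metadata.resourceVersion}')

# Scale only if the object is still at that resourceVersion
kubectl scale deployment/mysql --replicas=3 --resource-version="$RV"
```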
## Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to the kubeconfig file to use for CLI requests.</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non-memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
## See Also
* [kubectl](../kubectl/) - kubectl controls the Kubernetes cluster manager
server certificate validation If it is not provided the hostname used to contact the server is used p td tr tr td colspan 2 token string td tr tr td td td style line height 130 word wrap break word p Bearer token for authentication to the API server p td tr tr td colspan 2 user string td tr tr td td td style line height 130 word wrap break word p The name of the kubeconfig user to use p td tr tr td colspan 2 username string td tr tr td td td style line height 130 word wrap break word p Username for basic authentication to the API server p td tr tr td colspan 2 version version true td tr tr td td td style line height 130 word wrap break word p version version raw prints version information and quits version vX Y Z sets the reported version p td tr tr td colspan 2 warnings as errors td tr tr td td td style line height 130 word wrap break word p Treat warnings received from the server as errors and exit with a non zero exit code p td tr tbody table kubectl kubectl kubectl controls the Kubernetes cluster manager |
---
title: kubectl plugin list
content_type: tool-reference
weight: 30
auto_generated: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
## Synopsis
List all available plugin files on a user's PATH. To see only the plugin binary names, without the full path, use the --name-only flag.
Available plugin files are those that:
- are executable
- are anywhere on the user's PATH
- begin with "kubectl-"
```
kubectl plugin list [flags]
```
## Examples
```
# List all available plugins
kubectl plugin list
# List only binary names of available plugins without paths
kubectl plugin list --name-only
```
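Plugin discovery is purely name- and PATH-based, so you can verify it with nothing more than a small executable. A minimal sketch, assuming a hypothetical plugin named kubectl-hello and a directory such as ~/bin that is already on your PATH:
```
# Create a tiny plugin; the "kubectl-" prefix, the executable bit,
# and a PATH location are all that the listing checks for
cat > ~/bin/kubectl-hello <<'EOF'
#!/usr/bin/env bash
echo "hello from a kubectl plugin"
EOF
chmod +x ~/bin/kubectl-hello

# The new plugin should now show up in the listing
kubectl plugin list --name-only | grep kubectl-hello
```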
## Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for list</p></td>
</tr>
<tr>
<td colspan="2">--name-only</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, display only the binary name of each plugin, rather than its full path</p></td>
</tr>
</tbody>
</table>
## Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to the kubeconfig file to use for CLI requests.</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
## See Also
* [kubectl plugin](../) - Provides utilities for interacting with plugins
---
title: kubectl plugin
content_type: tool-reference
weight: 30
auto_generated: true
no_list: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
## Synopsis
Provides utilities for interacting with plugins.
Plugins provide extended functionality that is not part of the major command-line distribution. Please refer to the documentation and examples for more information about how to write your own plugins.
The easiest way to discover and install plugins is via the Kubernetes sub-project krew ([krew.sigs.k8s.io]). To install krew, visit https://krew.sigs.k8s.io/docs/user-guide/setup/install
```
kubectl plugin [flags]
```
## Examples
```
# List all available plugins
kubectl plugin list
# List only binary names of available plugins without paths
kubectl plugin list --name-only
```
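kubectl itself only locates an executable named kubectl-&lt;name&gt; on the PATH and runs it with the remaining arguments, so a plugin can be any program. A minimal sketch, assuming a hypothetical kubectl-whoami plugin placed in a directory (~/bin here) that is already on your PATH:
```
# A plugin is just an executable whose name starts with "kubectl-"
cat > ~/bin/kubectl-whoami <<'EOF'
#!/usr/bin/env bash
# Print the user of the current kubeconfig context
kubectl config view --minify -o jsonpath='{.contexts[0].context.user}'
echo
EOF
chmod +x ~/bin/kubectl-whoami

# kubectl dispatches the unknown subcommand to the plugin
kubectl whoami
```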
## Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for plugin</p></td>
</tr>
</tbody>
</table>
## Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to the kubeconfig file to use for CLI requests.</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
## See Also
* [kubectl](../kubectl/) - kubectl controls the Kubernetes cluster manager
* [kubectl plugin list](kubectl_plugin_list/) - List all visible plugin executables on a user's PATH
---
title: kubectl wait
content_type: tool-reference
weight: 30
auto_generated: true
no_list: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
## Synopsis
Experimental: Wait for a specific condition on one or many resources.
The command takes multiple resources and waits until the specified condition is seen in the Status field of every given resource.
Alternatively, the command can wait for the given set of resources to be deleted by providing the "delete" keyword as the value to the --for flag.
A successful message will be printed to stdout indicating when the specified condition has been met. You can use the -o option to change the output destination.
```
kubectl wait ([-f FILENAME] | resource.group/resource.name | resource.group [(-l label | --all)]) [--for=delete|--for condition=available|--for=jsonpath='{}'[=value]]
```
## Examples
```
# Wait for the pod "busybox1" to contain the status condition of type "Ready"
kubectl wait --for=condition=Ready pod/busybox1
# The default value of status condition is true; you can wait for other targets after an equal delimiter (compared after Unicode simple case folding, which is a more general form of case-insensitivity)
kubectl wait --for=condition=Ready=false pod/busybox1
# Wait for the pod "busybox1" to contain the status phase to be "Running"
kubectl wait --for=jsonpath='{.status.phase}'=Running pod/busybox1
# Wait for pod "busybox1" to be Ready
kubectl wait --for='jsonpath={.status.conditions[?(@.type=="Ready")].status}=True' pod/busybox1
# Wait for the service "loadbalancer" to have ingress
kubectl wait --for=jsonpath='{.status.loadBalancer.ingress}' service/loadbalancer
# Wait for the pod "busybox1" to be deleted, with a timeout of 60s, after having issued the "delete" command
kubectl delete pod/busybox1
kubectl wait --for=delete pod/busybox1 --timeout=60s
```
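Because kubectl wait exits with a non-zero status when the condition is not met within --timeout, it works well as a gate in scripts. A minimal sketch, assuming a hypothetical set of pods labeled app=web in the demo namespace:
```
# Block until every matching pod reports the Ready condition,
# and fail the script if that does not happen within two minutes
if kubectl wait --namespace demo \
    --selector=app=web \
    --for=condition=Ready pod \
    --timeout=120s; then
  echo "pods are ready, continuing"
else
  echo "pods never became ready" >&2
  exit 1
fi
```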
## Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--all</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Select all resources in the namespace of the specified resource types</p></td>
</tr>
<tr>
<td colspan="2">-A, --all-namespaces</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.</p></td>
</tr>
<tr>
<td colspan="2">--allow-missing-template-keys Default: true</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.</p></td>
</tr>
<tr>
<td colspan="2">--field-selector string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type.</p></td>
</tr>
<tr>
<td colspan="2">-f, --filename strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>identifying the resource.</p></td>
</tr>
<tr>
<td colspan="2">--for string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The condition to wait on: [delete|condition=condition-name[=condition-value]|jsonpath='{JSONPath expression}'=[JSONPath value]]. The default condition-value is true. Condition values are compared after Unicode simple case folding, which is a more general form of case-insensitivity.</p></td>
</tr>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for wait</p></td>
</tr>
<tr>
<td colspan="2">--local</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, annotation will NOT contact api-server but run locally.</p></td>
</tr>
<tr>
<td colspan="2">-o, --output string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).</p></td>
</tr>
<tr>
<td colspan="2">-R, --recursive Default: true</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.</p></td>
</tr>
<tr>
<td colspan="2">-l, --selector string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2)</p></td>
</tr>
<tr>
<td colspan="2">--show-managed-fields</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, keep the managedFields when printing objects in JSON or YAML format.</p></td>
</tr>
<tr>
<td colspan="2">--template string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].</p></td>
</tr>
<tr>
<td colspan="2">--timeout duration Default: 30s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up. Zero means check once and don't wait, negative means wait for a week.</p></td>
</tr>
</tbody>
</table>
## Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to the kubeconfig file to use for CLI requests.</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
## See Also
* [kubectl](../kubectl/) - kubectl controls the Kubernetes cluster manager
word wrap break word p table name p td tr tr td colspan 2 storage driver user string nbsp nbsp nbsp nbsp nbsp Default root td tr tr td td td style line height 130 word wrap break word p database username p td tr tr td colspan 2 tls server name string td tr tr td td td style line height 130 word wrap break word p Server name to use for server certificate validation If it is not provided the hostname used to contact the server is used p td tr tr td colspan 2 token string td tr tr td td td style line height 130 word wrap break word p Bearer token for authentication to the API server p td tr tr td colspan 2 user string td tr tr td td td style line height 130 word wrap break word p The name of the kubeconfig user to use p td tr tr td colspan 2 username string td tr tr td td td style line height 130 word wrap break word p Username for basic authentication to the API server p td tr tr td colspan 2 version version true td tr tr td td td style line height 130 word wrap break word p version version raw prints version information and quits version vX Y Z sets the reported version p td tr tr td colspan 2 warnings as errors td tr tr td td td style line height 130 word wrap break word p Treat warnings received from the server as errors and exit with a non zero exit code p td tr tbody table kubectl kubectl kubectl controls the Kubernetes cluster manager |
---
title: kubectl taint
content_type: tool-reference
weight: 30
auto_generated: true
no_list: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
## Synopsis
Update the taints on one or more nodes.
* A taint consists of a key, value, and effect. As an argument here, it is expressed as key=value:effect.
* The key must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores, up to 253 characters.
* Optionally, the key can begin with a DNS subdomain prefix and a single '/', like example.com/my-app.
* The value is optional. If given, it must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores, up to 63 characters.
* The effect must be NoSchedule, PreferNoSchedule or NoExecute.
* Currently, taints can only be applied to nodes.
```
kubectl taint NODE NAME KEY_1=VAL_1:TAINT_EFFECT_1 ... KEY_N=VAL_N:TAINT_EFFECT_N
```
## Examples
```
# Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule'
# If a taint with that key and effect already exists, its value is replaced as specified
kubectl taint nodes foo dedicated=special-user:NoSchedule
# Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists
kubectl taint nodes foo dedicated:NoSchedule-
# Remove from node 'foo' all the taints with key 'dedicated'
kubectl taint nodes foo dedicated-
# Add a taint with key 'dedicated' on nodes having label myLabel=X
kubectl taint node -l myLabel=X dedicated=foo:PreferNoSchedule
# Add to node 'foo' a taint with key 'bar' and no value
kubectl taint nodes foo bar:NoSchedule
```
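The examples above add, replace, and remove taints; a quick way to confirm the result is to read the node's `spec.taints` field. A minimal sketch, assuming the node `foo` and the `dedicated` key from the examples:
```
# Replace the value of an existing 'dedicated' taint on node 'foo'
# (without --overwrite, kubectl rejects updates to a taint that already exists)
kubectl taint nodes foo dedicated=batch:NoSchedule --overwrite

# Inspect the taints currently set on node 'foo'
kubectl get node foo -o jsonpath='{.spec.taints}'
```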
## Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--all</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Select all nodes in the cluster</p></td>
</tr>
<tr>
<td colspan="2">--allow-missing-template-keys Default: true</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.</p></td>
</tr>
<tr>
<td colspan="2">--dry-run string[="unchanged"] Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.</p></td>
</tr>
<tr>
<td colspan="2">--field-manager string Default: "kubectl-taint"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the manager used to track field ownership.</p></td>
</tr>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for taint</p></td>
</tr>
<tr>
<td colspan="2">-o, --output string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).</p></td>
</tr>
<tr>
<td colspan="2">--overwrite</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, allow taints to be overwritten, otherwise reject taint updates that overwrite existing taints.</p></td>
</tr>
<tr>
<td colspan="2">-l, --selector string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Selector (label query) to filter on; supports '=', '==', and '!=' (e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.</p></td>
</tr>
<tr>
<td colspan="2">--show-managed-fields</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, keep the managedFields when printing objects in JSON or YAML format.</p></td>
</tr>
<tr>
<td colspan="2">--template string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].</p></td>
</tr>
<tr>
<td colspan="2">--validate string[="strict"] Default: "strict"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Must be one of: strict (or true), warn, ignore (or false).<br/>"true" or "strict" will use a schema to validate the input and fail the request if invalid. It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.<br/>"warn" will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as "ignore" otherwise.<br/>"false" or "ignore" will not perform any schema validation, silently dropping any unknown or duplicate fields.</p></td>
</tr>
</tbody>
</table>
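As a rough illustration of how the node-selection flags in the table combine (the label `environment=batch` is hypothetical; the flags themselves are taken from the table above):
```
# Remove any 'dedicated' taint from every node in the cluster
kubectl taint nodes --all dedicated-

# Preview a taint update server-side for nodes matching a label, without persisting it
kubectl taint nodes -l environment=batch dedicated=batch:NoSchedule --overwrite --dry-run=server
```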
## Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to the kubeconfig file to use for CLI requests.</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
## See Also
* [kubectl](../kubectl/) - kubectl controls the Kubernetes cluster manager
---
title: kubectl completion
content_type: tool-reference
weight: 30
auto_generated: true
no_list: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
## Synopsis
Output shell completion code for the specified shell (bash, zsh, fish, or powershell). The shell code must be evaluated to provide interactive completion of kubectl commands. This can be done by sourcing it from the .bash_profile.
Detailed instructions on how to do this are available here:
for macOS:
https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#enable-shell-autocompletion
for linux:
https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#enable-shell-autocompletion
for windows:
https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#enable-shell-autocompletion
Note for zsh users: [1] zsh completions are only supported in versions of zsh >= 5.2.
```
kubectl completion SHELL
```
## Examples
```
# Installing bash completion on macOS using homebrew
## If running Bash 3.2 included with macOS
brew install bash-completion
## or, if running Bash 4.1+
brew install bash-completion@2
## If kubectl is installed via homebrew, this should start working immediately
## If you've installed via other means, you may need to add the completion to your completion directory
kubectl completion bash > $(brew --prefix)/etc/bash_completion.d/kubectl
# Installing bash completion on Linux
## If bash-completion is not installed on Linux, install the 'bash-completion' package
## via your distribution's package manager.
## Load the kubectl completion code for bash into the current shell
source <(kubectl completion bash)
## Write bash completion code to a file and source it from .bash_profile
kubectl completion bash > ~/.kube/completion.bash.inc
printf "
# kubectl shell completion
source '$HOME/.kube/completion.bash.inc'
" >> $HOME/.bash_profile
source $HOME/.bash_profile
# Load the kubectl completion code for zsh[1] into the current shell
source <(kubectl completion zsh)
# Set the kubectl completion code for zsh[1] to autoload on startup
kubectl completion zsh > "${fpath[1]}/_kubectl"
# Load the kubectl completion code for fish[2] into the current shell
kubectl completion fish | source
# To load completions for each session, execute once:
kubectl completion fish > ~/.config/fish/completions/kubectl.fish
# Load the kubectl completion code for powershell into the current shell
kubectl completion powershell | Out-String | Invoke-Expression
# Set kubectl completion code for powershell to run on startup
## Save completion code to a script and execute in the profile
kubectl completion powershell > $HOME\.kube\completion.ps1
Add-Content $PROFILE "$HOME\.kube\completion.ps1"
## Execute completion code in the profile
Add-Content $PROFILE "if (Get-Command kubectl -ErrorAction SilentlyContinue) {
kubectl completion powershell | Out-String | Invoke-Expression
}"
## Add completion code directly to the $PROFILE script
kubectl completion powershell >> $PROFILE
```
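If bash completion does not take effect after following the steps above, the usual cause is that the bash-completion framework itself is not loaded. A small check, assuming bash-completion v2 (the `_init_completion` function belongs to the bash-completion project, not to kubectl):
```
# Verify the bash-completion framework is loaded in the current shell;
# "not found" means bash-completion still needs to be installed and sourced
type _init_completion

# Confirm a completion function has been registered for kubectl
complete -p kubectl
```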
## Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for completion</p></td>
</tr>
</tbody>
</table>
## Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to the kubeconfig file to use for CLI requests.</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
## See Also
* [kubectl](../kubectl/) - kubectl controls the Kubernetes cluster manager
---
title: kubectl rollout undo
content_type: tool-reference
weight: 30
auto_generated: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
## Synopsis
Roll back to a previous rollout.
```
kubectl rollout undo (TYPE NAME | TYPE/NAME) [flags]
```
## Examples
```
# Roll back to the previous deployment
kubectl rollout undo deployment/abc
# Roll back to daemonset revision 3
kubectl rollout undo daemonset/abc --to-revision=3
# Roll back to the previous deployment with dry-run
kubectl rollout undo --dry-run=server deployment/abc
```
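Before undoing to a specific revision, it often helps to see which revisions the resource actually has. A short sketch using related `kubectl rollout` subcommands, reusing the `deployment/abc` name from the examples above:
```
# List the recorded revisions for the deployment
kubectl rollout history deployment/abc

# Inspect the pod template recorded for revision 3
kubectl rollout history deployment/abc --revision=3

# Roll back to revision 3, then wait for the rollout to finish
kubectl rollout undo deployment/abc --to-revision=3
kubectl rollout status deployment/abc
```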
## Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--allow-missing-template-keys Default: true</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.</p></td>
</tr>
<tr>
<td colspan="2">--dry-run string[="unchanged"] Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.</p></td>
</tr>
<tr>
<td colspan="2">-f, --filename strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Filename, directory, or URL to files identifying the resource to get from a server.</p></td>
</tr>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for undo</p></td>
</tr>
<tr>
<td colspan="2">-k, --kustomize string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Process the kustomization directory. This flag can't be used together with -f or -R.</p></td>
</tr>
<tr>
<td colspan="2">-o, --output string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).</p></td>
</tr>
<tr>
<td colspan="2">-R, --recursive</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.</p></td>
</tr>
<tr>
<td colspan="2">-l, --selector string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Selector (label query) to filter on; supports '=', '==', and '!=' (e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.</p></td>
</tr>
<tr>
<td colspan="2">--show-managed-fields</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, keep the managedFields when printing objects in JSON or YAML format.</p></td>
</tr>
<tr>
<td colspan="2">--template string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].</p></td>
</tr>
<tr>
<td colspan="2">--to-revision int</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The revision to roll back to. Defaults to 0 (last revision).</p></td>
</tr>
</tbody>
</table>
## Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to the kubeconfig file to use for CLI requests.</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
## See Also
* [kubectl rollout](../) - Manage the rollout of a resource
---
title: kubectl rollout resume
content_type: tool-reference
weight: 30
auto_generated: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
## Synopsis
Resume a paused resource.
Paused resources will not be reconciled by a controller. By resuming a resource, we allow it to be reconciled again. Currently only deployments support being resumed.
```
kubectl rollout resume RESOURCE
```
## Examples
```
# Resume an already paused deployment
kubectl rollout resume deployment/nginx
```
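A typical pause-and-resume workflow, shown as a hedged sketch (the deployment name and image tag are illustrative), is to pause the deployment, apply one or more changes, and then resume it so a single rollout picks them all up:
```
# Pause the deployment so edits do not trigger rollouts immediately
kubectl rollout pause deployment/nginx
# Make one or more changes while paused (illustrative image tag)
kubectl set image deployment/nginx nginx=nginx:1.25
# Resume the deployment; the accumulated changes roll out together
kubectl rollout resume deployment/nginx
# Optionally watch the resulting rollout until it completes
kubectl rollout status deployment/nginx
```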
## Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--allow-missing-template-keys Default: true</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.</p></td>
</tr>
<tr>
<td colspan="2">--field-manager string Default: "kubectl-rollout"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the manager used to track field ownership.</p></td>
</tr>
<tr>
<td colspan="2">-f, --filename strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Filename, directory, or URL to files identifying the resource to get from a server.</p></td>
</tr>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for resume</p></td>
</tr>
<tr>
<td colspan="2">-k, --kustomize string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Process the kustomization directory. This flag can't be used together with -f or -R.</p></td>
</tr>
<tr>
<td colspan="2">-o, --output string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).</p></td>
</tr>
<tr>
<td colspan="2">-R, --recursive</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.</p></td>
</tr>
<tr>
<td colspan="2">-l, --selector string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Selector (label query) to filter on, supports '=', '==', and '!=' (e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.</p></td>
</tr>
<tr>
<td colspan="2">--show-managed-fields</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, keep the managedFields when printing objects in JSON or YAML format.</p></td>
</tr>
<tr>
<td colspan="2">--template string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].</p></td>
</tr>
</tbody>
</table>
## Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to the kubeconfig file to use for CLI requests.</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
## See Also
* [kubectl rollout](../) - Manage the rollout of a resource
---
title: kubectl rollout restart
content_type: tool-reference
weight: 30
auto_generated: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
## Synopsis
Restart a resource.
Resource rollout will be restarted.
```
kubectl rollout restart RESOURCE
```
## Examples
```
# Restart all deployments in the test-namespace namespace
kubectl rollout restart deployment -n test-namespace
# Restart a deployment
kubectl rollout restart deployment/nginx
# Restart a daemon set
kubectl rollout restart daemonset/abc
# Restart deployments with the app=nginx label
kubectl rollout restart deployment --selector=app=nginx
```
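As a hedged illustration (the deployment name is reused from the examples above), a restart is commonly followed by watching the new rollout until it finishes:
```
# Trigger a restart and wait for the resulting rollout to complete
kubectl rollout restart deployment/nginx
kubectl rollout status deployment/nginx
```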
## Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--allow-missing-template-keys Default: true</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.</p></td>
</tr>
<tr>
<td colspan="2">--field-manager string Default: "kubectl-rollout"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the manager used to track field ownership.</p></td>
</tr>
<tr>
<td colspan="2">-f, --filename strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Filename, directory, or URL to files identifying the resource to get from a server.</p></td>
</tr>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for restart</p></td>
</tr>
<tr>
<td colspan="2">-k, --kustomize string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Process the kustomization directory. This flag can't be used together with -f or -R.</p></td>
</tr>
<tr>
<td colspan="2">-o, --output string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).</p></td>
</tr>
<tr>
<td colspan="2">-R, --recursive</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.</p></td>
</tr>
<tr>
<td colspan="2">-l, --selector string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Selector (label query) to filter on, supports '=', '==', and '!=' (e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.</p></td>
</tr>
<tr>
<td colspan="2">--show-managed-fields</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, keep the managedFields when printing objects in JSON or YAML format.</p></td>
</tr>
<tr>
<td colspan="2">--template string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].</p></td>
</tr>
</tbody>
</table>
## Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to the kubeconfig file to use for CLI requests.</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
## See Also
* [kubectl rollout](../) - Manage the rollout of a resource
---
title: kubectl rollout status
content_type: tool-reference
weight: 30
auto_generated: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
## Synopsis
Show the status of the rollout.
By default 'rollout status' will watch the status of the latest rollout until it's done. If you don't want to wait for the rollout to finish then you can use --watch=false. Note that if a new rollout starts in-between, then 'rollout status' will continue watching the latest revision. If you want to pin to a specific revision and abort if it is rolled over by another revision, use --revision=N where N is the revision you need to watch for.
```
kubectl rollout status (TYPE NAME | TYPE/NAME) [flags]
```
## Examples
```
# Watch the rollout status of a deployment
kubectl rollout status deployment/nginx
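
# Check the status once without waiting for the rollout to finish
# (uses the --watch flag documented in the options below)
kubectl rollout status deployment/nginx --watch=false

# Pin to revision 3 and abort if it is rolled over by another revision
# (uses the --revision flag documented in the options below)
kubectl rollout status deployment/nginx --revision=3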
```
## Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">-f, --filename strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Filename, directory, or URL to files identifying the resource to get from a server.</p></td>
</tr>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for status</p></td>
</tr>
<tr>
<td colspan="2">-k, --kustomize string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Process the kustomization directory. This flag can't be used together with -f or -R.</p></td>
</tr>
<tr>
<td colspan="2">-R, --recursive</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.</p></td>
</tr>
<tr>
<td colspan="2">--revision int</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Pin to a specific revision for showing its status. Defaults to 0 (last revision).</p></td>
</tr>
<tr>
<td colspan="2">-l, --selector string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Selector (label query) to filter on, supports '=', '==', and '!=' (e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.</p></td>
</tr>
<tr>
<td colspan="2">--timeout duration</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before ending watch, zero means never. Any other values should contain a corresponding time unit (e.g. 1s, 2m, 3h).</p></td>
</tr>
<tr>
<td colspan="2">-w, --watch Default: true</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Watch the status of the rollout until it's done.</p></td>
</tr>
</tbody>
</table>
## Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to the kubeconfig file to use for CLI requests.</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
## See Also
* [kubectl rollout](../) - Manage the rollout of a resource
---
title: kubectl rollout pause
content_type: tool-reference
weight: 30
auto_generated: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
## Synopsis
Mark the provided resource as paused.
Paused resources will not be reconciled by a controller. Use "kubectl rollout resume" to resume a paused resource. Currently only deployments support being paused.
```
kubectl rollout pause RESOURCE
```
## Examples
```
# Mark the nginx deployment as paused
# Any current state of the deployment will continue its function; new updates
# to the deployment will not have an effect as long as the deployment is paused
kubectl rollout pause deployment/nginx
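
# Resume the paused deployment later so that new updates take effect again
# ("kubectl rollout resume" is the companion command referenced in the description above)
kubectl rollout resume deployment/nginx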
```
## Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--allow-missing-template-keys Default: true</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.</p></td>
</tr>
<tr>
<td colspan="2">--field-manager string Default: "kubectl-rollout"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the manager used to track field ownership.</p></td>
</tr>
<tr>
<td colspan="2">-f, --filename strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Filename, directory, or URL to files identifying the resource to get from a server.</p></td>
</tr>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for pause</p></td>
</tr>
<tr>
<td colspan="2">-k, --kustomize string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Process the kustomization directory. This flag can't be used together with -f or -R.</p></td>
</tr>
<tr>
<td colspan="2">-o, --output string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).</p></td>
</tr>
<tr>
<td colspan="2">-R, --recursive</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.</p></td>
</tr>
<tr>
<td colspan="2">-l, --selector string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Selector (label query) to filter on, supports '=', '==', and '!=' (e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.</p></td>
</tr>
<tr>
<td colspan="2">--show-managed-fields</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, keep the managedFields when printing objects in JSON or YAML format.</p></td>
</tr>
<tr>
<td colspan="2">--template string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].</p></td>
</tr>
</tbody>
</table>
## Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to the kubeconfig file to use for CLI requests.</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
## See Also
* [kubectl rollout](../) - Manage the rollout of a resource
---
title: kubectl rollout
content_type: tool-reference
weight: 30
auto_generated: true
no_list: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
## Synopsis
Manage the rollout of one or many resources.
Valid resource types include:
* deployments
* daemonsets
* statefulsets
```
kubectl rollout SUBCOMMAND
```
## Examples
```
# Roll back to the previous deployment
kubectl rollout undo deployment/abc
# Check the rollout status of a daemonset
kubectl rollout status daemonset/foo
# Restart a deployment
kubectl rollout restart deployment/abc
# Restart deployments with the 'app=nginx' label
kubectl rollout restart deployment --selector=app=nginx
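
# Check the rollout status of a statefulset
# ("web" is a placeholder statefulset name; statefulsets are a valid resource type listed above)
kubectl rollout status statefulset/web

# View the revision history of a deployment, then roll back to a specific revision
# (--to-revision is a flag of "kubectl rollout undo"; see that subcommand's reference)
kubectl rollout history deployment/abc
kubectl rollout undo deployment/abc --to-revision=2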
```
## Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for rollout</p></td>
</tr>
</tbody>
</table>
## Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to the kubeconfig file to use for CLI requests.</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
## See Also
* [kubectl](../kubectl/) - kubectl controls the Kubernetes cluster manager
* [kubectl rollout history](kubectl_rollout_history/) - View rollout history
* [kubectl rollout pause](kubectl_rollout_pause/) - Mark the provided resource as paused
* [kubectl rollout restart](kubectl_rollout_restart/) - Restart a resource
* [kubectl rollout resume](kubectl_rollout_resume/) - Resume a paused resource
* [kubectl rollout status](kubectl_rollout_status/) - Show the status of the rollout
* [kubectl rollout undo](kubectl_rollout_undo/) - Undo a previous rollout
---
title: kubectl rollout history
content_type: tool-reference
weight: 30
auto_generated: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
## Synopsis
View previous rollout revisions and configurations.
```
kubectl rollout history (TYPE NAME | TYPE/NAME) [flags]
```
## Examples
```
# View the rollout history of a deployment
kubectl rollout history deployment/abc
# View the details of daemonset revision 3
kubectl rollout history daemonset/abc --revision=3
```
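The --selector and -o flags listed under Options below can also shape the query; a minimal sketch, assuming deployments labelled app=nginx and a deployment named "abc" exist in the current namespace:
```
# View the rollout history of every deployment matching a label (assumes the app=nginx label is in use)
kubectl rollout history deployment --selector=app=nginx
# View the rollout history of a deployment as YAML (assumes a deployment named "abc")
kubectl rollout history deployment/abc -o yaml
```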
## Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--allow-missing-template-keys Default: true</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.</p></td>
</tr>
<tr>
<td colspan="2">-f, --filename strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Filename, directory, or URL to files identifying the resource to get from a server.</p></td>
</tr>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for history</p></td>
</tr>
<tr>
<td colspan="2">-k, --kustomize string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Process the kustomization directory. This flag can't be used together with -f or -R.</p></td>
</tr>
<tr>
<td colspan="2">-o, --output string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).</p></td>
</tr>
<tr>
<td colspan="2">-R, --recursive</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.</p></td>
</tr>
<tr>
<td colspan="2">--revision int</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>See the details, including podTemplate of the revision specified</p></td>
</tr>
<tr>
<td colspan="2">-l, --selector string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Selector (label query) to filter on, supports '=', '==', and '!='. (e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.</p></td>
</tr>
<tr>
<td colspan="2">--show-managed-fields</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, keep the managedFields when printing objects in JSON or YAML format.</p></td>
</tr>
<tr>
<td colspan="2">--template string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].</p></td>
</tr>
</tbody>
</table>
## Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to the kubeconfig file to use for CLI requests.</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
## See Also
* [kubectl rollout](../) - Manage the rollout of a resource
---
title: kubectl apply set-last-applied
content_type: tool-reference
weight: 30
auto_generated: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
## Synopsis
Set the latest last-applied-configuration annotations by setting them to match the contents of a file. This results in the last-applied-configuration being updated as though 'kubectl apply -f <file>' was run, without updating any other parts of the object.
```
kubectl apply set-last-applied -f FILENAME
```
## Examples
```
# Set the last-applied-configuration of a resource to match the contents of a file
kubectl apply set-last-applied -f deploy.yaml
# Execute set-last-applied against each configuration file in a directory
kubectl apply set-last-applied -f path/
# Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist
kubectl apply set-last-applied -f deploy.yaml --create-annotation=true
```
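To verify the result, the related 'kubectl apply view-last-applied' subcommand prints the annotation that was written, and a client-side dry run previews the change without persisting it; a minimal sketch, assuming a deployment named "abc" managed via deploy.yaml:
```
# Print the current last-applied-configuration annotation of the deployment (YAML by default)
kubectl apply view-last-applied deployment/abc
# Preview the annotation that would be set, without sending it to the server
kubectl apply set-last-applied -f deploy.yaml --dry-run=client -o yaml
```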
## Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--allow-missing-template-keys Default: true</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.</p></td>
</tr>
<tr>
<td colspan="2">--create-annotation</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Will create the 'last-applied-configuration' annotation if the current object doesn't have one</p></td>
</tr>
<tr>
<td colspan="2">--dry-run string[="unchanged"] Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.</p></td>
</tr>
<tr>
<td colspan="2">-f, --filename strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Filename, directory, or URL to files that contain the last-applied-configuration annotations</p></td>
</tr>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for set-last-applied</p></td>
</tr>
<tr>
<td colspan="2">-o, --output string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).</p></td>
</tr>
<tr>
<td colspan="2">--show-managed-fields</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, keep the managedFields when printing objects in JSON or YAML format.</p></td>
</tr>
<tr>
<td colspan="2">--template string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].</p></td>
</tr>
</tbody>
</table>
## Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to the kubeconfig file to use for CLI requests.</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
## See Also
* [kubectl apply](../) - Apply a configuration to a resource by file name or stdin
---
title: kubectl apply edit-last-applied
content_type: tool-reference
weight: 30
auto_generated: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
## Synopsis
Edit the latest last-applied-configuration annotations of resources using the default editor.
The edit-last-applied command allows you to directly edit any API resource you can retrieve via the command-line tools. It will open the editor defined by your KUBE_EDITOR or EDITOR environment variables, or fall back to 'vi' for Linux or 'notepad' for Windows. You can edit multiple objects, although changes are applied one at a time. The command accepts file names as well as command-line arguments, although the files you point to must be previously saved versions of resources.
The default format is YAML. To edit in JSON, specify "-o json".
The flag --windows-line-endings can be used to force Windows line endings, otherwise the default for your operating system will be used.
In the event an error occurs while updating, a temporary file will be created on disk that contains your unapplied changes. The most common error when updating a resource is another editor changing the resource on the server. When this occurs, you will have to apply your changes to the newer version of the resource, or update your temporary saved copy to include the latest resource version.
```
kubectl apply edit-last-applied (RESOURCE/NAME | -f FILENAME)
```
## Examples
```
# Edit the last-applied-configuration annotations by type/name in YAML
kubectl apply edit-last-applied deployment/nginx
# Edit the last-applied-configuration annotations by file in JSON
kubectl apply edit-last-applied -f deploy.yaml -o json
```
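The editor selection described above can be overridden per invocation with the KUBE_EDITOR environment variable, and line endings can be forced with --windows-line-endings; a minimal sketch, assuming 'nano' is installed and a deployment named "nginx" exists:
```
# Edit the last-applied-configuration annotations using nano instead of the default editor
KUBE_EDITOR="nano" kubectl apply edit-last-applied deployment/nginx
# Force Windows (CRLF) line endings in the temporary file opened for editing
kubectl apply edit-last-applied deployment/nginx --windows-line-endings
```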
## Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--allow-missing-template-keys Default: true</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.</p></td>
</tr>
<tr>
<td colspan="2">--field-manager string Default: "kubectl-client-side-apply"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the manager used to track field ownership.</p></td>
</tr>
<tr>
<td colspan="2">-f, --filename strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Filename, directory, or URL to files to use to edit the resource</p></td>
</tr>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for edit-last-applied</p></td>
</tr>
<tr>
<td colspan="2">-k, --kustomize string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Process the kustomization directory. This flag can't be used together with -f or -R.</p></td>
</tr>
<tr>
<td colspan="2">-o, --output string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).</p></td>
</tr>
<tr>
<td colspan="2">-R, --recursive</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.</p></td>
</tr>
<tr>
<td colspan="2">--show-managed-fields</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, keep the managedFields when printing objects in JSON or YAML format.</p></td>
</tr>
<tr>
<td colspan="2">--template string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].</p></td>
</tr>
<tr>
<td colspan="2">--validate string[="strict"] Default: "strict"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Must be one of: strict (or true), warn, ignore (or false).<br/>"true" or "strict" will use a schema to validate the input and fail the request if invalid. It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.<br/>"warn" will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as "ignore" otherwise.<br/>"false" or "ignore" will not perform any schema validation, silently dropping any unknown or duplicate fields.</p></td>
</tr>
<tr>
<td colspan="2">--windows-line-endings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Defaults to the line ending native to your platform.</p></td>
</tr>
</tbody>
</table>
## Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to the kubeconfig file to use for CLI requests.</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
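The global flags in the table above behave the same for this subcommand as for any other kubectl command. As a minimal, hypothetical sketch of combining a few of them (the resource name and the kubeconfig user are illustrative assumptions, not values taken from this reference):
```
# Hypothetical: edit the last-applied configuration of a deployment while
# bounding each API request and selecting an explicit kubeconfig user
kubectl apply edit-last-applied deployment/nginx \
  --request-timeout=30s \
  --user=cluster-admin \
  --warnings-as-errors
```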
## See Also
* [kubectl apply](../) - Apply a configuration to a resource by file name or stdin
---
title: kubectl apply view-last-applied
content_type: tool-reference
weight: 30
auto_generated: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
## Synopsis
View the latest last-applied-configuration annotations by type/name or file.
The default output will be printed to stdout in YAML format. You can use the -o option to change the output format.
```
kubectl apply view-last-applied (TYPE [NAME | -l label] | TYPE/NAME | -f FILENAME)
```
## Examples
```
# View the last-applied-configuration annotations by type/name in YAML
kubectl apply view-last-applied deployment/nginx
# View the last-applied-configuration annotations by file in JSON
kubectl apply view-last-applied -f deploy.yaml -o json
```
## Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--all</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Select all resources in the namespace of the specified resource types</p></td>
</tr>
<tr>
<td colspan="2">-f, --filename strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Filename, directory, or URL to files that contain the last-applied-configuration annotations</p></td>
</tr>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for view-last-applied</p></td>
</tr>
<tr>
<td colspan="2">-k, --kustomize string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Process the kustomization directory. This flag can't be used together with -f or -R.</p></td>
</tr>
<tr>
<td colspan="2">-o, --output string Default: "yaml"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Output format. Must be one of (yaml, json)</p></td>
</tr>
<tr>
<td colspan="2">-R, --recursive</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.</p></td>
</tr>
<tr>
<td colspan="2">-l, --selector string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Selector (label query) to filter on, supports '=', '==', and '!='. (e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.</p></td>
</tr>
</tbody>
</table>
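The label-selector and output flags above can be combined with the type/name forms shown in the usage line. A hedged sketch (the resource type and label value are assumptions for illustration):
```
# Hypothetical: view the last-applied configuration of every deployment
# matching a label selector, printed as JSON instead of the default YAML
kubectl apply view-last-applied deployment -l app=nginx -o json

# Hypothetical: the same, but for all deployments in the current namespace
kubectl apply view-last-applied deployment --all
```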
## Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to the kubeconfig file to use for CLI requests.</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't time out requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
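These client-connection flags are inherited by every kubectl command. A minimal sketch of using two of them with this subcommand (the context and namespace names are assumptions):
```
# Hypothetical: read the last-applied configuration from an explicit
# kubeconfig context and namespace instead of the current defaults
kubectl apply view-last-applied deployment/nginx --context=staging -n web
```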
## See Also
* [kubectl apply](../) - Apply a configuration to a resource by file name or stdin
---
title: kubectl apply
content_type: tool-reference
weight: 30
auto_generated: true
no_list: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
## Synopsis
Apply a configuration to a resource by file name or stdin. The resource name must be specified. This resource will be created if it doesn't exist yet. To use 'apply', always create the resource initially with either 'apply' or 'create --save-config'.
JSON and YAML formats are accepted.
Alpha Disclaimer: the --prune functionality is not yet complete. Do not use unless you are aware of what the current state is. See https://issues.k8s.io/34274.
```
kubectl apply (-f FILENAME | -k DIRECTORY)
```
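As a hedged sketch of the workflow described above, creating the resource with 'create --save-config' first ensures the last-applied-configuration annotation exists before later applies (the file name is an assumption):
```
# Hypothetical workflow: create the resource with --save-config so the
# last-applied-configuration annotation is recorded, then manage it with apply
kubectl create -f deploy.yaml --save-config
kubectl apply -f deploy.yaml
```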
## Examples
```
# Apply the configuration in pod.json to a pod
kubectl apply -f ./pod.json
# Apply resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml
kubectl apply -k dir/
# Apply the JSON passed into stdin to a pod
cat pod.json | kubectl apply -f -
# Apply the configuration from all files that end with '.json'
kubectl apply -f '*.json'
# Note: --prune is still in Alpha
# Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx
kubectl apply --prune -f manifest.yaml -l app=nginx
# Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file
kubectl apply --prune -f manifest.yaml --all --prune-allowlist=core/v1/ConfigMap
```
## Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--all</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Select all resources in the namespace of the specified resource types.</p></td>
</tr>
<tr>
<td colspan="2">--allow-missing-template-keys Default: true</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.</p></td>
</tr>
<tr>
<td colspan="2">--cascade string[="background"] Default: "background"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Must be "background", "orphan", or "foreground". Selects the deletion cascading strategy for the dependents (e.g. Pods created by a ReplicationController). Defaults to background.</p></td>
</tr>
<tr>
<td colspan="2">--dry-run string[="unchanged"] Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.</p></td>
</tr>
<tr>
<td colspan="2">--field-manager string Default: "kubectl-client-side-apply"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the manager used to track field ownership.</p></td>
</tr>
<tr>
<td colspan="2">-f, --filename strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The files that contain the configurations to apply.</p></td>
</tr>
<tr>
<td colspan="2">--force</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, immediately remove resources from API and bypass graceful deletion. Note that immediate deletion of some resources may result in inconsistency or data loss and requires confirmation.</p></td>
</tr>
<tr>
<td colspan="2">--force-conflicts</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, server-side apply will force the changes against conflicts.</p></td>
</tr>
<tr>
<td colspan="2">--grace-period int Default: -1</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Period of time in seconds given to the resource to terminate gracefully. Ignored if negative. Set to 1 for immediate shutdown. Can only be set to 0 when --force is true (force deletion).</p></td>
</tr>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for apply</p></td>
</tr>
<tr>
<td colspan="2">-k, --kustomize string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Process a kustomization directory. This flag can't be used together with -f or -R.</p></td>
</tr>
<tr>
<td colspan="2">--openapi-patch Default: true</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, use openapi to calculate diff when the openapi spec is present and the resource can be found in it. Otherwise, fall back to using baked-in types.</p></td>
</tr>
<tr>
<td colspan="2">-o, --output string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).</p></td>
</tr>
<tr>
<td colspan="2">--overwrite Default: true</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Automatically resolve conflicts between the modified and live configuration by using values from the modified configuration</p></td>
</tr>
<tr>
<td colspan="2">--prune</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Automatically delete resource objects that do not appear in the configs and are created by either apply or create --save-config. Should be used with either -l or --all.</p></td>
</tr>
<tr>
<td colspan="2">--prune-allowlist strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Overwrite the default allowlist with &lt;group/version/kind&gt; for --prune</p></td>
</tr>
<tr>
<td colspan="2">-R, --recursive</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.</p></td>
</tr>
<tr>
<td colspan="2">-l, --selector string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Selector (label query) to filter on, supports '=', '==', and '!='. (e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.</p></td>
</tr>
<tr>
<td colspan="2">--server-side</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, apply runs in the server instead of the client.</p></td>
</tr>
<tr>
<td colspan="2">--show-managed-fields</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, keep the managedFields when printing objects in JSON or YAML format.</p></td>
</tr>
<tr>
<td colspan="2">--template string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].</p></td>
</tr>
<tr>
<td colspan="2">--timeout duration</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a delete; zero means a timeout is determined from the size of the object</p></td>
</tr>
<tr>
<td colspan="2">--validate string[="strict"] Default: "strict"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Must be one of: strict (or true), warn, ignore (or false).<br/>"true" or "strict" will use a schema to validate the input and fail the request if invalid. It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.<br/>"warn" will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as "ignore" otherwise.<br/>"false" or "ignore" will not perform any schema validation, silently dropping any unknown or duplicate fields.</p></td>
</tr>
<tr>
<td colspan="2">--wait</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, wait for resources to be gone before returning. This waits for finalizers.</p></td>
</tr>
</tbody>
</table>
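A hedged sketch of how a few of the flags above combine in practice; the manifest path matches the examples earlier on this page, while the field-manager name is an assumption:
```
# Hypothetical: preview what the server would change without persisting anything
kubectl apply -f manifest.yaml --dry-run=server

# Hypothetical: apply server-side under an explicit field manager, forcing
# through conflicts if another manager already owns some of the fields
kubectl apply -f manifest.yaml --server-side --field-manager=my-operator --force-conflicts
```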
## Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to the kubeconfig file to use for CLI requests.</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't time out requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
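As a final hedged illustration, the impersonation flags above can be combined with apply to check what another identity would be permitted to do (the user and group names are assumptions):
```
# Hypothetical: apply a manifest while impersonating another user and group,
# useful for verifying RBAC before granting access
kubectl apply -f manifest.yaml --as=jane --as-group=developers
```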
## See Also
* [kubectl](../kubectl/) - kubectl controls the Kubernetes cluster manager
* [kubectl apply edit-last-applied](kubectl_apply_edit-last-applied/) - Edit latest last-applied-configuration annotations of a resource/object
* [kubectl apply set-last-applied](kubectl_apply_set-last-applied/) - Set the last-applied-configuration annotation on a live object to match the contents of a file
* [kubectl apply view-last-applied](kubectl_apply_view-last-applied/) - View the latest last-applied-configuration annotations of a resource/object
td colspan 2 version version true td tr tr td td td style line height 130 word wrap break word p version version raw prints version information and quits version vX Y Z sets the reported version p td tr tr td colspan 2 warnings as errors td tr tr td td td style line height 130 word wrap break word p Treat warnings received from the server as errors and exit with a non zero exit code p td tr tbody table kubectl kubectl kubectl controls the Kubernetes cluster manager kubectl apply edit last applied kubectl apply edit last applied Edit latest last applied configuration annotations of a resource object kubectl apply set last applied kubectl apply set last applied Set the last applied configuration annotation on a live object to match the contents of a file kubectl apply view last applied kubectl apply view last applied View the latest last applied configuration annotations of a resource object |
kubernetes reference nolist true contenttype tool reference weight 30 title kubectl version autogenerated true | ---
title: kubectl version
content_type: tool-reference
weight: 30
auto_generated: true
no_list: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
## Synopsis
Print the client and server version information for the current context.
```
kubectl version [flags]
```
## Examples
```
# Print the client and server versions for the current context
kubectl version
```
## Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--client</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, shows client version only (no server required).</p></td>
</tr>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for version</p></td>
</tr>
<tr>
<td colspan="2">-o, --output string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>One of 'yaml' or 'json'.</p></td>
</tr>
</tbody>
</table>
## Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to the kubeconfig file to use for CLI requests.</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
## See Also
* [kubectl](../kubectl/) - kubectl controls the Kubernetes cluster manager
---
title: kubectl kustomize
content_type: tool-reference
weight: 30
auto_generated: true
no_list: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
## Synopsis
Build a set of KRM resources using a 'kustomization.yaml' file. The DIR argument must be a path to a directory containing 'kustomization.yaml', or a git repository URL with a path suffix specifying same with respect to the repository root. If DIR is omitted, '.' is assumed.
```
kubectl kustomize DIR [flags]
```
## Examples
```
# Build the current working directory
kubectl kustomize
# Build some shared configuration directory
kubectl kustomize /home/config/production
# Build from github
kubectl kustomize https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6
```
## Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as-current-user</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use the uid and gid of the command executor to run the function in the container</p></td>
</tr>
<tr>
<td colspan="2">--enable-alpha-plugins</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>enable kustomize plugins</p></td>
</tr>
<tr>
<td colspan="2">--enable-helm</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Enable use of the Helm chart inflator generator.</p></td>
</tr>
<tr>
<td colspan="2">-e, --env strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>a list of environment variables to be used by functions</p></td>
</tr>
<tr>
<td colspan="2">--helm-api-versions strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Kubernetes api versions used by Helm for Capabilities.APIVersions</p></td>
</tr>
<tr>
<td colspan="2">--helm-command string Default: "helm"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>helm command (path to executable)</p></td>
</tr>
<tr>
<td colspan="2">--helm-kube-version string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Kubernetes version used by Helm for Capabilities.KubeVersion</p></td>
</tr>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for kustomize</p></td>
</tr>
<tr>
<td colspan="2">--load-restrictor string Default: "LoadRestrictionsRootOnly"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>if set to 'LoadRestrictionsNone', local kustomizations may load files from outside their root. This does, however, break the relocatability of the kustomization.</p></td>
</tr>
<tr>
<td colspan="2">--mount strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>a list of storage options read from the filesystem</p></td>
</tr>
<tr>
<td colspan="2">--network</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>enable network access for functions that declare it</p></td>
</tr>
<tr>
<td colspan="2">--network-name string Default: "bridge"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>the docker network to run the container in</p></td>
</tr>
<tr>
<td colspan="2">-o, --output string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If specified, write output to this path.</p></td>
</tr>
</tbody>
</table>
## Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--as string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.</p></td>
</tr>
<tr>
<td colspan="2">--as-group strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.</p></td>
</tr>
<tr>
<td colspan="2">--as-uid string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>UID to impersonate for the operation.</p></td>
</tr>
<tr>
<td colspan="2">--cache-dir string Default: "$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Default cache directory</p></td>
</tr>
<tr>
<td colspan="2">--certificate-authority string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a cert file for the certificate authority</p></td>
</tr>
<tr>
<td colspan="2">--client-certificate string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client certificate file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--client-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a client key file for TLS</p></td>
</tr>
<tr>
<td colspan="2">--cluster string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig cluster to use</p></td>
</tr>
<tr>
<td colspan="2">--context string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig context to use</p></td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int Default: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.</p></td>
</tr>
<tr>
<td colspan="2">--disable-compression</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, opt-out of response compression for all requests to the server</p></td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p></td>
</tr>
<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to the kubeconfig file to use for CLI requests.</p></td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Require server version to match client version</p></td>
</tr>
<tr>
<td colspan="2">-n, --namespace string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If present, the namespace scope for this CLI request</p></td>
</tr>
<tr>
<td colspan="2">--password string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Password for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--profile string Default: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)</p></td>
</tr>
<tr>
<td colspan="2">--profile-output string Default: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Name of the file to write the profile to</p></td>
</tr>
<tr>
<td colspan="2">--request-timeout string Default: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.</p></td>
</tr>
<tr>
<td colspan="2">-s, --server string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address and port of the Kubernetes API server</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration Default: 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string Default: "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string Default: "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database host:port</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database password</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>use secure connection with database</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string Default: "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>table name</p></td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string Default: "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>database username</p></td>
</tr>
<tr>
<td colspan="2">--tls-server-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Bearer token for authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--user string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of the kubeconfig user to use</p></td>
</tr>
<tr>
<td colspan="2">--username string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Username for basic authentication to the API server</p></td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version</p></td>
</tr>
<tr>
<td colspan="2">--warnings-as-errors</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Treat warnings received from the server as errors and exit with a non-zero exit code</p></td>
</tr>
</tbody>
</table>
## See Also
* [kubectl](../kubectl/) - kubectl controls the Kubernetes cluster manager
| kubernetes reference | title kubectl kustomize content type tool reference weight 30 auto generated true no list true The file is auto generated from the Go source code of the component using a generic generator https github com kubernetes sigs reference docs To learn how to generate the reference documentation please read Contributing to the reference documentation docs contribute generate ref docs To update the reference content please follow the Contributing upstream docs contribute generate ref docs contribute upstream guide You can file document formatting bugs against the reference docs https github com kubernetes sigs reference docs project Build a set of KRM resources using a kustomization yaml file The DIR argument must be a path to a directory containing kustomization yaml or a git repository URL with a path suffix specifying same with respect to the repository root If DIR is omitted is assumed kubectl kustomize DIR flags Build the current working directory kubectl kustomize Build some shared configuration directory kubectl kustomize home config production Build from github kubectl kustomize https github com kubernetes sigs kustomize git examples helloWorld ref v1 0 6 table style width 100 table layout fixed colgroup col span 1 style width 10px col span 1 colgroup tbody tr td colspan 2 as current user td tr tr td td td style line height 130 word wrap break word p use the uid and gid of the command executor to run the function in the container p td tr tr td colspan 2 enable alpha plugins td tr tr td td td style line height 130 word wrap break word p enable kustomize plugins p td tr tr td colspan 2 enable helm td tr tr td td td style line height 130 word wrap break word p Enable use of the Helm chart inflator generator p td tr tr td colspan 2 e env strings td tr tr td td td style line height 130 word wrap break word p a list of environment variables to be used by functions p td tr tr td colspan 2 helm api versions strings td tr tr td td td style line height 130 word wrap break word p Kubernetes api versions used by Helm for Capabilities APIVersions p td tr tr td colspan 2 helm command string nbsp nbsp nbsp nbsp nbsp Default helm td tr tr td td td style line height 130 word wrap break word p helm command path to executable p td tr tr td colspan 2 helm kube version string td tr tr td td td style line height 130 word wrap break word p Kubernetes version used by Helm for Capabilities KubeVersion p td tr tr td colspan 2 h help td tr tr td td td style line height 130 word wrap break word p help for kustomize p td tr tr td colspan 2 load restrictor string nbsp nbsp nbsp nbsp nbsp Default LoadRestrictionsRootOnly td tr tr td td td style line height 130 word wrap break word p if set to LoadRestrictionsNone local kustomizations may load files from outside their root This does however break the relocatability of the kustomization p td tr tr td colspan 2 mount strings td tr tr td td td style line height 130 word wrap break word p a list of storage options read from the filesystem p td tr tr td colspan 2 network td tr tr td td td style line height 130 word wrap break word p enable network access for functions that declare it p td tr tr td colspan 2 network name string nbsp nbsp nbsp nbsp nbsp Default bridge td tr tr td td td style line height 130 word wrap break word p the docker network to run the container in p td tr tr td colspan 2 o output string td tr tr td td td style line height 130 word wrap break word p If specified write output to this path p td tr tbody table table style width 100 
table layout fixed colgroup col span 1 style width 10px col span 1 colgroup tbody tr td colspan 2 as string td tr tr td td td style line height 130 word wrap break word p Username to impersonate for the operation User could be a regular user or a service account in a namespace p td tr tr td colspan 2 as group strings td tr tr td td td style line height 130 word wrap break word p Group to impersonate for the operation this flag can be repeated to specify multiple groups p td tr tr td colspan 2 as uid string td tr tr td td td style line height 130 word wrap break word p UID to impersonate for the operation p td tr tr td colspan 2 cache dir string nbsp nbsp nbsp nbsp nbsp Default HOME kube cache td tr tr td td td style line height 130 word wrap break word p Default cache directory p td tr tr td colspan 2 certificate authority string td tr tr td td td style line height 130 word wrap break word p Path to a cert file for the certificate authority p td tr tr td colspan 2 client certificate string td tr tr td td td style line height 130 word wrap break word p Path to a client certificate file for TLS p td tr tr td colspan 2 client key string td tr tr td td td style line height 130 word wrap break word p Path to a client key file for TLS p td tr tr td colspan 2 cluster string td tr tr td td td style line height 130 word wrap break word p The name of the kubeconfig cluster to use p td tr tr td colspan 2 context string td tr tr td td td style line height 130 word wrap break word p The name of the kubeconfig context to use p td tr tr td colspan 2 default not ready toleration seconds int nbsp nbsp nbsp nbsp nbsp Default 300 td tr tr td td td style line height 130 word wrap break word p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration p td tr tr td colspan 2 default unreachable toleration seconds int nbsp nbsp nbsp nbsp nbsp Default 300 td tr tr td td td style line height 130 word wrap break word p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration p td tr tr td colspan 2 disable compression td tr tr td td td style line height 130 word wrap break word p If true opt out of response compression for all requests to the server p td tr tr td colspan 2 insecure skip tls verify td tr tr td td td style line height 130 word wrap break word p If true the server s certificate will not be checked for validity This will make your HTTPS connections insecure p td tr tr td colspan 2 kubeconfig string td tr tr td td td style line height 130 word wrap break word p Path to the kubeconfig file to use for CLI requests p td tr tr td colspan 2 match server version td tr tr td td td style line height 130 word wrap break word p Require server version to match client version p td tr tr td colspan 2 n namespace string td tr tr td td td style line height 130 word wrap break word p If present the namespace scope for this CLI request p td tr tr td colspan 2 password string td tr tr td td td style line height 130 word wrap break word p Password for basic authentication to the API server p td tr tr td colspan 2 profile string nbsp nbsp nbsp nbsp nbsp Default none td tr tr td td td style line height 130 word wrap break word p Name of profile to capture One of none cpu heap goroutine threadcreate block mutex p td tr tr td colspan 2 profile output string nbsp nbsp nbsp nbsp nbsp Default profile pprof td tr tr td td td style 
line height 130 word wrap break word p Name of the file to write the profile to p td tr tr td colspan 2 request timeout string nbsp nbsp nbsp nbsp nbsp Default 0 td tr tr td td td style line height 130 word wrap break word p The length of time to wait before giving up on a single server request Non zero values should contain a corresponding time unit e g 1s 2m 3h A value of zero means don t timeout requests p td tr tr td colspan 2 s server string td tr tr td td td style line height 130 word wrap break word p The address and port of the Kubernetes API server p td tr tr td colspan 2 storage driver buffer duration duration nbsp nbsp nbsp nbsp nbsp Default 1m0s td tr tr td td td style line height 130 word wrap break word p Writes in the storage driver will be buffered for this duration and committed to the non memory backends as a single transaction p td tr tr td colspan 2 storage driver db string nbsp nbsp nbsp nbsp nbsp Default cadvisor td tr tr td td td style line height 130 word wrap break word p database name p td tr tr td colspan 2 storage driver host string nbsp nbsp nbsp nbsp nbsp Default localhost 8086 td tr tr td td td style line height 130 word wrap break word p database host port p td tr tr td colspan 2 storage driver password string nbsp nbsp nbsp nbsp nbsp Default root td tr tr td td td style line height 130 word wrap break word p database password p td tr tr td colspan 2 storage driver secure td tr tr td td td style line height 130 word wrap break word p use secure connection with database p td tr tr td colspan 2 storage driver table string nbsp nbsp nbsp nbsp nbsp Default stats td tr tr td td td style line height 130 word wrap break word p table name p td tr tr td colspan 2 storage driver user string nbsp nbsp nbsp nbsp nbsp Default root td tr tr td td td style line height 130 word wrap break word p database username p td tr tr td colspan 2 tls server name string td tr tr td td td style line height 130 word wrap break word p Server name to use for server certificate validation If it is not provided the hostname used to contact the server is used p td tr tr td colspan 2 token string td tr tr td td td style line height 130 word wrap break word p Bearer token for authentication to the API server p td tr tr td colspan 2 user string td tr tr td td td style line height 130 word wrap break word p The name of the kubeconfig user to use p td tr tr td colspan 2 username string td tr tr td td td style line height 130 word wrap break word p Username for basic authentication to the API server p td tr tr td colspan 2 version version true td tr tr td td td style line height 130 word wrap break word p version version raw prints version information and quits version vX Y Z sets the reported version p td tr tr td colspan 2 warnings as errors td tr tr td td td style line height 130 word wrap break word p Treat warnings received from the server as errors and exit with a non zero exit code p td tr tbody table kubectl kubectl kubectl controls the Kubernetes cluster manager |
kubernetes reference liggitt deads2k erictune contenttype concept weight 10 reviewers title Authenticating lavalamp | ---
reviewers:
- erictune
- lavalamp
- deads2k
- liggitt
title: Authenticating
content_type: concept
weight: 10
---
<!-- overview -->
This page provides an overview of authentication.
<!-- body -->
## Users in Kubernetes
All Kubernetes clusters have two categories of users: service accounts managed
by Kubernetes, and normal users.
It is assumed that a cluster-independent service manages normal users in the following ways:
- an administrator distributing private keys
- a user store like Keystone or Google Accounts
- a file with a list of usernames and passwords
In this regard, _Kubernetes does not have objects which represent normal user accounts._
Normal users cannot be added to a cluster through an API call.
Even though a normal user cannot be added via an API call, any user that
presents a valid certificate signed by the cluster's certificate authority
(CA) is considered authenticated. In this configuration, Kubernetes determines
the username from the common name field in the 'subject' of the cert (e.g.,
"/CN=bob"). From there, the role based access control (RBAC) sub-system would
determine whether the user is authorized to perform a specific operation on a
resource. For more details, refer to the normal users topic in
[certificate request](/docs/reference/access-authn-authz/certificate-signing-requests/#normal-user).
In contrast, service accounts are users managed by the Kubernetes API. They are
bound to specific namespaces, and created automatically by the API server or
manually through API calls. Service accounts are tied to a set of credentials
stored as `Secrets`, which are mounted into pods allowing in-cluster processes
to talk to the Kubernetes API.
API requests are tied to either a normal user or a service account, or are treated
as [anonymous requests](#anonymous-requests). This means every process inside or outside the cluster, from
a human user typing `kubectl` on a workstation, to `kubelets` on nodes, to members
of the control plane, must authenticate when making requests to the API server,
or be treated as an anonymous user.
## Authentication strategies
Kubernetes uses client certificates, bearer tokens, or an authenticating proxy to
authenticate API requests through authentication plugins. As HTTP requests are
made to the API server, plugins attempt to associate the following attributes
with the request:
* Username: a string which identifies the end user. Common values might be `kube-admin` or `jane@example.com`.
* UID: a string which identifies the end user and attempts to be more consistent and unique than username.
* Groups: a set of strings, each of which indicates the user's membership in a named logical collection of users.
Common values might be `system:masters` or `devops-team`.
* Extra fields: a map of strings to list of strings which holds additional information authorizers may find useful.
All values are opaque to the authentication system and only hold significance
when interpreted by an [authorizer](/docs/reference/access-authn-authz/authorization/).
You can enable multiple authentication methods at once. You should usually use at least two methods:
- service account tokens for service accounts
- at least one other method for user authentication.
When multiple authenticator modules are enabled, the first module
to successfully authenticate the request short-circuits evaluation.
The API server does not guarantee the order authenticators run in.
The `system:authenticated` group is included in the list of groups for all authenticated users.
Integrations with other authentication protocols (LDAP, SAML, Kerberos, alternate x509 schemes, etc)
can be accomplished using an [authenticating proxy](#authenticating-proxy) or the
[authentication webhook](#webhook-token-authentication).
### X509 client certificates
Client certificate authentication is enabled by passing the `--client-ca-file=SOMEFILE`
option to API server. The referenced file must contain one or more certificate authorities
to use to validate client certificates presented to the API server. If a client certificate
is presented and verified, the common name of the subject is used as the user name for the
request. As of Kubernetes 1.4, client certificates can also indicate a user's group memberships
using the certificate's organization fields. To include multiple group memberships for a user,
include multiple organization fields in the certificate.
For example, using the `openssl` command line tool to generate a certificate signing request:
```bash
openssl req -new -key jbeda.pem -out jbeda-csr.pem -subj "/CN=jbeda/O=app1/O=app2"
```
This would create a CSR for the username "jbeda", belonging to two groups, "app1" and "app2".
See [Managing Certificates](/docs/tasks/administer-cluster/certificates/) for how to generate a client cert.
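On the server side, a minimal sketch of enabling this (the CA bundle path is illustrative, and the many other flags a real API server needs are omitted):
```bash
# Illustrative only: the API server will trust client certificates signed by any CA in this bundle
kube-apiserver --client-ca-file=/etc/kubernetes/pki/ca.crt
```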
### Static token file
The API server reads bearer tokens from a file when given the `--token-auth-file=SOMEFILE` option
on the command line. Currently, tokens last indefinitely, and the token list cannot be
changed without restarting the API server.
The token file is a csv file with a minimum of 3 columns: token, user name, user uid,
followed by optional group names.
If you have more than one group, the column must be double quoted e.g.
```conf
token,user,uid,"group1,group2,group3"
```
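For illustration, a hypothetical token file might contain rows like the following (all tokens, user names, and UIDs are made up; the group column only needs quoting when more than one group is listed):
```conf
0123456789abcdef0123456789abcdef,alice,1001
fedcba9876543210fedcba9876543210,bob,1002,"developers,qa-team"
```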
#### Putting a bearer token in a request
When using bearer token authentication from an http client, the API
server expects an `Authorization` header with a value of `Bearer
<token>`. The bearer token must be a character sequence that can be
put in an HTTP header value using no more than the encoding and
quoting facilities of HTTP. For example: if the bearer token is
`31ada4fd-adec-460c-809a-9e56ceb75269` then it would appear in an HTTP
header as shown below.
```http
Authorization: Bearer 31ada4fd-adec-460c-809a-9e56ceb75269
```
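For example, a hedged `curl` sketch that sends this header (the `APISERVER` URL and the CA path are placeholders for your cluster's values):
```bash
# Lists namespaces using the illustrative bearer token shown above
curl --cacert /path/to/ca.crt \
  -H "Authorization: Bearer 31ada4fd-adec-460c-809a-9e56ceb75269" \
  "${APISERVER}/api/v1/namespaces"
```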
### Bootstrap tokens
To allow for streamlined bootstrapping for new clusters, Kubernetes includes a
dynamically-managed Bearer token type called a *Bootstrap Token*. These tokens
are stored as Secrets in the `kube-system` namespace, where they can be
dynamically managed and created. Controller Manager contains a TokenCleaner
controller that deletes bootstrap tokens as they expire.
The tokens are of the form `[a-z0-9]{6}.[a-z0-9]{16}`. The first component is a
Token ID and the second component is the Token Secret. You specify the token
in an HTTP header as follows:
```http
Authorization: Bearer 781292.db7bc3a58fc5f07e
```
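Since bootstrap tokens live as Secrets in the `kube-system` namespace, a hedged sketch of such a Secret might look like the following (values reuse the illustrative token above; see the Bootstrap Tokens reference linked below for the authoritative schema):
```yaml
apiVersion: v1
kind: Secret
metadata:
  # The name must be of the form "bootstrap-token-<token id>"
  name: bootstrap-token-781292
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: "781292"
  token-secret: db7bc3a58fc5f07e
  expiration: "2030-01-01T00:00:00Z"       # optional RFC 3339 expiry
  usage-bootstrap-authentication: "true"   # allow the token to authenticate to the API server
```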
You must enable the Bootstrap Token Authenticator with the
`--enable-bootstrap-token-auth` flag on the API Server. You must enable
the TokenCleaner controller via the `--controllers` flag on the Controller
Manager. This is done with something like `--controllers=*,tokencleaner`.
`kubeadm` will do this for you if you are using it to bootstrap a cluster.
The authenticator authenticates as `system:bootstrap:<Token ID>`. It is
included in the `system:bootstrappers` group. The naming and groups are
intentionally limited to discourage users from using these tokens past
bootstrapping. The user names and group can be used (and are used by `kubeadm`)
to craft the appropriate authorization policies to support bootstrapping a
cluster.
Please see [Bootstrap Tokens](/docs/reference/access-authn-authz/bootstrap-tokens/) for in depth
documentation on the Bootstrap Token authenticator and controllers along with
how to manage these tokens with `kubeadm`.
### Service account tokens
A service account is an automatically enabled authenticator that uses signed
bearer tokens to verify requests. The plugin takes two optional flags:
* `--service-account-key-file` File containing PEM-encoded x509 RSA or ECDSA
private or public keys, used to verify ServiceAccount tokens. The specified file
can contain multiple keys, and the flag can be specified multiple times with
different files. If unspecified, --tls-private-key-file is used.
* `--service-account-lookup` If enabled, tokens which are deleted from the API will be revoked.
Service accounts are usually created automatically by the API server and
associated with pods running in the cluster through the `ServiceAccount`
[Admission Controller](/docs/reference/access-authn-authz/admission-controllers/). Bearer tokens are
mounted into pods at well-known locations, and allow in-cluster processes to
talk to the API server. Accounts may be explicitly associated with pods using the
`serviceAccountName` field of a `PodSpec`.
`serviceAccountName` is usually omitted because this is done automatically.
```yaml
apiVersion: apps/v1 # this apiVersion is relevant as of Kubernetes 1.9
kind: Deployment
metadata:
name: nginx-deployment
namespace: default
spec:
replicas: 3
template:
metadata:
# ...
spec:
serviceAccountName: bob-the-bot
containers:
- name: nginx
image: nginx:1.14.2
```
Service account bearer tokens are perfectly valid to use outside the cluster and
can be used to create identities for long standing jobs that wish to talk to the
Kubernetes API. To manually create a service account, use the `kubectl create
serviceaccount (NAME)` command. This creates a service account in the current
namespace.
```bash
kubectl create serviceaccount jenkins
```
```none
serviceaccount/jenkins created
```
Create an associated token:
```bash
kubectl create token jenkins
```
```none
eyJhbGciOiJSUzI1NiIsImtp...
```
The created token is a signed JSON Web Token (JWT).
The signed JWT can be used as a bearer token to authenticate as the given service
account. See [above](#putting-a-bearer-token-in-a-request) for how the token is included
in a request. Normally these tokens are mounted into pods for in-cluster access to
the API server, but can be used from outside the cluster as well.
Service accounts authenticate with the username `system:serviceaccount:(NAMESPACE):(SERVICEACCOUNT)`,
and are assigned to the groups `system:serviceaccounts` and `system:serviceaccounts:(NAMESPACE)`.
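As a quick check, a hedged sketch using the token created above (assumes a recent `kubectl` that includes `kubectl auth whoami`; note that client certificates configured in your kubeconfig can take precedence over `--token`):
```bash
# Request a short-lived token for the "jenkins" ServiceAccount and inspect
# the identity the API server associates with it
TOKEN=$(kubectl create token jenkins)
kubectl auth whoami --token="$TOKEN"
# Expect a username of system:serviceaccount:default:jenkins
```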
Because service account tokens can also be stored in Secret API objects, any user with
write access to Secrets can request a token, and any user with read access to those
Secrets can authenticate as the service account. Be cautious when granting permissions
to service accounts and read or write capabilities for Secrets.
### OpenID Connect Tokens
[OpenID Connect](https://openid.net/connect/) is a flavor of OAuth2 supported by
some OAuth2 providers, notably Microsoft Entra ID, Salesforce, and Google.
The protocol's main extension of OAuth2 is an additional field returned with
the access token called an [ID Token](https://openid.net/specs/openid-connect-core-1_0.html#IDToken).
This token is a JSON Web Token (JWT) with well known fields, such as a user's
email, signed by the server.
To identify the user, the authenticator uses the `id_token` (not the `access_token`)
from the OAuth2 [token response](https://openid.net/specs/openid-connect-core-1_0.html#TokenResponse)
as a bearer token. See [above](#putting-a-bearer-token-in-a-request) for how the token
is included in a request.
```mermaid
sequenceDiagram
    participant user as User
    participant idp as Identity Provider
    participant kube as kubectl
    participant api as API Server

    user ->> idp: 1. Log in to IdP
    activate idp
    idp -->> user: 2. Provide access_token,<br>id_token, and refresh_token
    deactivate idp
    activate user
    user ->> kube: 3. Call kubectl<br>with --token being the id_token<br>OR add tokens to .kube/config
    deactivate user
    activate kube
    kube ->> api: 4. Authorization: Bearer...
    deactivate kube
    activate api
    api ->> api: 5. Is JWT signature valid?
    api ->> api: 6. Has the JWT expired? (iat+exp)
    api ->> api: 7. User authorized?
    api -->> kube: 8. Authorized: Perform<br>action and return result
    deactivate api
    activate kube
    kube --x user: 9. Return result
    deactivate kube
```
1. Log in to your identity provider
1. Your identity provider will provide you with an `access_token`, `id_token` and a `refresh_token`
1. When using `kubectl`, use your `id_token` with the `--token` flag or add it directly to your `kubeconfig`
1. `kubectl` sends your `id_token` in a header called Authorization to the API server
1. The API server will make sure the JWT signature is valid
1. Check to make sure the `id_token` hasn't expired
   - Perform claim and/or user validation if CEL expressions are configured with `AuthenticationConfiguration`
1. Make sure the user is authorized
1. Once authorized the API server returns a response to `kubectl`
1. `kubectl` provides feedback to the user
Since all of the data needed to validate who you are is in the `id_token`, Kubernetes doesn't need to
"phone home" to the identity provider. In a model where every request is stateless this provides a
very scalable solution for authentication. It does offer a few challenges:
1. Kubernetes has no "web interface" to trigger the authentication process. There is no browser or
interface to collect credentials which is why you need to authenticate to your identity provider first.
1. The `id_token` can't be revoked; like a certificate, it should be short-lived (only a few minutes),
   which makes it very annoying to have to get a new token every few minutes.
1. To authenticate to the Kubernetes dashboard, you must use the `kubectl proxy` command or a reverse proxy
that injects the `id_token`.
#### Configuring the API Server
##### Using flags
To enable the plugin, configure the following flags on the API server:
| Parameter | Description | Example | Required |
| --------- | ----------- | ------- | ------- |
| `--oidc-issuer-url` | URL of the provider that allows the API server to discover public signing keys. Only URLs that use the `https://` scheme are accepted. This is typically the provider's discovery URL, changed to have an empty path. | If the issuer's OIDC discovery URL is `https://accounts.provider.example/.well-known/openid-configuration`, the value should be `https://accounts.provider.example` | Yes |
| `--oidc-client-id` | A client id that all tokens must be issued for. | kubernetes | Yes |
| `--oidc-username-claim` | JWT claim to use as the user name. By default `sub`, which is expected to be a unique identifier of the end user. Admins can choose other claims, such as `email` or `name`, depending on their provider. However, claims other than `email` will be prefixed with the issuer URL to prevent naming clashes with other plugins. | sub | No |
| `--oidc-username-prefix` | Prefix prepended to username claims to prevent clashes with existing names (such as `system:` users). For example, the value `oidc:` will create usernames like `oidc:jane.doe`. If this flag isn't provided and `--oidc-username-claim` is a value other than `email` the prefix defaults to `( Issuer URL )#` where `( Issuer URL )` is the value of `--oidc-issuer-url`. The value `-` can be used to disable all prefixing. | `oidc:` | No |
| `--oidc-groups-claim` | JWT claim to use as the user's group. If the claim is present it must be an array of strings. | groups | No |
| `--oidc-groups-prefix` | Prefix prepended to group claims to prevent clashes with existing names (such as `system:` groups). For example, the value `oidc:` will create group names like `oidc:engineering` and `oidc:infra`. | `oidc:` | No |
| `--oidc-required-claim` | A key=value pair that describes a required claim in the ID Token. If set, the claim is verified to be present in the ID Token with a matching value. Repeat this flag to specify multiple claims. | `claim=value` | No |
| `--oidc-ca-file` | The path to the certificate for the CA that signed your identity provider's web certificate. Defaults to the host's root CAs. | `/etc/kubernetes/ssl/kc-ca.pem` | No |
| `--oidc-signing-algs` | The signing algorithms accepted. Default is "RS256". | `RS512` | No |
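For illustration, a sketch of how some of these flags might be combined on the kube-apiserver
command line; the issuer URL, client ID, claim names, and file path are placeholders, not
recommendations:

```bash
kube-apiserver \
  --oidc-issuer-url=https://accounts.provider.example \
  --oidc-client-id=kubernetes \
  --oidc-username-claim=email \
  --oidc-username-prefix=oidc: \
  --oidc-groups-claim=groups \
  --oidc-groups-prefix=oidc: \
  --oidc-ca-file=/etc/kubernetes/ssl/kc-ca.pem
# (combine with the rest of your usual kube-apiserver flags)
```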
##### Authentication configuration from a file {#using-authentication-configuration}
A JWT authenticator authenticates Kubernetes users using JWT-compliant tokens. The authenticator
attempts to parse a raw ID token and verifies that it was signed by the configured issuer.
The public key used to verify the signature is discovered from the issuer's public endpoint using OIDC discovery.
The minimum valid JWT payload must contain the following claims:
```json
{
"iss": "https://example.com", // must match the issuer.url
"aud": ["my-app"], // at least one of the entries in issuer.audiences must match the "aud" claim in presented JWTs.
"exp": 1234567890, // token expiration as Unix time (the number of seconds elapsed since January 1, 1970 UTC)
"<username-claim>": "user" // this is the username claim configured in the claimMappings.username.claim or claimMappings.username.expression
}
```
The configuration file approach allows you to configure multiple JWT authenticators, each with a unique
`issuer.url` and `issuer.discoveryURL`. The configuration file even allows you to specify [CEL](/docs/reference/using-api/cel/)
expressions to map claims to user attributes, and to validate claims and user information.
The API server also automatically reloads the authenticators when the configuration file is modified.
You can use `apiserver_authentication_config_controller_automatic_reload_last_timestamp_seconds` metric
to monitor the last time the configuration was reloaded by the API server.
You must specify the path to the authentication configuration using the `--authentication-config` flag
on the API server. If you want to use command line flags instead of the configuration file, those will
continue to work as-is. To access new capabilities such as configuring multiple authenticators or
setting multiple audiences for an issuer, switch to using the configuration file.
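As a minimal sketch of wiring this up (the file path below is only an example), the flag is passed
directly to the kube-apiserver:

```bash
kube-apiserver \
  --authentication-config=/etc/kubernetes/authentication-config.yaml
# Do not combine --authentication-config with any --oidc-* flags (see the note below).
```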
For Kubernetes v, the structured authentication configuration file format
is beta-level, and the mechanism for using that configuration is also beta. Provided you didn't specifically
disable the `StructuredAuthenticationConfiguration`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) for your cluster,
you can turn on structured authentication by specifying the `--authentication-config` command line
argument to the kube-apiserver. An example of the structured authentication configuration file is shown below.
If you specify `--authentication-config` along with any of the `--oidc-*` command line arguments, this is
a misconfiguration. In this situation, the API server reports an error and then immediately exits.
If you want to switch to using structured authentication configuration, you have to remove the `--oidc-*`
command line arguments, and use the configuration file instead.
```yaml
---
#
# CAUTION: this is an example configuration.
# Do not use this for your own cluster!
#
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
# list of authenticators to authenticate Kubernetes users using JWT compliant tokens.
# the maximum number of allowed authenticators is 64.
jwt:
- issuer:
# url must be unique across all authenticators.
# url must not conflict with issuer configured in --service-account-issuer.
url: https://example.com # Same as --oidc-issuer-url.
# discoveryURL, if specified, overrides the URL used to fetch discovery
# information instead of using "{url}/.well-known/openid-configuration".
# The exact value specified is used, so "/.well-known/openid-configuration"
# must be included in discoveryURL if needed.
#
# The "issuer" field in the fetched discovery information must match the "issuer.url" field
# in the AuthenticationConfiguration and will be used to validate the "iss" claim in the presented JWT.
# This is for scenarios where the well-known and jwks endpoints are hosted at a different
# location than the issuer (such as locally in the cluster).
# discoveryURL must be different from url if specified and must be unique across all authenticators.
discoveryURL: https://discovery.example.com/.well-known/openid-configuration
# PEM encoded CA certificates used to validate the connection when fetching
# discovery information. If not set, the system verifier will be used.
# Same value as the content of the file referenced by the --oidc-ca-file flag.
certificateAuthority: <PEM encoded CA certificates>
# audiences is the set of acceptable audiences the JWT must be issued to.
# At least one of the entries must match the "aud" claim in presented JWTs.
audiences:
- my-app # Same as --oidc-client-id.
- my-other-app
# this is required to be set to "MatchAny" when multiple audiences are specified.
audienceMatchPolicy: MatchAny
# rules applied to validate token claims to authenticate users.
claimValidationRules:
# Same as --oidc-required-claim key=value.
- claim: hd
requiredValue: example.com
# Instead of claim and requiredValue, you can use expression to validate the claim.
# expression is a CEL expression that evaluates to a boolean.
# all the expressions must evaluate to true for validation to succeed.
- expression: 'claims.hd == "example.com"'
# Message customizes the error message seen in the API server logs when the validation fails.
message: the hd claim must be set to example.com
- expression: 'claims.exp - claims.nbf <= 86400'
message: total token lifetime must not exceed 24 hours
claimMappings:
# username represents an option for the username attribute.
# This is the only required attribute.
username:
# Same as --oidc-username-claim. Mutually exclusive with username.expression.
claim: "sub"
# Same as --oidc-username-prefix. Mutually exclusive with username.expression.
# if username.claim is set, username.prefix is required.
# Explicitly set it to "" if no prefix is desired.
prefix: ""
# Mutually exclusive with username.claim and username.prefix.
# expression is a CEL expression that evaluates to a string.
#
# 1. If username.expression uses 'claims.email', then 'claims.email_verified' must be used in
# username.expression or extra[*].valueExpression or claimValidationRules[*].expression.
# An example claim validation rule expression that matches the validation automatically
# applied when username.claim is set to 'email' is 'claims.?email_verified.orValue(true)'.
# 2. If the username asserted based on username.expression is the empty string, the authentication
# request will fail.
expression: 'claims.username + ":external-user"'
# groups represents an option for the groups attribute.
groups:
# Same as --oidc-groups-claim. Mutually exclusive with groups.expression.
claim: "sub"
# Same as --oidc-groups-prefix. Mutually exclusive with groups.expression.
# if groups.claim is set, groups.prefix is required.
# Explicitly set it to "" if no prefix is desired.
prefix: ""
# Mutually exclusive with groups.claim and groups.prefix.
# expression is a CEL expression that evaluates to a string or a list of strings.
expression: 'claims.roles.split(",")'
# uid represents an option for the uid attribute.
uid:
# Mutually exclusive with uid.expression.
claim: 'sub'
# Mutually exclusive with uid.claim
# expression is a CEL expression that evaluates to a string.
expression: 'claims.sub'
    # extra attributes to be added to the UserInfo object. Keys must be domain-prefixed paths and must be unique.
extra:
- key: 'example.com/tenant'
# valueExpression is a CEL expression that evaluates to a string or a list of strings.
valueExpression: 'claims.tenant'
# validation rules applied to the final user object.
userValidationRules:
# expression is a CEL expression that evaluates to a boolean.
# all the expressions must evaluate to true for the user to be valid.
- expression: "!user.username.startsWith('system:')"
# Message customizes the error message seen in the API server logs when the validation fails.
    message: 'username cannot use the reserved system: prefix'
  - expression: "user.groups.all(group, !group.startsWith('system:'))"
    message: 'groups cannot use the reserved system: prefix'
```
* Claim validation rule expression
  `jwt.claimValidationRules[i].expression` represents the expression which will be evaluated by CEL.
  CEL expressions have access to the contents of the token payload, organized into the `claims` CEL variable.
  `claims` is a map of claim names (as strings) to claim values (of any type).
* User validation rule expression
  `jwt.userValidationRules[i].expression` represents the expression which will be evaluated by CEL.
  CEL expressions have access to the contents of `userInfo`, organized into the `user` CEL variable.
  Refer to the [UserInfo](/docs/reference/generated/kubernetes-api/v/#userinfo-v1-authentication-k8s-io)
  API documentation for the schema of `user`.
* Claim mapping expression
  `jwt.claimMappings.username.expression`, `jwt.claimMappings.groups.expression`, `jwt.claimMappings.uid.expression`,
  and `jwt.claimMappings.extra[i].valueExpression` represent the expressions which will be evaluated by CEL.
  CEL expressions have access to the contents of the token payload, organized into the `claims` CEL variable.
  `claims` is a map of claim names (as strings) to claim values (of any type).
  To learn more, see the [documentation on CEL](/docs/reference/using-api/cel/).
Here are examples of the `AuthenticationConfiguration` with different token payloads.
```yaml
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
jwt:
- issuer:
url: https://example.com
audiences:
- my-app
claimMappings:
username:
expression: 'claims.username + ":external-user"'
groups:
expression: 'claims.roles.split(",")'
uid:
expression: 'claims.sub'
extra:
- key: 'example.com/tenant'
valueExpression: 'claims.tenant'
userValidationRules:
- expression: "!user.username.startsWith('system:')" # the expression will evaluate to true, so validation will succeed.
    message: 'username cannot use the reserved system: prefix'
```
```bash
TOKEN=eyJhbGciOiJSUzI1NiIsImtpZCI6ImY3dF9tOEROWmFTQk1oWGw5QXZTWGhBUC04Y0JmZ0JVbFVpTG5oQkgxdXMiLCJ0eXAiOiJKV1QifQ.eyJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNzAzMjMyOTQ5LCJpYXQiOjE3MDExMDcyMzMsImlzcyI6Imh0dHBzOi8vZXhhbXBsZS5jb20iLCJqdGkiOiI3YzMzNzk0MjgwN2U3M2NhYTJjMzBjODY4YWMwY2U5MTBiY2UwMmRkY2JmZWJlOGMyM2I4YjVmMjdhZDYyODczIiwibmJmIjoxNzAxMTA3MjMzLCJyb2xlcyI6InVzZXIsYWRtaW4iLCJzdWIiOiJhdXRoIiwidGVuYW50IjoiNzJmOTg4YmYtODZmMS00MWFmLTkxYWItMmQ3Y2QwMTFkYjRhIiwidXNlcm5hbWUiOiJmb28ifQ.TBWF2RkQHm4QQz85AYPcwLxSk-VLvQW-mNDHx7SEOSv9LVwcPYPuPajJpuQn9C_gKq1R94QKSQ5F6UgHMILz8OfmPKmX_00wpwwNVGeevJ79ieX2V-__W56iNR5gJ-i9nn6FYk5pwfVREB0l4HSlpTOmu80gbPWAXY5hLW0ZtcE1JTEEmefORHV2ge8e3jp1xGafNy6LdJWabYuKiw8d7Qga__HxtKB-t0kRMNzLRS7rka_SfQg0dSYektuxhLbiDkqhmRffGlQKXGVzUsuvFw7IGM5ZWnZgEMDzCI357obHeM3tRqpn5WRjtB8oM7JgnCymaJi-P3iCd88iu1xnzA
```
where the token payload is:
```json
{
"aud": "kubernetes",
"exp": 1703232949,
"iat": 1701107233,
"iss": "https://example.com",
"jti": "7c337942807e73caa2c30c868ac0ce910bce02ddcbfebe8c23b8b5f27ad62873",
"nbf": 1701107233,
"roles": "user,admin",
"sub": "auth",
"tenant": "72f988bf-86f1-41af-91ab-2d7cd011db4a",
"username": "foo"
}
```
The token with the above `AuthenticationConfiguration` will produce the following `UserInfo` object and successfully authenticate the user.
```json
{
"username": "foo:external-user",
"uid": "auth",
"groups": [
"user",
"admin"
],
"extra": {
"example.com/tenant": "72f988bf-86f1-41af-91ab-2d7cd011db4a"
}
}
```
```yaml
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
jwt:
- issuer:
url: https://example.com
audiences:
- my-app
claimValidationRules:
- expression: 'claims.hd == "example.com"' # the token below does not have this claim, so validation will fail.
message: the hd claim must be set to example.com
claimMappings:
username:
expression: 'claims.username + ":external-user"'
groups:
expression: 'claims.roles.split(",")'
uid:
expression: 'claims.sub'
extra:
- key: 'example.com/tenant'
valueExpression: 'claims.tenant'
userValidationRules:
- expression: "!user.username.startsWith('system:')" # the expression will evaluate to true, so validation will succeed.
    message: 'username cannot use the reserved system: prefix'
```
```bash
TOKEN=eyJhbGciOiJSUzI1NiIsImtpZCI6ImY3dF9tOEROWmFTQk1oWGw5QXZTWGhBUC04Y0JmZ0JVbFVpTG5oQkgxdXMiLCJ0eXAiOiJKV1QifQ.eyJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNzAzMjMyOTQ5LCJpYXQiOjE3MDExMDcyMzMsImlzcyI6Imh0dHBzOi8vZXhhbXBsZS5jb20iLCJqdGkiOiI3YzMzNzk0MjgwN2U3M2NhYTJjMzBjODY4YWMwY2U5MTBiY2UwMmRkY2JmZWJlOGMyM2I4YjVmMjdhZDYyODczIiwibmJmIjoxNzAxMTA3MjMzLCJyb2xlcyI6InVzZXIsYWRtaW4iLCJzdWIiOiJhdXRoIiwidGVuYW50IjoiNzJmOTg4YmYtODZmMS00MWFmLTkxYWItMmQ3Y2QwMTFkYjRhIiwidXNlcm5hbWUiOiJmb28ifQ.TBWF2RkQHm4QQz85AYPcwLxSk-VLvQW-mNDHx7SEOSv9LVwcPYPuPajJpuQn9C_gKq1R94QKSQ5F6UgHMILz8OfmPKmX_00wpwwNVGeevJ79ieX2V-__W56iNR5gJ-i9nn6FYk5pwfVREB0l4HSlpTOmu80gbPWAXY5hLW0ZtcE1JTEEmefORHV2ge8e3jp1xGafNy6LdJWabYuKiw8d7Qga__HxtKB-t0kRMNzLRS7rka_SfQg0dSYektuxhLbiDkqhmRffGlQKXGVzUsuvFw7IGM5ZWnZgEMDzCI357obHeM3tRqpn5WRjtB8oM7JgnCymaJi-P3iCd88iu1xnzA
```
where the token payload is:
```json
{
"aud": "kubernetes",
"exp": 1703232949,
"iat": 1701107233,
"iss": "https://example.com",
"jti": "7c337942807e73caa2c30c868ac0ce910bce02ddcbfebe8c23b8b5f27ad62873",
"nbf": 1701107233,
"roles": "user,admin",
"sub": "auth",
"tenant": "72f988bf-86f1-41af-91ab-2d7cd011db4a",
"username": "foo"
}
```
The token with the above `AuthenticationConfiguration` will fail to authenticate because the
`hd` claim is not set to `example.com`. The API server will return a `401 Unauthorized` error.
```yaml
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
jwt:
- issuer:
url: https://example.com
audiences:
- my-app
claimValidationRules:
- expression: 'claims.hd == "example.com"'
message: the hd claim must be set to example.com
claimMappings:
username:
expression: '"system:" + claims.username' # this will prefix the username with "system:" and will fail user validation.
groups:
expression: 'claims.roles.split(",")'
uid:
expression: 'claims.sub'
extra:
- key: 'example.com/tenant'
valueExpression: 'claims.tenant'
userValidationRules:
- expression: "!user.username.startsWith('system:')" # the username will be system:foo and expression will evaluate to false, so validation will fail.
    message: 'username cannot use the reserved system: prefix'
```
```bash
TOKEN=eyJhbGciOiJSUzI1NiIsImtpZCI6ImY3dF9tOEROWmFTQk1oWGw5QXZTWGhBUC04Y0JmZ0JVbFVpTG5oQkgxdXMiLCJ0eXAiOiJKV1QifQ.eyJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNzAzMjMyOTQ5LCJoZCI6ImV4YW1wbGUuY29tIiwiaWF0IjoxNzAxMTEzMTAxLCJpc3MiOiJodHRwczovL2V4YW1wbGUuY29tIiwianRpIjoiYjViMDY1MjM3MmNkMjBlMzQ1YjZmZGZmY2RjMjE4MWY0YWZkNmYyNTlhYWI0YjdlMzU4ODEyMzdkMjkyMjBiYyIsIm5iZiI6MTcwMTExMzEwMSwicm9sZXMiOiJ1c2VyLGFkbWluIiwic3ViIjoiYXV0aCIsInRlbmFudCI6IjcyZjk4OGJmLTg2ZjEtNDFhZi05MWFiLTJkN2NkMDExZGI0YSIsInVzZXJuYW1lIjoiZm9vIn0.FgPJBYLobo9jnbHreooBlvpgEcSPWnKfX6dc0IvdlRB-F0dCcgy91oCJeK_aBk-8zH5AKUXoFTlInfLCkPivMOJqMECA1YTrMUwt_IVqwb116AqihfByUYIIqzMjvUbthtbpIeHQm2fF0HbrUqa_Q0uaYwgy8mD807h7sBcUMjNd215ff_nFIHss-9zegH8GI1d9fiBf-g6zjkR1j987EP748khpQh9IxPjMJbSgG_uH5x80YFuqgEWwq-aYJPQxXX6FatP96a2EAn7wfPpGlPRt0HcBOvq5pCnudgCgfVgiOJiLr_7robQu4T1bis0W75VPEvwWtgFcLnvcQx0JWg
```
where the token payload is:
```json
{
"aud": "kubernetes",
"exp": 1703232949,
"hd": "example.com",
"iat": 1701113101,
"iss": "https://example.com",
"jti": "b5b0652372cd20e345b6fdffcdc2181f4afd6f259aab4b7e35881237d29220bc",
"nbf": 1701113101,
"roles": "user,admin",
"sub": "auth",
"tenant": "72f988bf-86f1-41af-91ab-2d7cd011db4a",
"username": "foo"
}
```
The token with the above `AuthenticationConfiguration` will produce the following `UserInfo` object:
```json
{
"username": "system:foo",
"uid": "auth",
"groups": [
"user",
"admin"
],
"extra": {
"example.com/tenant": "72f988bf-86f1-41af-91ab-2d7cd011db4a"
}
}
```
which will fail user validation because the username starts with `system:`.
The API server will return a `401 Unauthorized` error.
###### Limitations
1. Distributed claims do not work via [CEL](/docs/reference/using-api/cel/) expressions.
1. Egress selector configuration is not supported for calls to `issuer.url` and `issuer.discoveryURL`.
Kubernetes does not provide an OpenID Connect Identity Provider.
You can use an existing public OpenID Connect Identity Provider (such as Google, or
[others](https://connect2id.com/products/nimbus-oauth-openid-connect-sdk/openid-connect-providers)).
Or, you can run your own Identity Provider, such as [dex](https://dexidp.io/),
[Keycloak](https://github.com/keycloak/keycloak),
CloudFoundry [UAA](https://github.com/cloudfoundry/uaa), or
Tremolo Security's [OpenUnison](https://openunison.github.io/).
For an identity provider to work with Kubernetes it must:
1. Support [OpenID connect discovery](https://openid.net/specs/openid-connect-discovery-1_0.html)
The public key to verify the signature is discovered from the issuer's public endpoint using OIDC discovery.
If you're using the authentication configuration file, the identity provider doesn't need to publicly expose the discovery endpoint.
You can host the discovery endpoint at a different location than the issuer (such as locally in the cluster) and specify the
`issuer.discoveryURL` in the configuration file.
1. Run in TLS with non-obsolete ciphers
1. Have a CA signed certificate (even if the CA is not a commercial CA or is self signed)
A note about requirement #3 above, requiring a CA signed certificate: if you deploy your own
identity provider (as opposed to one of the cloud providers like Google or Microsoft), you MUST
have your identity provider's web server certificate signed by a certificate with the `CA` flag
set to `TRUE`, even if it is self signed. This is due to Go's TLS client implementation
being very strict about the standards around certificate validation. If you don't have a CA handy,
you can use the [gencert script](https://github.com/dexidp/dex/blob/master/examples/k8s/gencert.sh)
from the Dex team to create a simple CA and a signed certificate and key pair. Or you can use
[this similar script](https://raw.githubusercontent.com/TremoloSecurity/openunison-qs-kubernetes/master/src/main/bash/makessl.sh)
that generates SHA256 certs with a longer life and larger key size.
Refer to setup instructions for specific systems:
- [UAA](https://docs.cloudfoundry.org/concepts/architecture/uaa.html)
- [Dex](https://dexidp.io/docs/kubernetes/)
- [OpenUnison](https://www.tremolosecurity.com/orchestra-k8s/)
#### Using kubectl
##### Option 1 - OIDC Authenticator
The first option is to use the kubectl `oidc` authenticator, which sets the `id_token` as a bearer token
for all requests and refreshes the token once it expires. After you've logged into your provider, use
kubectl to add your `id_token`, `refresh_token`, `client_id`, and `client_secret` to configure the plugin.
Providers that don't return an `id_token` as part of their refresh token response aren't supported
by this plugin and should use "Option 2" below.
```bash
kubectl config set-credentials USER_NAME \
--auth-provider=oidc \
--auth-provider-arg=idp-issuer-url=( issuer url ) \
--auth-provider-arg=client-id=( your client id ) \
--auth-provider-arg=client-secret=( your client secret ) \
--auth-provider-arg=refresh-token=( your refresh token ) \
--auth-provider-arg=idp-certificate-authority=( path to your ca certificate ) \
--auth-provider-arg=id-token=( your id_token )
```
As an example, running the below command after authenticating to your identity provider:
```bash
kubectl config set-credentials mmosley \
--auth-provider=oidc \
--auth-provider-arg=idp-issuer-url=https://oidcidp.tremolo.lan:8443/auth/idp/OidcIdP \
--auth-provider-arg=client-id=kubernetes \
--auth-provider-arg=client-secret=1db158f6-177d-4d9c-8a8b-d36869918ec5 \
--auth-provider-arg=refresh-token=q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW+VlFUeVRGcluyVF5JT4+haZmPsluFoFu5XkpXk5BXqHega4GAXlF+ma+vmYpFcHe5eZR+slBFpZKtQA= \
--auth-provider-arg=idp-certificate-authority=/root/ca.pem \
--auth-provider-arg=id-token=eyJraWQiOiJDTj1vaWRjaWRwLnRyZW1vbG8ubGFuLCBPVT1EZW1vLCBPPVRybWVvbG8gU2VjdXJpdHksIEw9QXJsaW5ndG9uLCBTVD1WaXJnaW5pYSwgQz1VUy1DTj1rdWJlLWNhLTEyMDIxNDc5MjEwMzYwNzMyMTUyIiwiYWxnIjoiUlMyNTYifQ.eyJpc3MiOiJodHRwczovL29pZGNpZHAudHJlbW9sby5sYW46ODQ0My9hdXRoL2lkcC9PaWRjSWRQIiwiYXVkIjoia3ViZXJuZXRlcyIsImV4cCI6MTQ4MzU0OTUxMSwianRpIjoiMm96US15TXdFcHV4WDlHZUhQdy1hZyIsImlhdCI6MTQ4MzU0OTQ1MSwibmJmIjoxNDgzNTQ5MzMxLCJzdWIiOiI0YWViMzdiYS1iNjQ1LTQ4ZmQtYWIzMC0xYTAxZWU0MWUyMTgifQ.w6p4J_6qQ1HzTG9nrEOrubxIMb9K5hzcMPxc9IxPx2K4xO9l-oFiUw93daH3m5pluP6K7eOE6txBuRVfEcpJSwlelsOsW8gb8VJcnzMS9EnZpeA0tW_p-mnkFc3VcfyXuhe5R3G7aa5d8uHv70yJ9Y3-UhjiN9EhpMdfPAoEB9fYKKkJRzF7utTTIPGrSaSU6d2pcpfYKaxIwePzEkT4DfcQthoZdy9ucNvvLoi1DIC-UocFD8HLs8LYKEqSxQvOcvnThbObJ9af71EwmuE21fO5KzMW20KtAeget1gnldOosPtz1G5EwvaQ401-RPQzPGMVBld0_zMCAwZttJ4knw
```
Which would produce the below configuration:
```yaml
users:
- name: mmosley
user:
auth-provider:
config:
client-id: kubernetes
client-secret: 1db158f6-177d-4d9c-8a8b-d36869918ec5
id-token: eyJraWQiOiJDTj1vaWRjaWRwLnRyZW1vbG8ubGFuLCBPVT1EZW1vLCBPPVRybWVvbG8gU2VjdXJpdHksIEw9QXJsaW5ndG9uLCBTVD1WaXJnaW5pYSwgQz1VUy1DTj1rdWJlLWNhLTEyMDIxNDc5MjEwMzYwNzMyMTUyIiwiYWxnIjoiUlMyNTYifQ.eyJpc3MiOiJodHRwczovL29pZGNpZHAudHJlbW9sby5sYW46ODQ0My9hdXRoL2lkcC9PaWRjSWRQIiwiYXVkIjoia3ViZXJuZXRlcyIsImV4cCI6MTQ4MzU0OTUxMSwianRpIjoiMm96US15TXdFcHV4WDlHZUhQdy1hZyIsImlhdCI6MTQ4MzU0OTQ1MSwibmJmIjoxNDgzNTQ5MzMxLCJzdWIiOiI0YWViMzdiYS1iNjQ1LTQ4ZmQtYWIzMC0xYTAxZWU0MWUyMTgifQ.w6p4J_6qQ1HzTG9nrEOrubxIMb9K5hzcMPxc9IxPx2K4xO9l-oFiUw93daH3m5pluP6K7eOE6txBuRVfEcpJSwlelsOsW8gb8VJcnzMS9EnZpeA0tW_p-mnkFc3VcfyXuhe5R3G7aa5d8uHv70yJ9Y3-UhjiN9EhpMdfPAoEB9fYKKkJRzF7utTTIPGrSaSU6d2pcpfYKaxIwePzEkT4DfcQthoZdy9ucNvvLoi1DIC-UocFD8HLs8LYKEqSxQvOcvnThbObJ9af71EwmuE21fO5KzMW20KtAeget1gnldOosPtz1G5EwvaQ401-RPQzPGMVBld0_zMCAwZttJ4knw
idp-certificate-authority: /root/ca.pem
idp-issuer-url: https://oidcidp.tremolo.lan:8443/auth/idp/OidcIdP
refresh-token: q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW+VlFUeVRGcluyVF5JT4+haZmPsluFoFu5XkpXk5BXq
name: oidc
```
Once your `id_token` expires, `kubectl` will attempt to refresh your `id_token` using your `refresh_token`
and `client_secret`, storing the new values for the `refresh_token` and `id_token` in your `.kube/config`.
##### Option 2 - Use the `--token` Option
The `kubectl` command lets you pass in a token using the `--token` option.
Copy and paste the `id_token` into this option:
```bash
kubectl --token=eyJhbGciOiJSUzI1NiJ9.eyJpc3MiOiJodHRwczovL21sYi50cmVtb2xvLmxhbjo4MDQzL2F1dGgvaWRwL29pZGMiLCJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNDc0NTk2NjY5LCJqdGkiOiI2RDUzNXoxUEpFNjJOR3QxaWVyYm9RIiwiaWF0IjoxNDc0NTk2MzY5LCJuYmYiOjE0NzQ1OTYyNDksInN1YiI6Im13aW5kdSIsInVzZXJfcm9sZSI6WyJ1c2VycyIsIm5ldy1uYW1lc3BhY2Utdmlld2VyIl0sImVtYWlsIjoibXdpbmR1QG5vbW9yZWplZGkuY29tIn0.f2As579n9VNoaKzoF-dOQGmXkFKf1FMyNV0-va_B63jn-_n9LGSCca_6IVMP8pO-Zb4KvRqGyTP0r3HkHxYy5c81AnIh8ijarruczl-TK_yF5akjSTHFZD-0gRzlevBDiH8Q79NAr-ky0P4iIXS8lY9Vnjch5MF74Zx0c3alKJHJUnnpjIACByfF2SCaYzbWFMUNat-K1PaUk5-ujMBG7yYnr95xD-63n8CO8teGUAAEMx6zRjzfhnhbzX-ajwZLGwGUBT4WqjMs70-6a7_8gZmLZb2az1cZynkFRj2BaCkVT3A2RrjeEwZEtGXlMqKJ1_I2ulrOVsYx01_yD35-rw get nodes
```
### Webhook Token Authentication
Webhook authentication is a hook for verifying bearer tokens.
* `--authentication-token-webhook-config-file` a configuration file describing how to access the remote webhook service.
* `--authentication-token-webhook-cache-ttl` how long to cache authentication decisions. Defaults to two minutes.
* `--authentication-token-webhook-version` determines whether to use `authentication.k8s.io/v1beta1` or `authentication.k8s.io/v1`
`TokenReview` objects to send/receive information from the webhook. Defaults to `v1beta1`.
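As a sketch, the webhook authenticator might be enabled on the API server like this; the file path
and cache TTL are example values:

```bash
kube-apiserver \
  --authentication-token-webhook-config-file=/etc/kubernetes/webhook-authn-config.yaml \
  --authentication-token-webhook-cache-ttl=2m \
  --authentication-token-webhook-version=v1
# (combine with the rest of your usual kube-apiserver flags)
```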
The configuration file uses the [kubeconfig](/docs/concepts/configuration/organize-cluster-access-kubeconfig/)
file format. Within the file, `clusters` refers to the remote service and
`users` refers to the API server webhook. An example would be:
```yaml
# Kubernetes API version
apiVersion: v1
# kind of the API object
kind: Config
# clusters refers to the remote service.
clusters:
- name: name-of-remote-authn-service
cluster:
certificate-authority: /path/to/ca.pem # CA for verifying the remote service.
server: https://authn.example.com/authenticate # URL of remote service to query. 'https' recommended for production.
# users refers to the API server's webhook configuration.
users:
- name: name-of-api-server
user:
client-certificate: /path/to/cert.pem # cert for the webhook plugin to use
client-key: /path/to/key.pem # key matching the cert
# kubeconfig files require a context. Provide one for the API server.
current-context: webhook
contexts:
- context:
cluster: name-of-remote-authn-service
user: name-of-api-server
name: webhook
```
When a client attempts to authenticate with the API server using a bearer token as discussed
[above](#putting-a-bearer-token-in-a-request), the authentication webhook POSTs a JSON-serialized
`TokenReview` object containing the token to the remote service.
Note that webhook API objects are subject to the same [versioning compatibility rules](/docs/concepts/overview/kubernetes-api/)
as other Kubernetes API objects. Implementers should check the `apiVersion` field of the request to ensure correct deserialization,
and **must** respond with a `TokenReview` object of the same version as the request.
The Kubernetes API server defaults to sending `authentication.k8s.io/v1beta1` token reviews for backwards compatibility.
To opt into receiving `authentication.k8s.io/v1` token reviews, the API server must be started with `--authentication-token-webhook-version=v1`.
```yaml
{
"apiVersion": "authentication.k8s.io/v1",
"kind": "TokenReview",
"spec": {
# Opaque bearer token sent to the API server
"token": "014fbff9a07c...",
# Optional list of the audience identifiers for the server the token was presented to.
# Audience-aware token authenticators (for example, OIDC token authenticators)
# should verify the token was intended for at least one of the audiences in this list,
# and return the intersection of this list and the valid audiences for the token in the response status.
# This ensures the token is valid to authenticate to the server it was presented to.
# If no audiences are provided, the token should be validated to authenticate to the Kubernetes API server.
"audiences": ["https://myserver.example.com", "https://myserver.internal.example.com"]
}
}
```
```yaml
{
"apiVersion": "authentication.k8s.io/v1beta1",
"kind": "TokenReview",
"spec": {
# Opaque bearer token sent to the API server
"token": "014fbff9a07c...",
# Optional list of the audience identifiers for the server the token was presented to.
# Audience-aware token authenticators (for example, OIDC token authenticators)
# should verify the token was intended for at least one of the audiences in this list,
# and return the intersection of this list and the valid audiences for the token in the response status.
# This ensures the token is valid to authenticate to the server it was presented to.
# If no audiences are provided, the token should be validated to authenticate to the Kubernetes API server.
"audiences": ["https://myserver.example.com", "https://myserver.internal.example.com"]
}
}
```
The remote service is expected to fill the `status` field of the request to indicate the success of the login.
The response body's `spec` field is ignored and may be omitted.
The remote service must return a response using the same `TokenReview` API version that it received.
A successful validation of the bearer token would return:
```yaml
{
"apiVersion": "authentication.k8s.io/v1",
"kind": "TokenReview",
"status": {
"authenticated": true,
"user": {
# Required
"username": "[email protected]",
# Optional
"uid": "42",
# Optional group memberships
"groups": ["developers", "qa"],
# Optional additional information provided by the authenticator.
# This should not contain confidential data, as it can be recorded in logs
# or API objects, and is made available to admission webhooks.
"extra": {
"extrafield1": [
"extravalue1",
"extravalue2"
]
}
},
# Optional list audience-aware token authenticators can return,
# containing the audiences from the `spec.audiences` list for which the provided token was valid.
# If this is omitted, the token is considered to be valid to authenticate to the Kubernetes API server.
"audiences": ["https://myserver.example.com"]
}
}
```
```yaml
{
"apiVersion": "authentication.k8s.io/v1beta1",
"kind": "TokenReview",
"status": {
"authenticated": true,
"user": {
# Required
"username": "[email protected]",
# Optional
"uid": "42",
# Optional group memberships
"groups": ["developers", "qa"],
# Optional additional information provided by the authenticator.
# This should not contain confidential data, as it can be recorded in logs
# or API objects, and is made available to admission webhooks.
"extra": {
"extrafield1": [
"extravalue1",
"extravalue2"
]
}
},
# Optional list audience-aware token authenticators can return,
# containing the audiences from the `spec.audiences` list for which the provided token was valid.
# If this is omitted, the token is considered to be valid to authenticate to the Kubernetes API server.
"audiences": ["https://myserver.example.com"]
}
}
```
An unsuccessful request would return:
```yaml
{
"apiVersion": "authentication.k8s.io/v1",
"kind": "TokenReview",
"status": {
"authenticated": false,
# Optionally include details about why authentication failed.
# If no error is provided, the API will return a generic Unauthorized message.
# The error field is ignored when authenticated=true.
"error": "Credentials are expired"
}
}
```
```yaml
{
"apiVersion": "authentication.k8s.io/v1beta1",
"kind": "TokenReview",
"status": {
"authenticated": false,
# Optionally include details about why authentication failed.
# If no error is provided, the API will return a generic Unauthorized message.
# The error field is ignored when authenticated=true.
"error": "Credentials are expired"
}
}
```
### Authenticating Proxy
The API server can be configured to identify users from request header values, such as `X-Remote-User`.
It is designed for use in combination with an authenticating proxy, which sets the request header value.
* `--requestheader-username-headers` Required, case-insensitive. Header names to check, in order,
for the user identity. The first header containing a value is used as the username.
* `--requestheader-group-headers` 1.6+. Optional, case-insensitive. "X-Remote-Group" is suggested.
Header names to check, in order, for the user's groups. All values in all specified headers are used as group names.
* `--requestheader-extra-headers-prefix` 1.6+. Optional, case-insensitive. "X-Remote-Extra-" is suggested.
Header prefixes to look for to determine extra information about the user (typically used by the configured authorization plugin).
Any headers beginning with any of the specified prefixes have the prefix removed.
The remainder of the header name is lowercased and [percent-decoded](https://tools.ietf.org/html/rfc3986#section-2.1)
and becomes the extra key, and the header value is the extra value.
Prior to 1.11.3 (and 1.10.7, 1.9.11), the extra key could only contain characters which
were [legal in HTTP header labels](https://tools.ietf.org/html/rfc7230#section-3.2.6).
For example, with this configuration:
```
--requestheader-username-headers=X-Remote-User
--requestheader-group-headers=X-Remote-Group
--requestheader-extra-headers-prefix=X-Remote-Extra-
```
this request:
```http
GET / HTTP/1.1
X-Remote-User: fido
X-Remote-Group: dogs
X-Remote-Group: dachshunds
X-Remote-Extra-Acme.com%2Fproject: some-project
X-Remote-Extra-Scopes: openid
X-Remote-Extra-Scopes: profile
```
would result in this user info:
```yaml
name: fido
groups:
- dogs
- dachshunds
extra:
acme.com/project:
- some-project
scopes:
- openid
- profile
```
In order to prevent header spoofing, the authenticating proxy is required to present a valid client
certificate to the API server for validation against the specified CA before the request headers are
checked. WARNING: do **not** reuse a CA that is used in a different context unless you understand
the risks and the mechanisms to protect the CA's usage.
* `--requestheader-client-ca-file` Required. PEM-encoded certificate bundle. A valid client certificate
must be presented and validated against the certificate authorities in the specified file before the
request headers are checked for user names.
* `--requestheader-allowed-names` Optional. List of Common Name values (CNs). If set, a valid client
certificate with a CN in the specified list must be presented before the request headers are checked
for user names. If empty, any CN is allowed.
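To illustrate the contract, the request below is a sketch of what an authenticating proxy might send:
it presents a client certificate that chains to the CA given in `--requestheader-client-ca-file` and
sets the identity headers. The host name and file paths are placeholders.

```bash
# Hypothetical request from an authenticating proxy to the API server.
# proxy.crt/proxy.key must chain to the CA given in --requestheader-client-ca-file.
curl https://my-apiserver.example:6443/api/v1/namespaces \
  --cacert /etc/kubernetes/pki/ca.crt \
  --cert proxy.crt \
  --key proxy.key \
  -H "X-Remote-User: fido" \
  -H "X-Remote-Group: dogs" \
  -H "X-Remote-Extra-Scopes: openid"
```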
## Anonymous requests
When enabled, requests that are not rejected by other configured authentication methods are
treated as anonymous requests, and given a username of `system:anonymous` and a group of
`system:unauthenticated`.
For example, on a server with token authentication configured, and anonymous access enabled,
a request providing an invalid bearer token would receive a `401 Unauthorized` error.
A request providing no bearer token would be treated as an anonymous request.
In 1.5.1-1.5.x, anonymous access is disabled by default, and can be enabled by
passing the `--anonymous-auth=true` option to the API server.
In 1.6+, anonymous access is enabled by default if an authorization mode other than `AlwaysAllow`
is used, and can be disabled by passing the `--anonymous-auth=false` option to the API server.
Starting in 1.6, the ABAC and RBAC authorizers require explicit authorization of the
`system:anonymous` user or the `system:unauthenticated` group, so legacy policy rules
that grant access to the `*` user or `*` group do not include anonymous users.
### Anonymous Authenticator Configuration
The `AuthenticationConfiguration` can be used to configure the anonymous
authenticator. To enable configuring anonymous auth via the config file, you need
to enable the `AnonymousAuthConfigurableEndpoints` feature gate. When this feature
gate is enabled, you cannot set the `--anonymous-auth` flag.
The main advantage of configuring the anonymous authenticator using the authentication
configuration file is that, in addition to enabling and disabling anonymous authentication,
you can also configure which endpoints support anonymous authentication.
A sample authentication configuration file is below:
```yaml
---
#
# CAUTION: this is an example configuration.
# Do not use this for your own cluster!
#
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
anonymous:
enabled: true
conditions:
- path: /livez
- path: /readyz
- path: /healthz
```
In the configuration above, only the `/livez`, `/readyz` and `/healthz` endpoints
are reachable by anonymous requests. Any other endpoint is not reachable anonymously,
even if RBAC configuration would allow it.
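As a quick sketch (the server address is a placeholder), you could probe this behaviour with
unauthenticated requests:

```bash
# Allowed by the example configuration: anonymous requests to one of the listed paths
curl -k https://my-apiserver.example:6443/livez

# Not treated as anonymous, so this request is rejected even if RBAC would allow it
curl -k https://my-apiserver.example:6443/api/v1/namespaces
```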
## User impersonation
A user can act as another user through impersonation headers. These let requests
manually override the user info a request authenticates as. For example, an admin
could use this feature to debug an authorization policy by temporarily
impersonating another user and seeing if a request was denied.
Impersonation requests first authenticate as the requesting user, then switch
to the impersonated user info.
* A user makes an API call with their credentials _and_ impersonation headers.
* API server authenticates the user.
* API server ensures the authenticated user has impersonation privileges.
* Request user info is replaced with impersonation values.
* Request is evaluated, authorization acts on impersonated user info.
The following HTTP headers can be used to perform an impersonation request:
* `Impersonate-User`: The username to act as.
* `Impersonate-Group`: A group name to act as. Can be provided multiple times to set multiple groups.
Optional. Requires "Impersonate-User".
* `Impersonate-Extra-( extra name )`: A dynamic header used to associate extra fields with the user.
Optional. Requires "Impersonate-User". In order to be preserved consistently, `( extra name )`
must be lower-case, and any characters which aren't [legal in HTTP header labels](https://tools.ietf.org/html/rfc7230#section-3.2.6)
MUST be utf8 and [percent-encoded](https://tools.ietf.org/html/rfc3986#section-2.1).
* `Impersonate-Uid`: A unique identifier that represents the user being impersonated. Optional.
Requires "Impersonate-User". Kubernetes does not impose any format requirements on this string.
Prior to 1.11.3 (and 1.10.7, 1.9.11), `( extra name )` could only contain characters which
were [legal in HTTP header labels](https://tools.ietf.org/html/rfc7230#section-3.2.6).
`Impersonate-Uid` is only available in versions 1.22.0 and higher.
An example of the impersonation headers used when impersonating a user with groups:
```http
Impersonate-User: [email protected]
Impersonate-Group: developers
Impersonate-Group: admins
```
An example of the impersonation headers used when impersonating a user with a UID and
extra fields:
```http
Impersonate-User: [email protected]
Impersonate-Extra-dn: cn=jane,ou=engineers,dc=example,dc=com
Impersonate-Extra-acme.com%2Fproject: some-project
Impersonate-Extra-scopes: view
Impersonate-Extra-scopes: development
Impersonate-Uid: 06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b
```
When using `kubectl`, set the `--as` flag to configure the `Impersonate-User`
header and the `--as-group` flag to configure the `Impersonate-Group` header.
```bash
kubectl drain mynode
```
```none
Error from server (Forbidden): User "clark" cannot get nodes at the cluster scope. (get nodes mynode)
```
Set the `--as` and `--as-group` flag:
```bash
kubectl drain mynode --as=superman --as-group=system:masters
```
```none
node/mynode cordoned
node/mynode drained
```
`kubectl` cannot impersonate extra fields or UIDs.
To impersonate a user, group, user identifier (UID) or extra fields, the impersonating user must
have the ability to perform the "impersonate" verb on the kind of attribute
being impersonated ("user", "group", "uid", etc.). For clusters that enable the RBAC
authorization plugin, the following ClusterRole encompasses the rules needed to
set user and group impersonation headers:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: impersonator
rules:
- apiGroups: [""]
resources: ["users", "groups", "serviceaccounts"]
verbs: ["impersonate"]
```
For impersonation, extra fields and impersonated UIDs are both under the "authentication.k8s.io" `apiGroup`.
Extra fields are evaluated as sub-resources of the resource "userextras". To
allow a user to use impersonation headers for the extra field "scopes" and
for UIDs, a user should be granted the following role:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: scopes-and-uid-impersonator
rules:
# Can set "Impersonate-Extra-scopes" header and the "Impersonate-Uid" header.
- apiGroups: ["authentication.k8s.io"]
resources: ["userextras/scopes", "uids"]
verbs: ["impersonate"]
```
The values of impersonation headers can also be restricted by limiting the set
of `resourceNames` a resource can take.
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: limited-impersonator
rules:
# Can impersonate the user "[email protected]"
- apiGroups: [""]
resources: ["users"]
verbs: ["impersonate"]
resourceNames: ["[email protected]"]
# Can impersonate the groups "developers" and "admins"
- apiGroups: [""]
resources: ["groups"]
verbs: ["impersonate"]
resourceNames: ["developers","admins"]
# Can impersonate the extras field "scopes" with the values "view" and "development"
- apiGroups: ["authentication.k8s.io"]
resources: ["userextras/scopes"]
verbs: ["impersonate"]
resourceNames: ["view", "development"]
# Can impersonate the uid "06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b"
- apiGroups: ["authentication.k8s.io"]
resources: ["uids"]
verbs: ["impersonate"]
resourceNames: ["06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b"]
```
Impersonating a user or group allows you to perform any action as if you were that user or group;
for that reason, impersonation is not namespace scoped.
If you want to allow impersonation using Kubernetes RBAC,
this requires using a `ClusterRole` and a `ClusterRoleBinding`,
not a `Role` and `RoleBinding`.
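For example, a sketch of granting the `impersonator` ClusterRole shown above to a hypothetical
user `jane.doe` (the binding name and user are placeholders):

```bash
# Bind the cluster-wide "impersonator" role to the user jane.doe
kubectl create clusterrolebinding jane-impersonator \
  --clusterrole=impersonator \
  --user=jane.doe
```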
## client-go credential plugins
`k8s.io/client-go` and tools using it such as `kubectl` and `kubelet` are able to execute an
external command to receive user credentials.
This feature is intended for client side integrations with authentication protocols not natively
supported by `k8s.io/client-go` (LDAP, Kerberos, OAuth2, SAML, etc.). The plugin implements the
protocol specific logic, then returns opaque credentials to use. Almost all credential plugin
use cases require a server side component with support for the [webhook token authenticator](#webhook-token-authentication)
to interpret the credential format produced by the client plugin.
Earlier versions of `kubectl` included built-in support for authenticating to AKS and GKE, but this is no longer present.
### Example use case
In a hypothetical use case, an organization would run an external service that exchanges LDAP credentials
for user specific, signed tokens. The service would also be capable of responding to [webhook token
authenticator](#webhook-token-authentication) requests to validate the tokens. Users would be required
to install a credential plugin on their workstation.
To authenticate against the API:
* The user issues a `kubectl` command.
* Credential plugin prompts the user for LDAP credentials, exchanges credentials with external service for a token.
* Credential plugin returns token to client-go, which uses it as a bearer token against the API server.
* API server uses the [webhook token authenticator](#webhook-token-authentication) to submit a `TokenReview` to the external service.
* External service verifies the signature on the token and returns the user's username and groups.
### Configuration
Credential plugins are configured through [kubectl config files](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)
as part of the user fields.
```yaml
apiVersion: v1
kind: Config
users:
- name: my-user
user:
exec:
# Command to execute. Required.
command: "example-client-go-exec-plugin"
# API version to use when decoding the ExecCredentials resource. Required.
#
# The API version returned by the plugin MUST match the version listed here.
#
# To integrate with tools that support multiple versions (such as client.authentication.k8s.io/v1beta1),
# set an environment variable, pass an argument to the tool that indicates which version the exec plugin expects,
# or read the version from the ExecCredential object in the KUBERNETES_EXEC_INFO environment variable.
apiVersion: "client.authentication.k8s.io/v1"
# Environment variables to set when executing the plugin. Optional.
env:
- name: "FOO"
value: "bar"
# Arguments to pass when executing the plugin. Optional.
args:
- "arg1"
- "arg2"
# Text shown to the user when the executable doesn't seem to be present. Optional.
installHint: |
example-client-go-exec-plugin is required to authenticate
to the current cluster. It can be installed:
On macOS: brew install example-client-go-exec-plugin
On Ubuntu: apt-get install example-client-go-exec-plugin
On Fedora: dnf install example-client-go-exec-plugin
...
# Whether or not to provide cluster information, which could potentially contain
# very large CA data, to this exec plugin as a part of the KUBERNETES_EXEC_INFO
# environment variable.
provideClusterInfo: true
# The contract between the exec plugin and the standard input I/O stream. If the
# contract cannot be satisfied, this plugin will not be run and an error will be
# returned. Valid values are "Never" (this exec plugin never uses standard input),
# "IfAvailable" (this exec plugin wants to use standard input if it is available),
# or "Always" (this exec plugin requires standard input to function). Required.
interactiveMode: Never
clusters:
- name: my-cluster
cluster:
server: "https://172.17.4.100:6443"
certificate-authority: "/etc/kubernetes/ca.pem"
extensions:
- name: client.authentication.k8s.io/exec # reserved extension name for per cluster exec config
extension:
arbitrary: config
this: can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo
you: ["can", "put", "anything", "here"]
contexts:
- name: my-cluster
context:
cluster: my-cluster
user: my-user
current-context: my-cluster
```
```yaml
apiVersion: v1
kind: Config
users:
- name: my-user
user:
exec:
# Command to execute. Required.
command: "example-client-go-exec-plugin"
# API version to use when decoding the ExecCredentials resource. Required.
#
# The API version returned by the plugin MUST match the version listed here.
#
# To integrate with tools that support multiple versions (such as client.authentication.k8s.io/v1),
# set an environment variable, pass an argument to the tool that indicates which version the exec plugin expects,
# or read the version from the ExecCredential object in the KUBERNETES_EXEC_INFO environment variable.
apiVersion: "client.authentication.k8s.io/v1beta1"
# Environment variables to set when executing the plugin. Optional.
env:
- name: "FOO"
value: "bar"
# Arguments to pass when executing the plugin. Optional.
args:
- "arg1"
- "arg2"
# Text shown to the user when the executable doesn't seem to be present. Optional.
installHint: |
example-client-go-exec-plugin is required to authenticate
to the current cluster. It can be installed:
On macOS: brew install example-client-go-exec-plugin
On Ubuntu: apt-get install example-client-go-exec-plugin
On Fedora: dnf install example-client-go-exec-plugin
...
# Whether or not to provide cluster information, which could potentially contain
# very large CA data, to this exec plugin as a part of the KUBERNETES_EXEC_INFO
# environment variable.
provideClusterInfo: true
# The contract between the exec plugin and the standard input I/O stream. If the
# contract cannot be satisfied, this plugin will not be run and an error will be
# returned. Valid values are "Never" (this exec plugin never uses standard input),
# "IfAvailable" (this exec plugin wants to use standard input if it is available),
# or "Always" (this exec plugin requires standard input to function). Optional.
# Defaults to "IfAvailable".
interactiveMode: Never
clusters:
- name: my-cluster
cluster:
server: "https://172.17.4.100:6443"
certificate-authority: "/etc/kubernetes/ca.pem"
extensions:
- name: client.authentication.k8s.io/exec # reserved extension name for per cluster exec config
extension:
arbitrary: config
this: can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo
you: ["can", "put", "anything", "here"]
contexts:
- name: my-cluster
context:
cluster: my-cluster
user: my-user
current-context: my-cluster
```
Relative command paths are interpreted as relative to the directory of the config file. If
KUBECONFIG is set to `/home/jane/kubeconfig` and the exec command is `./bin/example-client-go-exec-plugin`,
the binary `/home/jane/bin/example-client-go-exec-plugin` is executed.
```yaml
- name: my-user
user:
exec:
# Path relative to the directory of the kubeconfig
command: "./bin/example-client-go-exec-plugin"
apiVersion: "client.authentication.k8s.io/v1"
interactiveMode: Never
```
### Input and output formats
The executed command prints an `ExecCredential` object to `stdout`. `k8s.io/client-go`
authenticates against the Kubernetes API using the returned credentials in the `status`.
The executed command is passed an `ExecCredential` object as input via the `KUBERNETES_EXEC_INFO`
environment variable. This input contains helpful information like the expected API version
of the returned `ExecCredential` object and whether or not the plugin can use `stdin` to interact
with the user.
When run from an interactive session (i.e., a terminal), `stdin` can be exposed directly
to the plugin. Plugins should use the `spec.interactive` field of the input
`ExecCredential` object from the `KUBERNETES_EXEC_INFO` environment variable in order to
determine if `stdin` has been provided. A plugin's `stdin` requirements (i.e., whether
`stdin` is optional, strictly required, or never used in order for the plugin
to run successfully) is declared via the `user.exec.interactiveMode` field in the
[kubeconfig](/docs/concepts/configuration/organize-cluster-access-kubeconfig/)
(see table below for valid values). The `user.exec.interactiveMode` field is optional
in `client.authentication.k8s.io/v1beta1` and required in `client.authentication.k8s.io/v1`.
| `interactiveMode` Value | Meaning |
| ----------------------- | ------- |
| `Never` | This exec plugin never needs to use standard input, and therefore the exec plugin will be run regardless of whether standard input is available for user input. |
| `IfAvailable` | This exec plugin would like to use standard input if it is available, but can still operate if standard input is not available. Therefore, the exec plugin will be run regardless of whether stdin is available for user input. If standard input is available for user input, then it will be provided to this exec plugin. |
| `Always` | This exec plugin requires standard input in order to run, and therefore the exec plugin will only be run if standard input is available for user input. If standard input is not available for user input, then the exec plugin will not be run and an error will be returned by the exec plugin runner. |
To use bearer token credentials, the plugin returns a token in the status of the
[`ExecCredential`](/docs/reference/config-api/client-authentication.v1beta1/#client-authentication-k8s-io-v1beta1-ExecCredential).
```json
{
"apiVersion": "client.authentication.k8s.io/v1",
"kind": "ExecCredential",
"status": {
"token": "my-bearer-token"
}
}
```
```json
{
"apiVersion": "client.authentication.k8s.io/v1beta1",
"kind": "ExecCredential",
"status": {
"token": "my-bearer-token"
}
}
```
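As an illustration, a credential plugin can be as simple as an executable that writes such an
`ExecCredential` to stdout. The sketch below is hypothetical: a real plugin would obtain the token
from an identity provider rather than hard-coding it, and would honor the API version requested via
the `KUBERNETES_EXEC_INFO` input.

```bash
#!/usr/bin/env bash
# example-client-go-exec-plugin: minimal sketch of an exec credential plugin.
set -euo pipefail

# A real plugin would fetch or refresh a token here; this value is a placeholder.
cat <<'EOF'
{
  "apiVersion": "client.authentication.k8s.io/v1beta1",
  "kind": "ExecCredential",
  "status": {
    "token": "my-bearer-token"
  }
}
EOF
```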
Alternatively, a PEM-encoded client certificate and key can be returned to use TLS client auth.
If the plugin returns a different certificate and key on a subsequent call, `k8s.io/client-go`
will close existing connections with the server to force a new TLS handshake.
If specified, `clientKeyData` and `clientCertificateData` must both be present.
`clientCertificateData` may contain additional intermediate certificates to send to the server.
```json
{
"apiVersion": "client.authentication.k8s.io/v1",
"kind": "ExecCredential",
"status": {
"clientCertificateData": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----",
"clientKeyData": "-----BEGIN RSA PRIVATE KEY-----\n...\n-----END RSA PRIVATE KEY-----"
}
}
```
```json
{
"apiVersion": "client.authentication.k8s.io/v1beta1",
"kind": "ExecCredential",
"status": {
"clientCertificateData": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----",
"clientKeyData": "-----BEGIN RSA PRIVATE KEY-----\n...\n-----END RSA PRIVATE KEY-----"
}
}
```
Optionally, the response can include the expiry of the credential formatted as a
[RFC 3339](https://datatracker.ietf.org/doc/html/rfc3339) timestamp.
Presence or absence of an expiry has the following impact:
- If an expiry is included, the bearer token and TLS credentials are cached until
the expiry time is reached, or if the server responds with a 401 HTTP status code,
or when the process exits.
- If an expiry is omitted, the bearer token and TLS credentials are cached until
the server responds with a 401 HTTP status code or until the process exits.
```json
{
"apiVersion": "client.authentication.k8s.io/v1",
"kind": "ExecCredential",
"status": {
"token": "my-bearer-token",
"expirationTimestamp": "2018-03-05T17:30:20-08:00"
}
}
```
```json
{
"apiVersion": "client.authentication.k8s.io/v1beta1",
"kind": "ExecCredential",
"status": {
"token": "my-bearer-token",
"expirationTimestamp": "2018-03-05T17:30:20-08:00"
}
}
```
To enable the exec plugin to obtain cluster-specific information, set `provideClusterInfo` on the `user.exec`
field in the [kubeconfig](/docs/concepts/configuration/organize-cluster-access-kubeconfig/).
The plugin will then be supplied this cluster-specific information in the `KUBERNETES_EXEC_INFO` environment variable.
Information from this environment variable can be used to perform cluster-specific
credential acquisition logic.
The following `ExecCredential` manifest describes a cluster information sample.
```json
{
"apiVersion": "client.authentication.k8s.io/v1",
"kind": "ExecCredential",
"spec": {
"cluster": {
"server": "https://172.17.4.100:6443",
"certificate-authority-data": "LS0t...",
"config": {
"arbitrary": "config",
"this": "can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo",
"you": ["can", "put", "anything", "here"]
}
},
"interactive": true
}
}
```
```json
{
"apiVersion": "client.authentication.k8s.io/v1beta1",
"kind": "ExecCredential",
"spec": {
"cluster": {
"server": "https://172.17.4.100:6443",
"certificate-authority-data": "LS0t...",
"config": {
"arbitrary": "config",
"this": "can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo",
"you": ["can", "put", "anything", "here"]
}
},
"interactive": true
}
}
```
## API access to authentication information for a client {#self-subject-review}
If your cluster has the API enabled, you can use the `SelfSubjectReview` API to find out
how your Kubernetes cluster maps your authentication information to identify you as a client.
This works whether you are authenticating as a user (typically representing
a real person) or as a ServiceAccount.
`SelfSubjectReview` objects do not have any configurable fields. On receiving a request,
the Kubernetes API server fills the status with the user attributes and returns it to the user.
Request example (the body would be a `SelfSubjectReview`):
```http
POST /apis/authentication.k8s.io/v1/selfsubjectreviews
```
```json
{
"apiVersion": "authentication.k8s.io/v1",
"kind": "SelfSubjectReview"
}
```
Response example:
```json
{
"apiVersion": "authentication.k8s.io/v1",
"kind": "SelfSubjectReview",
"status": {
"userInfo": {
"name": "jane.doe",
"uid": "b6c7cfd4-f166-11ec-8ea0-0242ac120002",
"groups": [
"viewers",
"editors",
"system:authenticated"
],
"extra": {
"provider_id": ["token.company.example"]
}
}
}
}
```
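One way to send this request yourself is through `kubectl proxy`, which handles authentication to
the API server for you; the local port below is arbitrary:

```bash
# Start a local, authenticated proxy to the API server
kubectl proxy --port=8080 &

# POST an empty SelfSubjectReview; the server fills in status.userInfo for the
# identity that the proxied request authenticates as
curl -X POST http://127.0.0.1:8080/apis/authentication.k8s.io/v1/selfsubjectreviews \
  -H "Content-Type: application/json" \
  -d '{"apiVersion": "authentication.k8s.io/v1", "kind": "SelfSubjectReview"}'
```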
For convenience, the `kubectl auth whoami` command exists. Executing this command will
produce output similar to the following (though the user attributes shown will differ):
* Simple output example
```
ATTRIBUTE VALUE
Username jane.doe
Groups [system:authenticated]
```
* Complex example including extra attributes
```
ATTRIBUTE VALUE
Username jane.doe
UID b79dbf30-0c6a-11ed-861d-0242ac120002
Groups [students teachers system:authenticated]
Extra: skills [reading learning]
Extra: subjects [math sports]
```
By providing an output flag (`-o json` or `-o yaml`), you can also print the JSON or YAML representation of the result:
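For example, assuming your kubectl release includes `kubectl auth whoami`, the following commands produce the JSON and YAML shown below:
```bash
# Print the SelfSubjectReview result as JSON
kubectl auth whoami -o json

# Print the SelfSubjectReview result as YAML
kubectl auth whoami -o yaml
```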
```json
{
"apiVersion": "authentication.k8s.io/v1",
"kind": "SelfSubjectReview",
"status": {
"userInfo": {
"username": "jane.doe",
"uid": "b79dbf30-0c6a-11ed-861d-0242ac120002",
"groups": [
"students",
"teachers",
"system:authenticated"
],
"extra": {
"skills": [
"reading",
"learning"
],
"subjects": [
"math",
"sports"
]
}
}
}
}
```
```yaml
apiVersion: authentication.k8s.io/v1
kind: SelfSubjectReview
status:
userInfo:
username: jane.doe
uid: b79dbf30-0c6a-11ed-861d-0242ac120002
groups:
- students
- teachers
- system:authenticated
extra:
skills:
- reading
- learning
subjects:
- math
- sports
```
This feature is extremely useful when a complicated authentication flow is used in a Kubernetes cluster,
for example, if you use [webhook token authentication](/docs/reference/access-authn-authz/authentication/#webhook-token-authentication)
or [authenticating proxy](/docs/reference/access-authn-authz/authentication/#authenticating-proxy).
The Kubernetes API server fills the `userInfo` after all authentication mechanisms are applied,
including [impersonation](/docs/reference/access-authn-authz/authentication/#user-impersonation).
If you, or an authentication proxy, make a SelfSubjectReview using impersonation,
you see the user details and properties for the user that was impersonated.
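For example, combining `kubectl auth whoami` with impersonation shows the impersonated identity; `jane.doe` here is a placeholder, and the command requires permission to impersonate that user:
```bash
# Asks the API server who it thinks you are while impersonating jane.doe
kubectl auth whoami --as jane.doe
```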
By default, all authenticated users can create `SelfSubjectReview` objects when the `APISelfSubjectReview`
feature is enabled. It is allowed by the `system:basic-user` cluster role.
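You can check whether your own credentials are allowed to make this request by asking the authorization layer directly:
```bash
# Returns "yes" if you may create SelfSubjectReview objects
kubectl auth can-i create selfsubjectreviews.authentication.k8s.io
```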
You can only make `SelfSubjectReview` requests if:
* the `APISelfSubjectReview`
  [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
  is enabled for your cluster (not needed for recent Kubernetes releases, but older
  Kubernetes versions might not offer this feature gate, or might default it to off)
* (if you are running a version of Kubernetes older than v1.28) the API server for your
  cluster has the `authentication.k8s.io/v1alpha1` or `authentication.k8s.io/v1beta1`
  API group enabled (you can check this as shown below)
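To see which authentication API versions your cluster serves (relevant for the version requirements above), list the API versions known to the API server:
```bash
# Lists the served authentication.k8s.io API versions, for example authentication.k8s.io/v1
kubectl api-versions | grep '^authentication.k8s.io/'
```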
## What's next
* Read the [client authentication reference (v1beta1)](/docs/reference/config-api/client-authentication.v1beta1/)
* Read the [client authentication reference (v1)](/docs/reference/config-api/client-authentication.v1/) | kubernetes reference | reviewers erictune lavalamp deads2k liggitt title Authenticating content type concept weight 10 overview This page provides an overview of authentication body Users in Kubernetes All Kubernetes clusters have two categories of users service accounts managed by Kubernetes and normal users It is assumed that a cluster independent service manages normal users in the following ways an administrator distributing private keys a user store like Keystone or Google Accounts a file with a list of usernames and passwords In this regard Kubernetes does not have objects which represent normal user accounts Normal users cannot be added to a cluster through an API call Even though a normal user cannot be added via an API call any user that presents a valid certificate signed by the cluster s certificate authority CA is considered authenticated In this configuration Kubernetes determines the username from the common name field in the subject of the cert e g CN bob From there the role based access control RBAC sub system would determine whether the user is authorized to perform a specific operation on a resource For more details refer to the normal users topic in certificate request docs reference access authn authz certificate signing requests normal user for more details about this In contrast service accounts are users managed by the Kubernetes API They are bound to specific namespaces and created automatically by the API server or manually through API calls Service accounts are tied to a set of credentials stored as Secrets which are mounted into pods allowing in cluster processes to talk to the Kubernetes API API requests are tied to either a normal user or a service account or are treated as anonymous requests anonymous requests This means every process inside or outside the cluster from a human user typing kubectl on a workstation to kubelets on nodes to members of the control plane must authenticate when making requests to the API server or be treated as an anonymous user Authentication strategies Kubernetes uses client certificates bearer tokens or an authenticating proxy to authenticate API requests through authentication plugins As HTTP requests are made to the API server plugins attempt to associate the following attributes with the request Username a string which identifies the end user Common values might be kube admin or jane example com UID a string which identifies the end user and attempts to be more consistent and unique than username Groups a set of strings each of which indicates the user s membership in a named logical collection of users Common values might be system masters or devops team Extra fields a map of strings to list of strings which holds additional information authorizers may find useful All values are opaque to the authentication system and only hold significance when interpreted by an authorizer docs reference access authn authz authorization You can enable multiple authentication methods at once You should usually use at least two methods service account tokens for service accounts at least one other method for user authentication When multiple authenticator modules are enabled the first module to successfully authenticate the request short circuits evaluation The API server does not guarantee the order authenticators run in The system authenticated group is included in the list of groups for all authenticated users Integrations with other 
authentication protocols LDAP SAML Kerberos alternate x509 schemes etc can be accomplished using an authenticating proxy authenticating proxy or the authentication webhook webhook token authentication X509 client certificates Client certificate authentication is enabled by passing the client ca file SOMEFILE option to API server The referenced file must contain one or more certificate authorities to use to validate client certificates presented to the API server If a client certificate is presented and verified the common name of the subject is used as the user name for the request As of Kubernetes 1 4 client certificates can also indicate a user s group memberships using the certificate s organization fields To include multiple group memberships for a user include multiple organization fields in the certificate For example using the openssl command line tool to generate a certificate signing request bash openssl req new key jbeda pem out jbeda csr pem subj CN jbeda O app1 O app2 This would create a CSR for the username jbeda belonging to two groups app1 and app2 See Managing Certificates docs tasks administer cluster certificates for how to generate a client cert Static token file The API server reads bearer tokens from a file when given the token auth file SOMEFILE option on the command line Currently tokens last indefinitely and the token list cannot be changed without restarting the API server The token file is a csv file with a minimum of 3 columns token user name user uid followed by optional group names If you have more than one group the column must be double quoted e g conf token user uid group1 group2 group3 Putting a bearer token in a request When using bearer token authentication from an http client the API server expects an Authorization header with a value of Bearer token The bearer token must be a character sequence that can be put in an HTTP header value using no more than the encoding and quoting facilities of HTTP For example if the bearer token is 31ada4fd adec 460c 809a 9e56ceb75269 then it would appear in an HTTP header as shown below http Authorization Bearer 31ada4fd adec 460c 809a 9e56ceb75269 Bootstrap tokens To allow for streamlined bootstrapping for new clusters Kubernetes includes a dynamically managed Bearer token type called a Bootstrap Token These tokens are stored as Secrets in the kube system namespace where they can be dynamically managed and created Controller Manager contains a TokenCleaner controller that deletes bootstrap tokens as they expire The tokens are of the form a z0 9 6 a z0 9 16 The first component is a Token ID and the second component is the Token Secret You specify the token in an HTTP header as follows http Authorization Bearer 781292 db7bc3a58fc5f07e You must enable the Bootstrap Token Authenticator with the enable bootstrap token auth flag on the API Server You must enable the TokenCleaner controller via the controllers flag on the Controller Manager This is done with something like controllers tokencleaner kubeadm will do this for you if you are using it to bootstrap a cluster The authenticator authenticates as system bootstrap Token ID It is included in the system bootstrappers group The naming and groups are intentionally limited to discourage users from using these tokens past bootstrapping The user names and group can be used and are used by kubeadm to craft the appropriate authorization policies to support bootstrapping a cluster Please see Bootstrap Tokens docs reference access authn authz bootstrap tokens for in depth 
documentation on the Bootstrap Token authenticator and controllers along with how to manage these tokens with kubeadm Service account tokens A service account is an automatically enabled authenticator that uses signed bearer tokens to verify requests The plugin takes two optional flags service account key file File containing PEM encoded x509 RSA or ECDSA private or public keys used to verify ServiceAccount tokens The specified file can contain multiple keys and the flag can be specified multiple times with different files If unspecified tls private key file is used service account lookup If enabled tokens which are deleted from the API will be revoked Service accounts are usually created automatically by the API server and associated with pods running in the cluster through the ServiceAccount Admission Controller docs reference access authn authz admission controllers Bearer tokens are mounted into pods at well known locations and allow in cluster processes to talk to the API server Accounts may be explicitly associated with pods using the serviceAccountName field of a PodSpec serviceAccountName is usually omitted because this is done automatically yaml apiVersion apps v1 this apiVersion is relevant as of Kubernetes 1 9 kind Deployment metadata name nginx deployment namespace default spec replicas 3 template metadata spec serviceAccountName bob the bot containers name nginx image nginx 1 14 2 Service account bearer tokens are perfectly valid to use outside the cluster and can be used to create identities for long standing jobs that wish to talk to the Kubernetes API To manually create a service account use the kubectl create serviceaccount NAME command This creates a service account in the current namespace bash kubectl create serviceaccount jenkins none serviceaccount jenkins created Create an associated token bash kubectl create token jenkins none eyJhbGciOiJSUzI1NiIsImtp The created token is a signed JSON Web Token JWT The signed JWT can be used as a bearer token to authenticate as the given service account See above putting a bearer token in a request for how the token is included in a request Normally these tokens are mounted into pods for in cluster access to the API server but can be used from outside the cluster as well Service accounts authenticate with the username system serviceaccount NAMESPACE SERVICEACCOUNT and are assigned to the groups system serviceaccounts and system serviceaccounts NAMESPACE Because service account tokens can also be stored in Secret API objects any user with write access to Secrets can request a token and any user with read access to those Secrets can authenticate as the service account Be cautious when granting permissions to service accounts and read or write capabilities for Secrets OpenID Connect Tokens OpenID Connect https openid net connect is a flavor of OAuth2 supported by some OAuth2 providers notably Microsoft Entra ID Salesforce and Google The protocol s main extension of OAuth2 is an additional field returned with the access token called an ID Token https openid net specs openid connect core 1 0 html IDToken This token is a JSON Web Token JWT with well known fields such as a user s email signed by the server To identify the user the authenticator uses the id token not the access token from the OAuth2 token response https openid net specs openid connect core 1 0 html TokenResponse as a bearer token See above putting a bearer token in a request for how the token is included in a request sequenceDiagram participant user as User participant idp 
as Identity Provider participant kube as kubectl participant api as API Server user idp 1 Log in to IdP activate idp idp user 2 Provide access token br id token and refresh token deactivate idp activate user user kube 3 Call kubectl br with token being the id token br OR add tokens to kube config deactivate user activate kube kube api 4 Authorization Bearer deactivate kube activate api api api 5 Is JWT signature valid api api 6 Has the JWT expired iat exp api api 7 User authorized api kube 8 Authorized Perform br action and return result deactivate api activate kube kube x user 9 Return result deactivate kube 1 Log in to your identity provider 1 Your identity provider will provide you with an access token id token and a refresh token 1 When using kubectl use your id token with the token flag or add it directly to your kubeconfig 1 kubectl sends your id token in a header called Authorization to the API server 1 The API server will make sure the JWT signature is valid 1 Check to make sure the id token hasn t expired Perform claim and or user validation if CEL expressions are configured with AuthenticationConfiguration 1 Make sure the user is authorized 1 Once authorized the API server returns a response to kubectl 1 kubectl provides feedback to the user Since all of the data needed to validate who you are is in the id token Kubernetes doesn t need to phone home to the identity provider In a model where every request is stateless this provides a very scalable solution for authentication It does offer a few challenges 1 Kubernetes has no web interface to trigger the authentication process There is no browser or interface to collect credentials which is why you need to authenticate to your identity provider first 1 The id token can t be revoked it s like a certificate so it should be short lived only a few minutes so it can be very annoying to have to get a new token every few minutes 1 To authenticate to the Kubernetes dashboard you must use the kubectl proxy command or a reverse proxy that injects the id token Configuring the API Server Using flags To enable the plugin configure the following flags on the API server Parameter Description Example Required oidc issuer url URL of the provider that allows the API server to discover public signing keys Only URLs that use the https scheme are accepted This is typically the provider s discovery URL changed to have an empty path If the issuer s OIDC discovery URL is https accounts provider example well known openid configuration the value should be https accounts provider example Yes oidc client id A client id that all tokens must be issued for kubernetes Yes oidc username claim JWT claim to use as the user name By default sub which is expected to be a unique identifier of the end user Admins can choose other claims such as email or name depending on their provider However claims other than email will be prefixed with the issuer URL to prevent naming clashes with other plugins sub No oidc username prefix Prefix prepended to username claims to prevent clashes with existing names such as system users For example the value oidc will create usernames like oidc jane doe If this flag isn t provided and oidc username claim is a value other than email the prefix defaults to Issuer URL where Issuer URL is the value of oidc issuer url The value can be used to disable all prefixing oidc No oidc groups claim JWT claim to use as the user s group If the claim is present it must be an array of strings groups No oidc groups prefix Prefix prepended to group claims to 
prevent clashes with existing names such as system groups For example the value oidc will create group names like oidc engineering and oidc infra oidc No oidc required claim A key value pair that describes a required claim in the ID Token If set the claim is verified to be present in the ID Token with a matching value Repeat this flag to specify multiple claims claim value No oidc ca file The path to the certificate for the CA that signed your identity provider s web certificate Defaults to the host s root CAs etc kubernetes ssl kc ca pem No oidc signing algs The signing algorithms accepted Default is RS256 RS512 No Authentication configuration from a file using authentication configuration JWT Authenticator is an authenticator to authenticate Kubernetes users using JWT compliant tokens The authenticator will attempt to parse a raw ID token verify it s been signed by the configured issuer The public key to verify the signature is discovered from the issuer s public endpoint using OIDC discovery The minimum valid JWT payload must contain the following claims json iss https example com must match the issuer url aud my app at least one of the entries in issuer audiences must match the aud claim in presented JWTs exp 1234567890 token expiration as Unix time the number of seconds elapsed since January 1 1970 UTC username claim user this is the username claim configured in the claimMappings username claim or claimMappings username expression The configuration file approach allows you to configure multiple JWT authenticators each with a unique issuer url and issuer discoveryURL The configuration file even allows you to specify CEL docs reference using api cel expressions to map claims to user attributes and to validate claims and user information The API server also automatically reloads the authenticators when the configuration file is modified You can use apiserver authentication config controller automatic reload last timestamp seconds metric to monitor the last time the configuration was reloaded by the API server You must specify the path to the authentication configuration using the authentication config flag on the API server If you want to use command line flags instead of the configuration file those will continue to work as is To access the new capabilities like configuring multiple authenticators setting multiple audiences for an issuer switch to using the configuration file For Kubernetes v the structured authentication configuration file format is beta level and the mechanism for using that configuration is also beta Provided you didn t specifically disable the StructuredAuthenticationConfiguration feature gate docs reference command line tools reference feature gates for your cluster you can turn on structured authentication by specifying the authentication config command line argument to the kube apiserver An example of the structured authentication configuration file is shown below If you specify authentication config along with any of the oidc command line arguments this is a misconfiguration In this situation the API server reports an error and then immediately exits If you want to switch to using structured authentication configuration you have to remove the oidc command line arguments and use the configuration file instead yaml CAUTION this is an example configuration Do not use this for your own cluster apiVersion apiserver config k8s io v1beta1 kind AuthenticationConfiguration list of authenticators to authenticate Kubernetes users using JWT compliant tokens the maximum 
number of allowed authenticators is 64 jwt issuer url must be unique across all authenticators url must not conflict with issuer configured in service account issuer url https example com Same as oidc issuer url discoveryURL if specified overrides the URL used to fetch discovery information instead of using url well known openid configuration The exact value specified is used so well known openid configuration must be included in discoveryURL if needed The issuer field in the fetched discovery information must match the issuer url field in the AuthenticationConfiguration and will be used to validate the iss claim in the presented JWT This is for scenarios where the well known and jwks endpoints are hosted at a different location than the issuer such as locally in the cluster discoveryURL must be different from url if specified and must be unique across all authenticators discoveryURL https discovery example com well known openid configuration PEM encoded CA certificates used to validate the connection when fetching discovery information If not set the system verifier will be used Same value as the content of the file referenced by the oidc ca file flag certificateAuthority PEM encoded CA certificates audiences is the set of acceptable audiences the JWT must be issued to At least one of the entries must match the aud claim in presented JWTs audiences my app Same as oidc client id my other app this is required to be set to MatchAny when multiple audiences are specified audienceMatchPolicy MatchAny rules applied to validate token claims to authenticate users claimValidationRules Same as oidc required claim key value claim hd requiredValue example com Instead of claim and requiredValue you can use expression to validate the claim expression is a CEL expression that evaluates to a boolean all the expressions must evaluate to true for validation to succeed expression claims hd example com Message customizes the error message seen in the API server logs when the validation fails message the hd claim must be set to example com expression claims exp claims nbf 86400 message total token lifetime must not exceed 24 hours claimMappings username represents an option for the username attribute This is the only required attribute username Same as oidc username claim Mutually exclusive with username expression claim sub Same as oidc username prefix Mutually exclusive with username expression if username claim is set username prefix is required Explicitly set it to if no prefix is desired prefix Mutually exclusive with username claim and username prefix expression is a CEL expression that evaluates to a string 1 If username expression uses claims email then claims email verified must be used in username expression or extra valueExpression or claimValidationRules expression An example claim validation rule expression that matches the validation automatically applied when username claim is set to email is claims email verified orValue true 2 If the username asserted based on username expression is the empty string the authentication request will fail expression claims username external user groups represents an option for the groups attribute groups Same as oidc groups claim Mutually exclusive with groups expression claim sub Same as oidc groups prefix Mutually exclusive with groups expression if groups claim is set groups prefix is required Explicitly set it to if no prefix is desired prefix Mutually exclusive with groups claim and groups prefix expression is a CEL expression that evaluates to a string or a 
list of strings expression claims roles split uid represents an option for the uid attribute uid Mutually exclusive with uid expression claim sub Mutually exclusive with uid claim expression is a CEL expression that evaluates to a string expression claims sub extra attributes to be added to the UserInfo object Keys must be domain prefix path and must be unique extra key example com tenant valueExpression is a CEL expression that evaluates to a string or a list of strings valueExpression claims tenant validation rules applied to the final user object userValidationRules expression is a CEL expression that evaluates to a boolean all the expressions must evaluate to true for the user to be valid expression user username startsWith system Message customizes the error message seen in the API server logs when the validation fails message username cannot used reserved system prefix expression user groups all group group startsWith system message groups cannot used reserved system prefix Claim validation rule expression jwt claimValidationRules i expression represents the expression which will be evaluated by CEL CEL expressions have access to the contents of the token payload organized into claims CEL variable claims is a map of claim names as strings to claim values of any type User validation rule expression jwt userValidationRules i expression represents the expression which will be evaluated by CEL CEL expressions have access to the contents of userInfo organized into user CEL variable Refer to the UserInfo docs reference generated kubernetes api v userinfo v1 authentication k8s io API documentation for the schema of user Claim mapping expression jwt claimMappings username expression jwt claimMappings groups expression jwt claimMappings uid expression jwt claimMappings extra i valueExpression represents the expression which will be evaluated by CEL CEL expressions have access to the contents of the token payload organized into claims CEL variable claims is a map of claim names as strings to claim values of any type To learn more see the Documentation on CEL docs reference using api cel Here are examples of the AuthenticationConfiguration with different token payloads yaml apiVersion apiserver config k8s io v1beta1 kind AuthenticationConfiguration jwt issuer url https example com audiences my app claimMappings username expression claims username external user groups expression claims roles split uid expression claims sub extra key example com tenant valueExpression claims tenant userValidationRules expression user username startsWith system the expression will evaluate to true so validation will succeed message username cannot used reserved system prefix bash TOKEN eyJhbGciOiJSUzI1NiIsImtpZCI6ImY3dF9tOEROWmFTQk1oWGw5QXZTWGhBUC04Y0JmZ0JVbFVpTG5oQkgxdXMiLCJ0eXAiOiJKV1QifQ eyJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNzAzMjMyOTQ5LCJpYXQiOjE3MDExMDcyMzMsImlzcyI6Imh0dHBzOi8vZXhhbXBsZS5jb20iLCJqdGkiOiI3YzMzNzk0MjgwN2U3M2NhYTJjMzBjODY4YWMwY2U5MTBiY2UwMmRkY2JmZWJlOGMyM2I4YjVmMjdhZDYyODczIiwibmJmIjoxNzAxMTA3MjMzLCJyb2xlcyI6InVzZXIsYWRtaW4iLCJzdWIiOiJhdXRoIiwidGVuYW50IjoiNzJmOTg4YmYtODZmMS00MWFmLTkxYWItMmQ3Y2QwMTFkYjRhIiwidXNlcm5hbWUiOiJmb28ifQ TBWF2RkQHm4QQz85AYPcwLxSk VLvQW mNDHx7SEOSv9LVwcPYPuPajJpuQn9C gKq1R94QKSQ5F6UgHMILz8OfmPKmX 00wpwwNVGeevJ79ieX2V W56iNR5gJ i9nn6FYk5pwfVREB0l4HSlpTOmu80gbPWAXY5hLW0ZtcE1JTEEmefORHV2ge8e3jp1xGafNy6LdJWabYuKiw8d7Qga HxtKB t0kRMNzLRS7rka SfQg0dSYektuxhLbiDkqhmRffGlQKXGVzUsuvFw7IGM5ZWnZgEMDzCI357obHeM3tRqpn5WRjtB8oM7JgnCymaJi P3iCd88iu1xnzA where the token payload is json 
aud kubernetes exp 1703232949 iat 1701107233 iss https example com jti 7c337942807e73caa2c30c868ac0ce910bce02ddcbfebe8c23b8b5f27ad62873 nbf 1701107233 roles user admin sub auth tenant 72f988bf 86f1 41af 91ab 2d7cd011db4a username foo The token with the above AuthenticationConfiguration will produce the following UserInfo object and successfully authenticate the user json username foo external user uid auth groups user admin extra example com tenant 72f988bf 86f1 41af 91ab 2d7cd011db4a yaml apiVersion apiserver config k8s io v1beta1 kind AuthenticationConfiguration jwt issuer url https example com audiences my app claimValidationRules expression claims hd example com the token below does not have this claim so validation will fail message the hd claim must be set to example com claimMappings username expression claims username external user groups expression claims roles split uid expression claims sub extra key example com tenant valueExpression claims tenant userValidationRules expression user username startsWith system the expression will evaluate to true so validation will succeed message username cannot used reserved system prefix bash TOKEN eyJhbGciOiJSUzI1NiIsImtpZCI6ImY3dF9tOEROWmFTQk1oWGw5QXZTWGhBUC04Y0JmZ0JVbFVpTG5oQkgxdXMiLCJ0eXAiOiJKV1QifQ eyJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNzAzMjMyOTQ5LCJpYXQiOjE3MDExMDcyMzMsImlzcyI6Imh0dHBzOi8vZXhhbXBsZS5jb20iLCJqdGkiOiI3YzMzNzk0MjgwN2U3M2NhYTJjMzBjODY4YWMwY2U5MTBiY2UwMmRkY2JmZWJlOGMyM2I4YjVmMjdhZDYyODczIiwibmJmIjoxNzAxMTA3MjMzLCJyb2xlcyI6InVzZXIsYWRtaW4iLCJzdWIiOiJhdXRoIiwidGVuYW50IjoiNzJmOTg4YmYtODZmMS00MWFmLTkxYWItMmQ3Y2QwMTFkYjRhIiwidXNlcm5hbWUiOiJmb28ifQ TBWF2RkQHm4QQz85AYPcwLxSk VLvQW mNDHx7SEOSv9LVwcPYPuPajJpuQn9C gKq1R94QKSQ5F6UgHMILz8OfmPKmX 00wpwwNVGeevJ79ieX2V W56iNR5gJ i9nn6FYk5pwfVREB0l4HSlpTOmu80gbPWAXY5hLW0ZtcE1JTEEmefORHV2ge8e3jp1xGafNy6LdJWabYuKiw8d7Qga HxtKB t0kRMNzLRS7rka SfQg0dSYektuxhLbiDkqhmRffGlQKXGVzUsuvFw7IGM5ZWnZgEMDzCI357obHeM3tRqpn5WRjtB8oM7JgnCymaJi P3iCd88iu1xnzA where the token payload is json aud kubernetes exp 1703232949 iat 1701107233 iss https example com jti 7c337942807e73caa2c30c868ac0ce910bce02ddcbfebe8c23b8b5f27ad62873 nbf 1701107233 roles user admin sub auth tenant 72f988bf 86f1 41af 91ab 2d7cd011db4a username foo The token with the above AuthenticationConfiguration will fail to authenticate because the hd claim is not set to example com The API server will return 401 Unauthorized error yaml apiVersion apiserver config k8s io v1beta1 kind AuthenticationConfiguration jwt issuer url https example com audiences my app claimValidationRules expression claims hd example com message the hd claim must be set to example com claimMappings username expression system claims username this will prefix the username with system and will fail user validation groups expression claims roles split uid expression claims sub extra key example com tenant valueExpression claims tenant userValidationRules expression user username startsWith system the username will be system foo and expression will evaluate to false so validation will fail message username cannot used reserved system prefix bash TOKEN eyJhbGciOiJSUzI1NiIsImtpZCI6ImY3dF9tOEROWmFTQk1oWGw5QXZTWGhBUC04Y0JmZ0JVbFVpTG5oQkgxdXMiLCJ0eXAiOiJKV1QifQ 
eyJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNzAzMjMyOTQ5LCJoZCI6ImV4YW1wbGUuY29tIiwiaWF0IjoxNzAxMTEzMTAxLCJpc3MiOiJodHRwczovL2V4YW1wbGUuY29tIiwianRpIjoiYjViMDY1MjM3MmNkMjBlMzQ1YjZmZGZmY2RjMjE4MWY0YWZkNmYyNTlhYWI0YjdlMzU4ODEyMzdkMjkyMjBiYyIsIm5iZiI6MTcwMTExMzEwMSwicm9sZXMiOiJ1c2VyLGFkbWluIiwic3ViIjoiYXV0aCIsInRlbmFudCI6IjcyZjk4OGJmLTg2ZjEtNDFhZi05MWFiLTJkN2NkMDExZGI0YSIsInVzZXJuYW1lIjoiZm9vIn0 FgPJBYLobo9jnbHreooBlvpgEcSPWnKfX6dc0IvdlRB F0dCcgy91oCJeK aBk 8zH5AKUXoFTlInfLCkPivMOJqMECA1YTrMUwt IVqwb116AqihfByUYIIqzMjvUbthtbpIeHQm2fF0HbrUqa Q0uaYwgy8mD807h7sBcUMjNd215ff nFIHss 9zegH8GI1d9fiBf g6zjkR1j987EP748khpQh9IxPjMJbSgG uH5x80YFuqgEWwq aYJPQxXX6FatP96a2EAn7wfPpGlPRt0HcBOvq5pCnudgCgfVgiOJiLr 7robQu4T1bis0W75VPEvwWtgFcLnvcQx0JWg where the token payload is json aud kubernetes exp 1703232949 hd example com iat 1701113101 iss https example com jti b5b0652372cd20e345b6fdffcdc2181f4afd6f259aab4b7e35881237d29220bc nbf 1701113101 roles user admin sub auth tenant 72f988bf 86f1 41af 91ab 2d7cd011db4a username foo The token with the above AuthenticationConfiguration will produce the following UserInfo object json username system foo uid auth groups user admin extra example com tenant 72f988bf 86f1 41af 91ab 2d7cd011db4a which will fail user validation because the username starts with system The API server will return 401 Unauthorized error Limitations 1 Distributed claims do not work via CEL docs reference using api cel expressions 1 Egress selector configuration is not supported for calls to issuer url and issuer discoveryURL Kubernetes does not provide an OpenID Connect Identity Provider You can use an existing public OpenID Connect Identity Provider such as Google or others https connect2id com products nimbus oauth openid connect sdk openid connect providers Or you can run your own Identity Provider such as dex https dexidp io Keycloak https github com keycloak keycloak CloudFoundry UAA https github com cloudfoundry uaa or Tremolo Security s OpenUnison https openunison github io For an identity provider to work with Kubernetes it must 1 Support OpenID connect discovery https openid net specs openid connect discovery 1 0 html The public key to verify the signature is discovered from the issuer s public endpoint using OIDC discovery If you re using the authentication configuration file the identity provider doesn t need to publicly expose the discovery endpoint You can host the discovery endpoint at a different location than the issuer such as locally in the cluster and specify the issuer discoveryURL in the configuration file 1 Run in TLS with non obsolete ciphers 1 Have a CA signed certificate even if the CA is not a commercial CA or is self signed A note about requirement 3 above requiring a CA signed certificate If you deploy your own identity provider as opposed to one of the cloud providers like Google or Microsoft you MUST have your identity provider s web server certificate signed by a certificate with the CA flag set to TRUE even if it is self signed This is due to GoLang s TLS client implementation being very strict to the standards around certificate validation If you don t have a CA handy you can use the gencert script https github com dexidp dex blob master examples k8s gencert sh from the Dex team to create a simple CA and a signed certificate and key pair Or you can use this similar script https raw githubusercontent com TremoloSecurity openunison qs kubernetes master src main bash makessl sh that generates SHA256 certs with a longer life and larger key size Refer to setup instructions for 
specific systems UAA https docs cloudfoundry org concepts architecture uaa html Dex https dexidp io docs kubernetes OpenUnison https www tremolosecurity com orchestra k8s Using kubectl Option 1 OIDC Authenticator The first option is to use the kubectl oidc authenticator which sets the id token as a bearer token for all requests and refreshes the token once it expires After you ve logged into your provider use kubectl to add your id token refresh token client id and client secret to configure the plugin Providers that don t return an id token as part of their refresh token response aren t supported by this plugin and should use Option 2 below bash kubectl config set credentials USER NAME auth provider oidc auth provider arg idp issuer url issuer url auth provider arg client id your client id auth provider arg client secret your client secret auth provider arg refresh token your refresh token auth provider arg idp certificate authority path to your ca certificate auth provider arg id token your id token As an example running the below command after authenticating to your identity provider bash kubectl config set credentials mmosley auth provider oidc auth provider arg idp issuer url https oidcidp tremolo lan 8443 auth idp OidcIdP auth provider arg client id kubernetes auth provider arg client secret 1db158f6 177d 4d9c 8a8b d36869918ec5 auth provider arg refresh token q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW VlFUeVRGcluyVF5JT4 haZmPsluFoFu5XkpXk5BXqHega4GAXlF ma vmYpFcHe5eZR slBFpZKtQA auth provider arg idp certificate authority root ca pem auth provider arg id token eyJraWQiOiJDTj1vaWRjaWRwLnRyZW1vbG8ubGFuLCBPVT1EZW1vLCBPPVRybWVvbG8gU2VjdXJpdHksIEw9QXJsaW5ndG9uLCBTVD1WaXJnaW5pYSwgQz1VUy1DTj1rdWJlLWNhLTEyMDIxNDc5MjEwMzYwNzMyMTUyIiwiYWxnIjoiUlMyNTYifQ eyJpc3MiOiJodHRwczovL29pZGNpZHAudHJlbW9sby5sYW46ODQ0My9hdXRoL2lkcC9PaWRjSWRQIiwiYXVkIjoia3ViZXJuZXRlcyIsImV4cCI6MTQ4MzU0OTUxMSwianRpIjoiMm96US15TXdFcHV4WDlHZUhQdy1hZyIsImlhdCI6MTQ4MzU0OTQ1MSwibmJmIjoxNDgzNTQ5MzMxLCJzdWIiOiI0YWViMzdiYS1iNjQ1LTQ4ZmQtYWIzMC0xYTAxZWU0MWUyMTgifQ w6p4J 6qQ1HzTG9nrEOrubxIMb9K5hzcMPxc9IxPx2K4xO9l oFiUw93daH3m5pluP6K7eOE6txBuRVfEcpJSwlelsOsW8gb8VJcnzMS9EnZpeA0tW p mnkFc3VcfyXuhe5R3G7aa5d8uHv70yJ9Y3 UhjiN9EhpMdfPAoEB9fYKKkJRzF7utTTIPGrSaSU6d2pcpfYKaxIwePzEkT4DfcQthoZdy9ucNvvLoi1DIC UocFD8HLs8LYKEqSxQvOcvnThbObJ9af71EwmuE21fO5KzMW20KtAeget1gnldOosPtz1G5EwvaQ401 RPQzPGMVBld0 zMCAwZttJ4knw Which would produce the below configuration yaml users name mmosley user auth provider config client id kubernetes client secret 1db158f6 177d 4d9c 8a8b d36869918ec5 id token eyJraWQiOiJDTj1vaWRjaWRwLnRyZW1vbG8ubGFuLCBPVT1EZW1vLCBPPVRybWVvbG8gU2VjdXJpdHksIEw9QXJsaW5ndG9uLCBTVD1WaXJnaW5pYSwgQz1VUy1DTj1rdWJlLWNhLTEyMDIxNDc5MjEwMzYwNzMyMTUyIiwiYWxnIjoiUlMyNTYifQ eyJpc3MiOiJodHRwczovL29pZGNpZHAudHJlbW9sby5sYW46ODQ0My9hdXRoL2lkcC9PaWRjSWRQIiwiYXVkIjoia3ViZXJuZXRlcyIsImV4cCI6MTQ4MzU0OTUxMSwianRpIjoiMm96US15TXdFcHV4WDlHZUhQdy1hZyIsImlhdCI6MTQ4MzU0OTQ1MSwibmJmIjoxNDgzNTQ5MzMxLCJzdWIiOiI0YWViMzdiYS1iNjQ1LTQ4ZmQtYWIzMC0xYTAxZWU0MWUyMTgifQ w6p4J 6qQ1HzTG9nrEOrubxIMb9K5hzcMPxc9IxPx2K4xO9l oFiUw93daH3m5pluP6K7eOE6txBuRVfEcpJSwlelsOsW8gb8VJcnzMS9EnZpeA0tW p mnkFc3VcfyXuhe5R3G7aa5d8uHv70yJ9Y3 UhjiN9EhpMdfPAoEB9fYKKkJRzF7utTTIPGrSaSU6d2pcpfYKaxIwePzEkT4DfcQthoZdy9ucNvvLoi1DIC UocFD8HLs8LYKEqSxQvOcvnThbObJ9af71EwmuE21fO5KzMW20KtAeget1gnldOosPtz1G5EwvaQ401 RPQzPGMVBld0 zMCAwZttJ4knw idp certificate authority root ca pem idp issuer url https oidcidp tremolo lan 8443 auth idp OidcIdP refresh token 
q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW VlFUeVRGcluyVF5JT4 haZmPsluFoFu5XkpXk5BXq name oidc Once your id token expires kubectl will attempt to refresh your id token using your refresh token and client secret storing the new values for the refresh token and id token in your kube config Option 2 Use the token Option The kubectl command lets you pass in a token using the token option Copy and paste the id token into this option bash kubectl token eyJhbGciOiJSUzI1NiJ9 eyJpc3MiOiJodHRwczovL21sYi50cmVtb2xvLmxhbjo4MDQzL2F1dGgvaWRwL29pZGMiLCJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNDc0NTk2NjY5LCJqdGkiOiI2RDUzNXoxUEpFNjJOR3QxaWVyYm9RIiwiaWF0IjoxNDc0NTk2MzY5LCJuYmYiOjE0NzQ1OTYyNDksInN1YiI6Im13aW5kdSIsInVzZXJfcm9sZSI6WyJ1c2VycyIsIm5ldy1uYW1lc3BhY2Utdmlld2VyIl0sImVtYWlsIjoibXdpbmR1QG5vbW9yZWplZGkuY29tIn0 f2As579n9VNoaKzoF dOQGmXkFKf1FMyNV0 va B63jn n9LGSCca 6IVMP8pO Zb4KvRqGyTP0r3HkHxYy5c81AnIh8ijarruczl TK yF5akjSTHFZD 0gRzlevBDiH8Q79NAr ky0P4iIXS8lY9Vnjch5MF74Zx0c3alKJHJUnnpjIACByfF2SCaYzbWFMUNat K1PaUk5 ujMBG7yYnr95xD 63n8CO8teGUAAEMx6zRjzfhnhbzX ajwZLGwGUBT4WqjMs70 6a7 8gZmLZb2az1cZynkFRj2BaCkVT3A2RrjeEwZEtGXlMqKJ1 I2ulrOVsYx01 yD35 rw get nodes Webhook Token Authentication Webhook authentication is a hook for verifying bearer tokens authentication token webhook config file a configuration file describing how to access the remote webhook service authentication token webhook cache ttl how long to cache authentication decisions Defaults to two minutes authentication token webhook version determines whether to use authentication k8s io v1beta1 or authentication k8s io v1 TokenReview objects to send receive information from the webhook Defaults to v1beta1 The configuration file uses the kubeconfig docs concepts configuration organize cluster access kubeconfig file format Within the file clusters refers to the remote service and users refers to the API server webhook An example would be yaml Kubernetes API version apiVersion v1 kind of the API object kind Config clusters refers to the remote service clusters name name of remote authn service cluster certificate authority path to ca pem CA for verifying the remote service server https authn example com authenticate URL of remote service to query https recommended for production users refers to the API server s webhook configuration users name name of api server user client certificate path to cert pem cert for the webhook plugin to use client key path to key pem key matching the cert kubeconfig files require a context Provide one for the API server current context webhook contexts context cluster name of remote authn service user name of api server name webhook When a client attempts to authenticate with the API server using a bearer token as discussed above putting a bearer token in a request the authentication webhook POSTs a JSON serialized TokenReview object containing the token to the remote service Note that webhook API objects are subject to the same versioning compatibility rules docs concepts overview kubernetes api as other Kubernetes API objects Implementers should check the apiVersion field of the request to ensure correct deserialization and must respond with a TokenReview object of the same version as the request The Kubernetes API server defaults to sending authentication k8s io v1beta1 token reviews for backwards compatibility To opt into receiving authentication k8s io v1 token reviews the API server must be started with authentication token webhook version v1 yaml apiVersion authentication k8s 
io v1 kind TokenReview spec Opaque bearer token sent to the API server token 014fbff9a07c Optional list of the audience identifiers for the server the token was presented to Audience aware token authenticators for example OIDC token authenticators should verify the token was intended for at least one of the audiences in this list and return the intersection of this list and the valid audiences for the token in the response status This ensures the token is valid to authenticate to the server it was presented to If no audiences are provided the token should be validated to authenticate to the Kubernetes API server audiences https myserver example com https myserver internal example com yaml apiVersion authentication k8s io v1beta1 kind TokenReview spec Opaque bearer token sent to the API server token 014fbff9a07c Optional list of the audience identifiers for the server the token was presented to Audience aware token authenticators for example OIDC token authenticators should verify the token was intended for at least one of the audiences in this list and return the intersection of this list and the valid audiences for the token in the response status This ensures the token is valid to authenticate to the server it was presented to If no audiences are provided the token should be validated to authenticate to the Kubernetes API server audiences https myserver example com https myserver internal example com The remote service is expected to fill the status field of the request to indicate the success of the login The response body s spec field is ignored and may be omitted The remote service must return a response using the same TokenReview API version that it received A successful validation of the bearer token would return yaml apiVersion authentication k8s io v1 kind TokenReview status authenticated true user Required username janedoe example com Optional uid 42 Optional group memberships groups developers qa Optional additional information provided by the authenticator This should not contain confidential data as it can be recorded in logs or API objects and is made available to admission webhooks extra extrafield1 extravalue1 extravalue2 Optional list audience aware token authenticators can return containing the audiences from the spec audiences list for which the provided token was valid If this is omitted the token is considered to be valid to authenticate to the Kubernetes API server audiences https myserver example com yaml apiVersion authentication k8s io v1beta1 kind TokenReview status authenticated true user Required username janedoe example com Optional uid 42 Optional group memberships groups developers qa Optional additional information provided by the authenticator This should not contain confidential data as it can be recorded in logs or API objects and is made available to admission webhooks extra extrafield1 extravalue1 extravalue2 Optional list audience aware token authenticators can return containing the audiences from the spec audiences list for which the provided token was valid If this is omitted the token is considered to be valid to authenticate to the Kubernetes API server audiences https myserver example com An unsuccessful request would return yaml apiVersion authentication k8s io v1 kind TokenReview status authenticated false Optionally include details about why authentication failed If no error is provided the API will return a generic Unauthorized message The error field is ignored when authenticated true error Credentials are expired yaml apiVersion 
authentication k8s io v1beta1 kind TokenReview status authenticated false Optionally include details about why authentication failed If no error is provided the API will return a generic Unauthorized message The error field is ignored when authenticated true error Credentials are expired Authenticating Proxy The API server can be configured to identify users from request header values such as X Remote User It is designed for use in combination with an authenticating proxy which sets the request header value requestheader username headers Required case insensitive Header names to check in order for the user identity The first header containing a value is used as the username requestheader group headers 1 6 Optional case insensitive X Remote Group is suggested Header names to check in order for the user s groups All values in all specified headers are used as group names requestheader extra headers prefix 1 6 Optional case insensitive X Remote Extra is suggested Header prefixes to look for to determine extra information about the user typically used by the configured authorization plugin Any headers beginning with any of the specified prefixes have the prefix removed The remainder of the header name is lowercased and percent decoded https tools ietf org html rfc3986 section 2 1 and becomes the extra key and the header value is the extra value Prior to 1 11 3 and 1 10 7 1 9 11 the extra key could only contain characters which were legal in HTTP header labels https tools ietf org html rfc7230 section 3 2 6 For example with this configuration requestheader username headers X Remote User requestheader group headers X Remote Group requestheader extra headers prefix X Remote Extra this request http GET HTTP 1 1 X Remote User fido X Remote Group dogs X Remote Group dachshunds X Remote Extra Acme com 2Fproject some project X Remote Extra Scopes openid X Remote Extra Scopes profile would result in this user info yaml name fido groups dogs dachshunds extra acme com project some project scopes openid profile In order to prevent header spoofing the authenticating proxy is required to present a valid client certificate to the API server for validation against the specified CA before the request headers are checked WARNING do not reuse a CA that is used in a different context unless you understand the risks and the mechanisms to protect the CA s usage requestheader client ca file Required PEM encoded certificate bundle A valid client certificate must be presented and validated against the certificate authorities in the specified file before the request headers are checked for user names requestheader allowed names Optional List of Common Name values CNs If set a valid client certificate with a CN in the specified list must be presented before the request headers are checked for user names If empty any CN is allowed Anonymous requests When enabled requests that are not rejected by other configured authentication methods are treated as anonymous requests and given a username of system anonymous and a group of system unauthenticated For example on a server with token authentication configured and anonymous access enabled a request providing an invalid bearer token would receive a 401 Unauthorized error A request providing no bearer token would be treated as an anonymous request In 1 5 1 1 5 x anonymous access is disabled by default and can be enabled by passing the anonymous auth true option to the API server In 1 6 anonymous access is enabled by default if an authorization mode other than AlwaysAllow is 
used and can be disabled by passing the anonymous auth false option to the API server Starting in 1 6 the ABAC and RBAC authorizers require explicit authorization of the system anonymous user or the system unauthenticated group so legacy policy rules that grant access to the user or group do not include anonymous users Anonymous Authenticator Configuration The AuthenticationConfiguration can be used to configure the anonymous authenticator To enable configuring anonymous auth via the config file you need enable the AnonymousAuthConfigurableEndpoints feature gate When this feature gate is enabled you cannot set the anonymous auth flag The main advantage of configuring anonymous authenticator using the authentication configuration file is that in addition to enabling and disabling anonymous authentication you can also configure which endpoints support anonymous authentication A sample authentication configuration file is below yaml CAUTION this is an example configuration Do not use this for your own cluster apiVersion apiserver config k8s io v1beta1 kind AuthenticationConfiguration anonymous enabled true conditions path livez path readyz path healthz In the configuration above only the livez readyz and healthz endpoints are reachable by anonymous requests Any other endpoints will not be reachable even if it is allowed by RBAC configuration User impersonation A user can act as another user through impersonation headers These let requests manually override the user info a request authenticates as For example an admin could use this feature to debug an authorization policy by temporarily impersonating another user and seeing if a request was denied Impersonation requests first authenticate as the requesting user then switch to the impersonated user info A user makes an API call with their credentials and impersonation headers API server authenticates the user API server ensures the authenticated users have impersonation privileges Request user info is replaced with impersonation values Request is evaluated authorization acts on impersonated user info The following HTTP headers can be used to performing an impersonation request Impersonate User The username to act as Impersonate Group A group name to act as Can be provided multiple times to set multiple groups Optional Requires Impersonate User Impersonate Extra extra name A dynamic header used to associate extra fields with the user Optional Requires Impersonate User In order to be preserved consistently extra name must be lower case and any characters which aren t legal in HTTP header labels https tools ietf org html rfc7230 section 3 2 6 MUST be utf8 and percent encoded https tools ietf org html rfc3986 section 2 1 Impersonate Uid A unique identifier that represents the user being impersonated Optional Requires Impersonate User Kubernetes does not impose any format requirements on this string Prior to 1 11 3 and 1 10 7 1 9 11 extra name could only contain characters which were legal in HTTP header labels https tools ietf org html rfc7230 section 3 2 6 Impersonate Uid is only available in versions 1 22 0 and higher An example of the impersonation headers used when impersonating a user with groups http Impersonate User jane doe example com Impersonate Group developers Impersonate Group admins An example of the impersonation headers used when impersonating a user with a UID and extra fields http Impersonate User jane doe example com Impersonate Extra dn cn jane ou engineers dc example dc com Impersonate Extra acme com 2Fproject some project 
Impersonate Extra scopes view Impersonate Extra scopes development Impersonate Uid 06f6ce97 e2c5 4ab8 7ba5 7654dd08d52b When using kubectl set the as flag to configure the Impersonate User header set the as group flag to configure the Impersonate Group header bash kubectl drain mynode none Error from server Forbidden User clark cannot get nodes at the cluster scope get nodes mynode Set the as and as group flag bash kubectl drain mynode as superman as group system masters none node mynode cordoned node mynode drained kubectl cannot impersonate extra fields or UIDs To impersonate a user group user identifier UID or extra fields the impersonating user must have the ability to perform the impersonate verb on the kind of attribute being impersonated user group uid etc For clusters that enable the RBAC authorization plugin the following ClusterRole encompasses the rules needed to set user and group impersonation headers yaml apiVersion rbac authorization k8s io v1 kind ClusterRole metadata name impersonator rules apiGroups resources users groups serviceaccounts verbs impersonate For impersonation extra fields and impersonated UIDs are both under the authentication k8s io apiGroup Extra fields are evaluated as sub resources of the resource userextras To allow a user to use impersonation headers for the extra field scopes and for UIDs a user should be granted the following role yaml apiVersion rbac authorization k8s io v1 kind ClusterRole metadata name scopes and uid impersonator rules Can set Impersonate Extra scopes header and the Impersonate Uid header apiGroups authentication k8s io resources userextras scopes uids verbs impersonate The values of impersonation headers can also be restricted by limiting the set of resourceNames a resource can take yaml apiVersion rbac authorization k8s io v1 kind ClusterRole metadata name limited impersonator rules Can impersonate the user jane doe example com apiGroups resources users verbs impersonate resourceNames jane doe example com Can impersonate the groups developers and admins apiGroups resources groups verbs impersonate resourceNames developers admins Can impersonate the extras field scopes with the values view and development apiGroups authentication k8s io resources userextras scopes verbs impersonate resourceNames view development Can impersonate the uid 06f6ce97 e2c5 4ab8 7ba5 7654dd08d52b apiGroups authentication k8s io resources uids verbs impersonate resourceNames 06f6ce97 e2c5 4ab8 7ba5 7654dd08d52b Impersonating a user or group allows you to perform any action as if you were that user or group for that reason impersonation is not namespace scoped If you want to allow impersonation using Kubernetes RBAC this requires using a ClusterRole and a ClusterRoleBinding not a Role and RoleBinding client go credential plugins k8s io client go and tools using it such as kubectl and kubelet are able to execute an external command to receive user credentials This feature is intended for client side integrations with authentication protocols not natively supported by k8s io client go LDAP Kerberos OAuth2 SAML etc The plugin implements the protocol specific logic then returns opaque credentials to use Almost all credential plugin use cases require a server side component with support for the webhook token authenticator webhook token authentication to interpret the credential format produced by the client plugin Earlier versions of kubectl included built in support for authenticating to AKS and GKE but this is no longer present Example use case In a 
---
reviewers:
- erictune
- lavalamp
- deads2k
- liggitt
title: Using ABAC Authorization
content_type: concept
weight: 39
---
<!-- overview -->
Attribute-based access control (ABAC) defines an access control paradigm whereby access rights are granted
to users through the use of policies which combine attributes together.
<!-- body -->
## Policy File Format
To enable `ABAC` mode, specify `--authorization-policy-file=SOME_FILENAME` and `--authorization-mode=ABAC`
on startup.
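For illustration only (the policy file path here is a placeholder), the relevant flags on a kube-apiserver invocation might look like this:
```shell
# Sketch: kube-apiserver flags enabling ABAC; all other required flags are omitted
kube-apiserver \
  --authorization-mode=ABAC \
  --authorization-policy-file=/etc/kubernetes/abac-policy.jsonl
```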
The file format is [one JSON object per line](https://jsonlines.org/). There
should be no enclosing list or map, only one map per line.
Each line is a "policy object", where each such object is a map with the following
properties:
- Versioning properties:
- `apiVersion`, type string; valid values are "abac.authorization.kubernetes.io/v1beta1". Allows versioning
and conversion of the policy format.
- `kind`, type string; valid values are "Policy". Allows versioning and conversion of the policy format.
- `spec` property set to a map with the following properties:
- Subject-matching properties:
- `user`, type string; the user-string from `--token-auth-file`. If you specify `user`, it must match the
username of the authenticated user.
- `group`, type string; if you specify `group`, it must match one of the groups of the authenticated user.
`system:authenticated` matches all authenticated requests. `system:unauthenticated` matches all
unauthenticated requests.
- Resource-matching properties:
- `apiGroup`, type string; an API group.
- Ex: `apps`, `networking.k8s.io`
- Wildcard: `*` matches all API groups.
- `namespace`, type string; a namespace.
- Ex: `kube-system`
- Wildcard: `*` matches all resource requests.
- `resource`, type string; a resource type
- Ex: `pods`, `deployments`
- Wildcard: `*` matches all resource requests.
- Non-resource-matching properties:
- `nonResourcePath`, type string; non-resource request paths.
- Ex: `/version` or `/apis`
- Wildcard:
- `*` matches all non-resource requests.
- `/foo/*` matches all subpaths of `/foo/`.
- `readonly`, type boolean, when true, means that the Resource-matching policy only applies to get, list,
and watch operations, and the Non-resource-matching policy only applies to the get operation.
An unset property is the same as a property set to the zero value for its type
(e.g. empty string, 0, false). However, unset should be preferred for
readability.
In the future, policies may be expressed in a JSON format, and managed via a
REST interface.
## Authorization Algorithm
A request has attributes which correspond to the properties of a policy object.
When a request is received, the attributes are determined. Unknown attributes
are set to the zero value of their type (e.g. empty string, 0, false).
A property set to `"*"` will match any value of the corresponding attribute.
The tuple of attributes is checked for a match against every policy in the
policy file. If at least one line matches the request attributes, then the
request is authorized (but may fail later validation).
To permit any authenticated user to do something, write a policy with the
group property set to `"system:authenticated"`.
To permit any unauthenticated user to do something, write a policy with the
group property set to `"system:unauthenticated"`.
To permit a user to do anything, write a policy with the apiGroup, namespace,
resource, and nonResourcePath properties set to `"*"`.
## Kubectl
Kubectl uses the `/api` and `/apis` endpoints of apiserver to discover
served resource types, and validates objects sent to the API by create/update
operations using schema information located at `/openapi/v2`.
When using ABAC authorization, those special resources have to be explicitly
exposed via the `nonResourcePath` property in a policy (see [examples](#examples) below):
* `/api`, `/api/*`, `/apis`, and `/apis/*` for API version negotiation.
* `/version` for retrieving the server version via `kubectl version`.
* `/swaggerapi/*` for create/update operations.
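For example, instead of opening all non-resource paths as in the examples below, you could expose just these endpoints to a single (hypothetical) user with read-only policies such as:
```json
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "readonly": true, "nonResourcePath": "/api"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "readonly": true, "nonResourcePath": "/api/*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "readonly": true, "nonResourcePath": "/apis"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "readonly": true, "nonResourcePath": "/apis/*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "readonly": true, "nonResourcePath": "/version"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "readonly": true, "nonResourcePath": "/swaggerapi/*"}}
```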
To inspect the HTTP calls involved in a specific kubectl operation you can turn
up the verbosity:
```shell
kubectl --v=8 version
```
## Examples
1. Alice can do anything to all resources:
```json
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "namespace": "*", "resource": "*", "apiGroup": "*"}}
```
1. The kubelet can read any pods:
```json
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "kubelet", "namespace": "*", "resource": "pods", "readonly": true}}
```
1. The kubelet can read and write events:
```json
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "kubelet", "namespace": "*", "resource": "events"}}
```
1. Bob can just read pods in namespace "projectCaribou":
```json
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "bob", "namespace": "projectCaribou", "resource": "pods", "readonly": true}}
```
1. Anyone can make read-only requests to all non-resource paths:
```json
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"group": "system:authenticated", "readonly": true, "nonResourcePath": "*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"group": "system:unauthenticated", "readonly": true, "nonResourcePath": "*"}}
```
[Complete file example](https://releases.k8s.io/v/pkg/auth/authorizer/abac/example_policy_file.jsonl)
## A quick note on service accounts
Every service account has a corresponding ABAC username, and that service account's username is generated
according to the naming convention:
```shell
system:serviceaccount:<namespace>:<serviceaccountname>
```
Creating a new namespace leads to the creation of a new service account in the following format:
```shell
system:serviceaccount:<namespace>:default
```
For example, if you wanted to grant the default service account (in the `kube-system` namespace) full
privilege to the API using ABAC, you would add this line to your policy file:
```json
{"apiVersion":"abac.authorization.kubernetes.io/v1beta1","kind":"Policy","spec":{"user":"system:serviceaccount:kube-system:default","namespace":"*","resource":"*","apiGroup":"*"}}
```
The apiserver will need to be restarted to pick up the new policy lines.
---
reviewers:
- jbeda
title: Authenticating with Bootstrap Tokens
content_type: concept
weight: 20
---
<!-- overview -->
Bootstrap tokens are simple bearer tokens meant to be used when
creating new clusters or joining new nodes to an existing cluster.
They were built to support [kubeadm](/docs/reference/setup-tools/kubeadm/), but can be used in other contexts
for users who wish to start clusters without `kubeadm`. They are also built to
work, via RBAC policy, with the
[kubelet TLS Bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) system.
<!-- body -->
## Bootstrap Tokens Overview
Bootstrap Tokens are defined with a specific type
(`bootstrap.kubernetes.io/token`) of Secret that lives in the `kube-system`
namespace. These Secrets are then read by the Bootstrap Authenticator in the
API Server. Expired tokens are removed with the TokenCleaner controller in the
Controller Manager. The tokens are also used to create a signature for a
specific ConfigMap used in a "discovery" process through a BootstrapSigner
controller.
## Token Format
Bootstrap Tokens take the form of `abcdef.0123456789abcdef`.
More formally, they must match the regular expression `[a-z0-9]{6}\.[a-z0-9]{16}`.
The first part of the token is the "Token ID" and is considered public
information. It is used when referring to a token without leaking the secret
part used for authentication. The second part is the "Token Secret" and should
only be shared with trusted parties.
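For example, you can quickly check whether a candidate token matches the expected format:
```shell
# Prints the token only if it matches the bootstrap token format
echo "abcdef.0123456789abcdef" | grep -E '^[a-z0-9]{6}\.[a-z0-9]{16}$'
```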
## Enabling Bootstrap Token Authentication
The Bootstrap Token authenticator can be enabled using the following flag on the
API server:
```
--enable-bootstrap-token-auth
```
When enabled, bootstrapping tokens can be used as bearer token credentials to
authenticate requests against the API server.
```http
Authorization: Bearer 07401b.f395accd246ae52d
```
Tokens authenticate as the username `system:bootstrap:<token id>` and are members
of the group `system:bootstrappers`.
Additional groups may be specified in the token's Secret.
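As a sketch (the certificate authority path and server address are placeholders, and what the request is allowed to do depends on the authorization rules bound to these users and groups), a raw request could present the token like this:
```shell
curl --cacert /etc/kubernetes/pki/ca.crt \
  -H "Authorization: Bearer 07401b.f395accd246ae52d" \
  https://192.168.0.10:6443/version
```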
Expired tokens can be deleted automatically by enabling the `tokencleaner`
controller on the controller manager.
```
--controllers=*,tokencleaner
```
## Bootstrap Token Secret Format
Each valid token is backed by a secret in the `kube-system` namespace. You can
find the full design doc
[here](https://git.k8s.io/design-proposals-archive/cluster-lifecycle/bootstrap-discovery.md).
Here is what the secret looks like.
```yaml
apiVersion: v1
kind: Secret
metadata:
# Name MUST be of form "bootstrap-token-<token id>"
name: bootstrap-token-07401b
namespace: kube-system
# Type MUST be 'bootstrap.kubernetes.io/token'
type: bootstrap.kubernetes.io/token
stringData:
# Human readable description. Optional.
description: "The default bootstrap token generated by 'kubeadm init'."
# Token ID and secret. Required.
token-id: 07401b
token-secret: f395accd246ae52d
# Expiration. Optional.
expiration: 2017-03-10T03:22:11Z
# Allowed usages.
usage-bootstrap-authentication: "true"
usage-bootstrap-signing: "true"
# Extra groups to authenticate the token as. Must start with "system:bootstrappers:"
auth-extra-groups: system:bootstrappers:worker,system:bootstrappers:ingress
```
The type of the secret must be `bootstrap.kubernetes.io/token` and the name must
be `bootstrap-token-<token id>`. It must also exist in the `kube-system` namespace.
The `usage-bootstrap-*` members indicate what this secret is intended to be used for.
A value must be set to `true` to be enabled.
* `usage-bootstrap-authentication` indicates that the token can be used to
authenticate to the API server as a bearer token.
* `usage-bootstrap-signing` indicates that the token may be used to sign the
`cluster-info` ConfigMap as described below.
The `expiration` field controls the expiry of the token. Expired tokens are
rejected when used for authentication and ignored during ConfigMap signing.
The expiry value is encoded as an absolute UTC time using [RFC3339](https://datatracker.ietf.org/doc/html/rfc3339). Enable the
`tokencleaner` controller to automatically delete expired tokens.
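If you are not using kubeadm, one way to create a Secret like the example above is with `kubectl`; this is a sketch that reuses the illustrative values from this page:
```shell
kubectl -n kube-system create secret generic bootstrap-token-07401b \
  --type=bootstrap.kubernetes.io/token \
  --from-literal=token-id=07401b \
  --from-literal=token-secret=f395accd246ae52d \
  --from-literal=usage-bootstrap-authentication=true \
  --from-literal=usage-bootstrap-signing=true \
  --from-literal=expiration=2017-03-10T03:22:11Z
```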
## Token Management with kubeadm
You can use the `kubeadm` tool to manage tokens on a running cluster. See the
[kubeadm token docs](/docs/reference/setup-tools/kubeadm/kubeadm-token/) for details.
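For example:
```shell
# Create a new bootstrap token; kubeadm generates and prints it
kubeadm token create
# List existing tokens with their expiration and allowed usages
kubeadm token list
```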
## ConfigMap Signing
In addition to authentication, the tokens can be used to sign a ConfigMap.
This is used early in a cluster bootstrap process before the client trusts the API
server. The signed ConfigMap can be authenticated by the shared token.
Enable ConfigMap signing by enabling the `bootstrapsigner` controller on the
Controller Manager.
```
--controllers=*,bootstrapsigner
```
The ConfigMap that is signed is `cluster-info` in the `kube-public` namespace.
The typical flow is that a client reads this ConfigMap while unauthenticated and
ignoring TLS errors. It then validates the payload of the ConfigMap by looking
at a signature embedded in the ConfigMap.
The ConfigMap may look like this:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: cluster-info
namespace: kube-public
data:
jws-kubeconfig-07401b: eyJhbGciOiJIUzI1NiIsImtpZCI6IjA3NDAxYiJ9..tYEfbo6zDNo40MQE07aZcQX2m3EB2rO3NuXtxVMYm9U
kubeconfig: |
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: <really long certificate data>
server: https://10.138.0.2:6443
name: ""
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
```
The `kubeconfig` member of the ConfigMap is a config file with only the cluster
information filled out. The key thing being communicated here is the
`certificate-authority-data`. This may be expanded in the future.
The signature is a JWS signature using the "detached" mode. To validate the
signature, the user should encode the `kubeconfig` payload according to JWS
rules (base64 encoded while discarding any trailing `=`). That encoded payload
is then used to form a whole JWS by inserting it between the 2 dots. You can
verify the JWS using the `HS256` scheme (HMAC-SHA256) with the full token (e.g.
`07401b.f395accd246ae52d`) as the shared secret. Users _must_ verify that HS256
is used.
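The following is a rough sketch of that check using only shell tools. It reuses the illustrative token from this page and assumes the signer produced a protected header of the form `{"alg":"HS256","kid":"<token id>"}` (as in the example ConfigMap above); details such as base64url padding must match what the signer produced exactly.
```shell
TOKEN_ID=07401b
TOKEN_SECRET=f395accd246ae52d
b64url() { base64 | tr -d '=\n' | tr '/+' '_-'; }

# Protected header, base64url-encoded without padding
HEADER=$(printf '{"alg":"HS256","kid":"%s"}' "$TOKEN_ID" | b64url)
# The signed payload is the exact kubeconfig string from the cluster-info ConfigMap
PAYLOAD=$(kubectl -n kube-public get configmap cluster-info \
  -o jsonpath='{.data.kubeconfig}' | b64url)
# HMAC-SHA256 over "<header>.<payload>" with the full token as the shared secret
SIG=$(printf '%s.%s' "$HEADER" "$PAYLOAD" \
  | openssl dgst -sha256 -hmac "${TOKEN_ID}.${TOKEN_SECRET}" -binary | b64url)

# Compare the recomputed detached JWS with the stored value
echo "recomputed: ${HEADER}..${SIG}"
kubectl -n kube-public get configmap cluster-info \
  -o jsonpath="{.data.jws-kubeconfig-${TOKEN_ID}}"
```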
Any party with a bootstrapping token can create a valid signature for that
token. When using ConfigMap signing it's discouraged to share the same token with
many clients, since a compromised client can potentially man-in-the-middle another
client relying on the signature to bootstrap TLS trust.
Consult the [kubeadm implementation details](/docs/reference/setup-tools/kubeadm/implementation-details/)
section for more information.
---
reviewers:
- erictune
- lavalamp
- deads2k
- liggitt
title: Authorization
content_type: concept
weight: 30
description: >
Details of Kubernetes authorization mechanisms and supported authorization modes.
---
<!-- overview -->
Kubernetes authorization takes place following
[authentication](/docs/reference/access-authn-authz/authentication/).
Usually, a client making a request must be authenticated (logged in) before its
request can be allowed; however, Kubernetes also allows anonymous requests in
some circumstances.
For an overview of how authorization fits into the wider context of API access
control, read
[Controlling Access to the Kubernetes API](/docs/concepts/security/controlling-access/).
<!-- body -->
## Authorization verdicts {#determine-whether-a-request-is-allowed-or-denied}
Kubernetes authorization of API requests takes place within the API server.
The API server evaluates all of the request attributes against all policies,
potentially also consulting external services, and then allows or denies the
request.
All parts of an API request must be allowed by some authorization
mechanism in order to proceed. In other words: access is denied by default.
Access controls and policies that
depend on specific fields of specific kinds of objects are handled by
admission controllers.
Kubernetes admission control happens after authorization has completed (and,
therefore, only when the authorization decision was to allow the request).
When multiple [authorization modules](#authorization-modules) are configured,
each is checked in sequence.
If any authorizer _approves_ or _denies_ a request, that decision is immediately
returned and no other authorizer is consulted. If all modules have _no opinion_
on the request, then the request is denied. An overall deny verdict means that
the API server rejects the request and responds with an HTTP 403 (Forbidden)
status.
## Request attributes used in authorization
Kubernetes reviews only the following API request attributes:
* **user** - The `user` string provided during authentication.
* **group** - The list of group names to which the authenticated user belongs.
* **extra** - A map of arbitrary string keys to string values, provided by the authentication layer.
* **API** - Indicates whether the request is for an API resource.
* **Request path** - Path to miscellaneous non-resource endpoints like `/api` or `/healthz`.
* **API request verb** - API verbs like `get`, `list`, `create`, `update`, `patch`, `watch`, `delete`, and `deletecollection` are used for resource requests. To determine the request verb for a resource API endpoint, see [request verbs and authorization](/docs/reference/access-authn-authz/authorization/#determine-the-request-verb).
* **HTTP request verb** - Lowercased HTTP methods like `get`, `post`, `put`, and `delete` are used for non-resource requests.
* **Resource** - The ID or name of the resource that is being accessed (for resource requests only) -- For resource requests using `get`, `update`, `patch`, and `delete` verbs, you must provide the resource name.
* **Subresource** - The subresource that is being accessed (for resource requests only).
* **Namespace** - The namespace of the object that is being accessed (for namespaced resource requests only).
* **API group** - The API group being accessed (for resource requests only). An empty string designates the _core_ [API group](/docs/reference/using-api/#api-groups).
### Request verbs and authorization {#determine-the-request-verb}
#### Non-resource requests {#request-verb-non-resource}
Requests to endpoints other than `/api/v1/...` or `/apis/<group>/<version>/...`
are considered _non-resource requests_, and use the lower-cased HTTP method of the request as the verb.
For example, making a `GET` request using HTTP to endpoints such as `/api` or `/healthz` would use **get** as the verb.
#### Resource requests {#request-verb-resource}
To determine the request verb for a resource API endpoint, Kubernetes maps the HTTP verb
used and considers whether or not the request acts on an individual resource or on a
collection of resources:
HTTP verb | request verb
--------------|---------------
`POST` | **create**
`GET`, `HEAD` | **get** (for individual resources), **list** (for collections, including full object content), **watch** (for watching an individual resource or collection of resources)
`PUT` | **update**
`PATCH` | **patch**
`DELETE` | **delete** (for individual resources), **deletecollection** (for collections)
The **get**, **list**, and **watch** verbs can all return the full details of a resource. In
terms of access to the returned data they are equivalent. For example, **list** on `secrets`
will reveal the **data** attributes of any returned resources.
Kubernetes sometimes checks authorization for additional permissions using specialized verbs. For example:
* Special cases of [authentication](/docs/reference/access-authn-authz/authentication/)
* **impersonate** verb on `users`, `groups`, and `serviceaccounts` in the core API group, and the `userextras` in the `authentication.k8s.io` API group.
* [Authorization of CertificateSigningRequests](/docs/reference/access-authn-authz/certificate-signing-requests/#authorization)
* **approve** verb for CertificateSigningRequests, and **update** for revisions to existing approvals
* [RBAC](/docs/reference/access-authn-authz/rbac/#privilege-escalation-prevention-and-bootstrapping)
* **bind** and **escalate** verbs on `roles` and `clusterroles` resources in the `rbac.authorization.k8s.io` API group.
## Authorization context
Kubernetes expects attributes that are common to REST API requests. This means
that Kubernetes authorization works with existing organization-wide or
cloud-provider-wide access control systems which may handle other APIs besides
the Kubernetes API.
## Authorization modes {#authorization-modules}
The Kubernetes API server may authorize a request using one of several authorization modes:
`AlwaysAllow`
: This mode allows all requests, which brings [security risks](#warning-always-allow). Use this authorization mode only if you do not require authorization for your API requests (for example, for testing).
`AlwaysDeny`
: This mode blocks all requests. Use this authorization mode only for testing.
`ABAC` ([attribute-based access control](/docs/reference/access-authn-authz/abac/))
: Kubernetes ABAC mode defines an access control paradigm whereby access rights are granted to users through the use of policies which combine attributes together. The policies can use any type of attributes (user attributes, resource attributes, object, environment attributes, etc).
`RBAC` ([role-based access control](/docs/reference/access-authn-authz/rbac/))
: Kubernetes RBAC is a method of regulating access to computer or network resources based on the roles of individual users within an enterprise. In this context, access is the ability of an individual user to perform a specific task, such as view, create, or modify a file.
In this mode, Kubernetes uses the `rbac.authorization.k8s.io` API group to drive authorization decisions, allowing you to dynamically configure permission policies through the Kubernetes API.
`Node`
: A special-purpose authorization mode that grants permissions to kubelets based on the pods they are scheduled to run. To learn more about the Node authorization mode, see [Node Authorization](/docs/reference/access-authn-authz/node/).
`Webhook`
: Kubernetes [webhook mode](/docs/reference/access-authn-authz/webhook/) for authorization makes a synchronous HTTP callout, blocking the request until the remote HTTP service responds to the query. You can write your own software to handle the callout, or use solutions from the ecosystem.
<a id="warning-always-allow" />
Enabling the `AlwaysAllow` mode bypasses authorization; do not use this on a cluster where
you do not trust **all** potential API clients, including the workloads that you run.
Authorization mechanisms typically return either a _deny_ or _no opinion_ result; see
[authorization verdicts](#determine-whether-a-request-is-allowed-or-denied) for more on this.
Activating the `AlwaysAllow` mode means that if all other authorizers return “no opinion”,
the request is allowed. For example, `--authorization-mode=AlwaysAllow,RBAC` has the
same effect as `--authorization-mode=AlwaysAllow` because Kubernetes RBAC does not
provide negative (deny) access rules.
You should not use the `AlwaysAllow` mode on a Kubernetes cluster where the API server
is reachable from the public internet.
### The system:masters group
The `system:masters` group is a built-in Kubernetes group that grants unrestricted
access to the API server. Any user assigned to this group has full cluster administrator
privileges, bypassing any authorization restrictions imposed by the RBAC or Webhook mechanisms.
[Avoid adding users](/docs/concepts/security/rbac-good-practices/#least-privilege)
to this group. If you do need to grant a user cluster-admin rights, you can create a
[ClusterRoleBinding](/docs/reference/access-authn-authz/rbac/#user-facing-roles)
to the built-in `cluster-admin` ClusterRole.
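For example, to grant cluster-admin to a single (hypothetical) user instead of adding them to `system:masters`:
```shell
kubectl create clusterrolebinding cluster-admin-jane \
  --clusterrole=cluster-admin --user=jane
```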
### Authorization mode configuration {#choice-of-authz-config}
You can configure the Kubernetes API server's authorizer chain using either
[command line arguments](#using-flags-for-your-authorization-module) only or, as a beta feature,
using a [configuration file](#using-configuration-file-for-authorization).
You have to pick one of the two configuration approaches; setting both `--authorization-config`
path and configuring an authorization webhook using the `--authorization-mode` and
`--authorization-webhook-*` command line arguments is not allowed.
If you try this, the API server reports an error message during startup, then exits immediately.
### Command line authorization mode configuration {#using-flags-for-your-authorization-module}
You can use the following modes:
* `--authorization-mode=ABAC` (Attribute-based access control mode)
* `--authorization-mode=RBAC` (Role-based access control mode)
* `--authorization-mode=Node` (Node authorizer)
* `--authorization-mode=Webhook` (Webhook authorization mode)
* `--authorization-mode=AlwaysAllow` (always allows requests; carries [security risks](#warning-always-allow))
* `--authorization-mode=AlwaysDeny` (always denies requests)
You can choose more than one authorization mode; for example:
`--authorization-mode=Node,Webhook`
Kubernetes checks authorization modules based on the order that you specify them
on the API server's command line, so an earlier module has higher priority to allow
or deny a request.
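As an illustration (the webhook kubeconfig path is a placeholder), a chain that consults the Node authorizer, then RBAC, then a webhook could be configured with flags like these:
```shell
kube-apiserver \
  --authorization-mode=Node,RBAC,Webhook \
  --authorization-webhook-config-file=/etc/kubernetes/authz-webhook.kubeconfig
```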
You cannot combine the `--authorization-mode` command line argument with the
`--authorization-config` command line argument used for
[configuring authorization using a local file](#using-configuration-file-for-authorization).
For more information on command line arguments to the API server, read the
[`kube-apiserver` reference](/docs/reference/command-line-tools-reference/kube-apiserver/).
<!-- keep legacy hyperlinks working -->
<a id="configuring-the-api-server-using-an-authorization-config-file" />
### Configuring the API Server using an authorization config file {#using-configuration-file-for-authorization}
As a beta feature, Kubernetes lets you configure authorization chains that can include multiple
webhooks. The authorization items in that chain can have well-defined parameters that validate
requests in a particular order, offering you fine-grained control, such as explicit Deny on failures.
The configuration file approach even allows you to specify
[CEL](/docs/reference/using-api/cel/) rules to pre-filter requests before they are dispatched
to webhooks, helping you to prevent unnecessary invocations. The API server also automatically
reloads the authorizer chain when the configuration file is modified.
You specify the path to the authorization configuration using the
`--authorization-config` command line argument.
If you want to use command line arguments instead of a configuration file, that's also a valid and supported approach.
Some authorization capabilities (for example: multiple webhooks, webhook failure policy, and pre-filter rules)
are only available if you use an authorization configuration file.
#### Example configuration {#authz-config-example}
```yaml
---
#
# DO NOT USE THE CONFIG AS IS. THIS IS AN EXAMPLE.
#
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthorizationConfiguration
authorizers:
- type: Webhook
# Name used to describe the authorizer
# This is explicitly used in monitoring machinery for metrics
# Note:
# - Validation for this field is similar to how K8s labels are validated today.
# Required, with no default
name: webhook
webhook:
# The duration to cache 'authorized' responses from the webhook
# authorizer.
# Same as setting `--authorization-webhook-cache-authorized-ttl` flag
# Default: 5m0s
authorizedTTL: 30s
# The duration to cache 'unauthorized' responses from the webhook
# authorizer.
# Same as setting `--authorization-webhook-cache-unauthorized-ttl` flag
# Default: 30s
unauthorizedTTL: 30s
# Timeout for the webhook request
# Maximum allowed is 30s.
# Required, with no default.
timeout: 3s
# The API version of the authorization.k8s.io SubjectAccessReview to
# send to and expect from the webhook.
# Same as setting `--authorization-webhook-version` flag
# Required, with no default
# Valid values: v1beta1, v1
subjectAccessReviewVersion: v1
# MatchConditionSubjectAccessReviewVersion specifies the SubjectAccessReview
# version the CEL expressions are evaluated against
# Valid values: v1
# Required, no default value
matchConditionSubjectAccessReviewVersion: v1
# Controls the authorization decision when a webhook request fails to
# complete or returns a malformed response or errors evaluating
# matchConditions.
# Valid values:
# - NoOpinion: continue to subsequent authorizers to see if one of
# them allows the request
# - Deny: reject the request without consulting subsequent authorizers
# Required, with no default.
failurePolicy: Deny
connectionInfo:
# Controls how the webhook should communicate with the server.
# Valid values:
# - KubeConfigFile: use the file specified in kubeConfigFile to locate the
# server.
# - InClusterConfig: use the in-cluster configuration to call the
# SubjectAccessReview API hosted by kube-apiserver. This mode is not
# allowed for kube-apiserver.
type: KubeConfigFile
# Path to KubeConfigFile for connection info
# Required, if connectionInfo.Type is KubeConfigFile
kubeConfigFile: /kube-system-authz-webhook.yaml
# matchConditions is a list of conditions that must be met for a request to be sent to this
# webhook. An empty list of matchConditions matches all requests.
# There are a maximum of 64 match conditions allowed.
#
# The exact matching logic is (in order):
# 1. If at least one matchCondition evaluates to FALSE, then the webhook is skipped.
# 2. If ALL matchConditions evaluate to TRUE, then the webhook is called.
# 3. If at least one matchCondition evaluates to an error (but none are FALSE):
# - If failurePolicy=Deny, then the webhook rejects the request
# - If failurePolicy=NoOpinion, then the error is ignored and the webhook is skipped
matchConditions:
# expression represents the expression which will be evaluated by CEL. Must evaluate to bool.
# CEL expressions have access to the contents of the SubjectAccessReview in v1 version.
# If version specified by subjectAccessReviewVersion in the request variable is v1beta1,
# the contents would be converted to the v1 version before evaluating the CEL expression.
#
# Documentation on CEL: https://kubernetes.io/docs/reference/using-api/cel/
#
# only send resource requests to the webhook
- expression: has(request.resourceAttributes)
# only intercept requests to kube-system
- expression: request.resourceAttributes.namespace == 'kube-system'
# don't intercept requests from kube-system service accounts
- expression: "!('system:serviceaccounts:kube-system' in request.groups)"
- type: Node
name: node
- type: RBAC
name: rbac
- type: Webhook
name: in-cluster-authorizer
webhook:
authorizedTTL: 5m
unauthorizedTTL: 30s
timeout: 3s
subjectAccessReviewVersion: v1
failurePolicy: NoOpinion
connectionInfo:
type: InClusterConfig
```
When configuring the authorizer chain using a configuration file, make sure all the
control plane nodes have the same file contents. Take a note of the API server
configuration when upgrading / downgrading your clusters. For example, if upgrading
to a newer Kubernetes version, you would need to make sure the config file is in a
format that the newer version of Kubernetes can understand before you upgrade the
cluster. If you downgrade to the earlier version, you would need to set the
configuration appropriately.
#### Authorization configuration and reloads
Kubernetes reloads the authorization configuration file when the API server observes a change
to the file, and also on a 60 second schedule if no change events were observed.
You must ensure that all non-webhook authorizer types remain unchanged in the file on reload.
A reload **must not** add or remove Node or RBAC authorizers (they can be reordered,
but cannot be added or removed).
## Privilege escalation via workload creation or edits {#privilege-escalation-via-pod-creation}
Users who can create/edit pods in a namespace, either directly or through an object that
enables indirect [workload management](/docs/concepts/architecture/controller/), may be
able to escalate their privileges in that namespace. The potential routes to privilege
escalation include Kubernetes [API extensions](/docs/concepts/extend-kubernetes/#api-extensions)
and their associated controllers.
As a cluster administrator, use caution when granting access to create or edit workloads.
Some details of how these can be misused are documented in
[escalation paths](/docs/reference/access-authn-authz/authorization/#escalation-paths).
### Escalation paths {#escalation-paths}
There are different ways that an attacker or untrustworthy user could gain additional
privilege within a namespace, if you allow them to run arbitrary Pods in that namespace:
- Mounting arbitrary Secrets in that namespace
- Can be used to access confidential information meant for other workloads
- Can be used to obtain a more privileged ServiceAccount's service account token
- Using arbitrary ServiceAccounts in that namespace
- Can perform Kubernetes API actions as another workload (impersonation)
- Can perform any privileged actions that ServiceAccount has
- Mounting or using ConfigMaps meant for other workloads in that namespace
- Can be used to obtain information meant for other workloads, such as database host names.
- Mounting volumes meant for other workloads in that namespace
- Can be used to obtain information meant for other workloads, and change it.
As a system administrator, you should be cautious when deploying CustomResourceDefinitions
that let users make changes to the above areas. These may open privilege escalation paths.
Consider the consequences of this kind of change when deciding on your authorization controls.
## Checking API access
`kubectl` provides the `auth can-i` subcommand for quickly querying the API authorization layer.
The command uses the `SelfSubjectAccessReview` API to determine if the current user can perform
a given action, and works regardless of the authorization mode used.
```bash
kubectl auth can-i create deployments --namespace dev
```
The output is similar to this:
```
yes
```
```shell
kubectl auth can-i create deployments --namespace prod
```
The output is similar to this:
```
no
```
Administrators can combine this with [user impersonation](/docs/reference/access-authn-authz/authentication/#user-impersonation)
to determine what action other users can perform.
```bash
kubectl auth can-i list secrets --namespace dev --as dave
```
The output is similar to this:
```
no
```
Similarly, to check whether a ServiceAccount named `dev-sa` in Namespace `dev`
can list Pods in the Namespace `target`:
```bash
kubectl auth can-i list pods \
--namespace target \
--as system:serviceaccount:dev:dev-sa
```
The output is similar to this:
```
yes
```
SelfSubjectAccessReview is part of the `authorization.k8s.io` API group, which
exposes the API server authorization to external services. Other resources in
this group include:
SubjectAccessReview
: Access review for any user, not only the current one. Useful for delegating authorization decisions to the API server. For example, the kubelet and extension API servers use this to determine user access to their own APIs.
LocalSubjectAccessReview
: Like SubjectAccessReview but restricted to a specific namespace.
SelfSubjectRulesReview
: A review which returns the set of actions a user can perform within a namespace. Useful for users to quickly summarize their own access, or for UIs to hide/show actions.
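For example, `kubectl auth can-i --list` uses the SelfSubjectRulesReview API to summarize your own access within a namespace:
```shell
kubectl auth can-i --list --namespace dev
```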
These APIs can be queried by creating normal Kubernetes resources, where the response `status`
field of the returned object is the result of the query. For example:
```bash
kubectl create -f - -o yaml << EOF
apiVersion: authorization.k8s.io/v1
kind: SelfSubjectAccessReview
spec:
resourceAttributes:
group: apps
resource: deployments
verb: create
namespace: dev
EOF
```
The generated SelfSubjectAccessReview is similar to:
```yaml
apiVersion: authorization.k8s.io/v1
kind: SelfSubjectAccessReview
metadata:
  creationTimestamp: null
spec:
  resourceAttributes:
    group: apps
    resource: deployments
    namespace: dev
    verb: create
status:
  allowed: true
  denied: false
```
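To review access on behalf of another user, the way the kubelet or an extension API server might, you can create a SubjectAccessReview in the same manner. The following sketch reuses the `dave` / `dev` example from earlier; creating SubjectAccessReview objects itself requires appropriate (typically cluster administrator) permissions:
```bash
kubectl create -f - -o yaml << EOF
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: dave
  resourceAttributes:
    group: ""
    resource: secrets
    verb: list
    namespace: dev
EOF
```
As with SelfSubjectAccessReview, the decision is reported in the `status.allowed` field of the returned object.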
## What's next
* To learn more about Authentication, see [Authentication](/docs/reference/access-authn-authz/authentication/).
* For an overview, read [Controlling Access to the Kubernetes API](/docs/concepts/security/controlling-access/).
* To learn more about Admission Control, see [Using Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/).
* Read more about [Common Expression Language in Kubernetes](/docs/reference/using-api/cel/). | kubernetes reference | reviewers erictune lavalamp deads2k liggitt title Authorization content type concept weight 30 description Details of Kubernetes authorization mechanisms and supported authorization modes overview Kubernetes authorization takes place following authentication docs reference access authn authz authentication Usually a client making a request must be authenticated logged in before its request can be allowed however Kubernetes also allows anonymous requests in some circumstances For an overview of how authorization fits into the wider context of API access control read Controlling Access to the Kubernetes API docs concepts security controlling access body Authorization verdicts determine whether a request is allowed or denied Kubernetes authorization of API requests takes place within the API server The API server evaluates all of the request attributes against all policies potentially also consulting external services and then allows or denies the request All parts of an API request must be allowed by some authorization mechanism in order to proceed In other words access is denied by default Access controls and policies that depend on specific fields of specific kinds of objects are handled by Kubernetes admission control happens after authorization has completed and therefore only when the authorization decision was to allow the request When multiple authorization modules authorization modules are configured each is checked in sequence If any authorizer approves or denies a request that decision is immediately returned and no other authorizer is consulted If all modules have no opinion on the request then the request is denied An overall deny verdict means that the API server rejects the request and responds with an HTTP 403 Forbidden status Request attributes used in authorization Kubernetes reviews only the following API request attributes user The user string provided during authentication group The list of group names to which the authenticated user belongs extra A map of arbitrary string keys to string values provided by the authentication layer API Indicates whether the request is for an API resource Request path Path to miscellaneous non resource endpoints like api or healthz API request verb API verbs like get list create update patch watch delete and deletecollection are used for resource requests To determine the request verb for a resource API endpoint see request verbs and authorization docs reference access authn authz authorization determine the request verb HTTP request verb Lowercased HTTP methods like get post put and delete are used for non resource requests Resource The ID or name of the resource that is being accessed for resource requests only For resource requests using get update patch and delete verbs you must provide the resource name Subresource The subresource that is being accessed for resource requests only Namespace The namespace of the object that is being accessed for namespaced resource requests only API group The being accessed for resource requests only An empty string designates the core API group docs reference using api api groups Request verbs and authorization determine the request verb Non resource requests request verb non resource Requests to endpoints other than api v1 or apis group version are considered non resource requests and use the lower cased HTTP method of the request as the verb For example making a GET request using HTTP 
reviewers:
- liggitt
- enj
title: Managing Service Accounts
content_type: concept
weight: 50
---
<!-- overview -->
A _ServiceAccount_ provides an identity for processes that run in a Pod.
A process inside a Pod can use the identity of its associated service account to
authenticate to the cluster's API server.
For an introduction to service accounts, read [configure service accounts](/docs/tasks/configure-pod-container/configure-service-account/).
This task guide explains some of the concepts behind ServiceAccounts. The
guide also explains how to obtain or revoke tokens that represent
ServiceAccounts.
<!-- body -->
## Before you begin
To be able to follow these steps exactly, ensure you have a namespace named
`examplens`.
If you don't, create one by running:
```shell
kubectl create namespace examplens
```
## User accounts versus service accounts
Kubernetes distinguishes between the concept of a user account and a service account
for a number of reasons:
- User accounts are for humans. Service accounts are for application processes,
which (for Kubernetes) run in containers that are part of pods.
- User accounts are intended to be global: names must be unique across all
namespaces of a cluster. No matter what namespace you look at, a particular
username that represents a user represents the same user.
In Kubernetes, service accounts are namespaced: two different namespaces can
contain ServiceAccounts that have identical names.
- Typically, a cluster's user accounts might be synchronised from a corporate
database, where new user account creation requires special privileges and is
tied to complex business processes. By contrast, service account creation is
intended to be more lightweight, allowing cluster users to create service accounts
for specific tasks on demand. Separating ServiceAccount creation from the steps to
onboard human users makes it easier for workloads to follow the principle of
least privilege.
- Auditing considerations for humans and service accounts may differ; the separation
makes that easier to achieve.
- A configuration bundle for a complex system may include definition of various service
accounts for components of that system. Because service accounts can be created
without many constraints and have namespaced names, such configuration is
usually portable.
## Bound service account tokens
ServiceAccount tokens can be bound to API objects that exist in the kube-apiserver.
This can be used to tie the validity of a token to the existence of another API object.
Supported object types are as follows:
* Pod (used for projected volume mounts, see below)
* Secret (can be used to allow revoking a token by deleting the Secret)
* Node (in v1.30, creating new node-bound tokens is alpha, using existing node-bound tokens is beta)
When a token is bound to an object, the object's `metadata.name` and `metadata.uid` are
stored as extra 'private claims' in the issued JWT.
When a bound token is presented to the kube-apiserver, the service account authenticator
will extract and verify these claims.
If the referenced object or the ServiceAccount is pending deletion (for example, due to finalizers),
then for any instant that is 60 seconds (or more) after the `.metadata.deletionTimestamp` date,
authentication with that token would fail.
If the referenced object no longer exists (or its `metadata.uid` does not match),
the request will not be authenticated.
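For example, you could mint a token whose validity is tied to a Secret, so that deleting the Secret revokes the token. This is a sketch only; it assumes a ServiceAccount named `my-sa` and an existing Secret named `token-anchor` in the same namespace:
```shell
# The issued token stops authenticating once the "token-anchor" Secret is deleted.
kubectl create token my-sa --bound-object-kind Secret --bound-object-name token-anchor
```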
### Additional metadata in Pod bound tokens
When a service account token is bound to a Pod object, additional metadata is also
embedded into the token that indicates the value of the bound pod's `spec.nodeName` field,
and the uid of that Node, if available.
This node information is **not** verified by the kube-apiserver when the token is used for authentication.
It is included so integrators do not have to fetch Pod or Node API objects to check the associated Node name
and uid when inspecting a JWT.
### Verifying and inspecting private claims
The `TokenReview` API can be used to verify and extract private claims from a token:
1. First, assume you have a pod named `test-pod` and a service account named `my-sa`.
2. Create a token that is bound to this Pod:
```shell
kubectl create token my-sa --bound-object-kind="Pod" --bound-object-name="test-pod"
```
3. Copy this token into a new file named `tokenreview.yaml`:
```yaml
apiVersion: authentication.k8s.io/v1
kind: TokenReview
spec:
token: <token from step 2>
```
4. Submit this resource to the apiserver for review:
```shell
kubectl create -o yaml -f tokenreview.yaml # we use '-o yaml' so we can inspect the output
```
You should see an output like below:
```yaml
apiVersion: authentication.k8s.io/v1
kind: TokenReview
metadata:
creationTimestamp: null
spec:
token: <token>
status:
audiences:
- https://kubernetes.default.svc.cluster.local
authenticated: true
user:
extra:
authentication.kubernetes.io/credential-id:
- JTI=7ee52be0-9045-4653-aa5e-0da57b8dccdc
authentication.kubernetes.io/node-name:
- kind-control-plane
authentication.kubernetes.io/node-uid:
- 497e9d9a-47aa-4930-b0f6-9f2fb574c8c6
authentication.kubernetes.io/pod-name:
- test-pod
authentication.kubernetes.io/pod-uid:
- e87dbbd6-3d7e-45db-aafb-72b24627dff5
groups:
- system:serviceaccounts
- system:serviceaccounts:default
- system:authenticated
uid: f8b4161b-2e2b-11e9-86b7-2afc33b31a7e
username: system:serviceaccount:default:my-sa
```
Despite using `kubectl create -f` to create this resource, and defining it similarly to
other resource types in Kubernetes, TokenReview is a special type and the kube-apiserver
does not actually persist the TokenReview object into etcd.
Hence `kubectl get tokenreview` is not a valid command.
## Bound service account token volume mechanism {#bound-service-account-token-volume}
By default, the Kubernetes control plane (specifically, the
[ServiceAccount admission controller](#serviceaccount-admission-controller))
adds a [projected volume](/docs/concepts/storage/projected-volumes/) to Pods,
and this volume includes a token for Kubernetes API access.
Here's an example of how that looks for a launched Pod:
```yaml
...
- name: kube-api-access-<random-suffix>
projected:
sources:
- serviceAccountToken:
path: token # must match the path the app expects
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
```
That manifest snippet defines a projected volume that consists of three sources. In this case,
each source also represents a single path within that volume. The three sources are:
1. A `serviceAccountToken` source, that contains a token that the kubelet acquires from kube-apiserver.
The kubelet fetches time-bound tokens using the TokenRequest API. A token served for a TokenRequest expires
either when the pod is deleted or after a defined lifespan (by default, that is 1 hour).
The kubelet also refreshes that token before the token expires.
The token is bound to the specific Pod and has the kube-apiserver as its audience.
This mechanism superseded an earlier mechanism that added a volume based on a Secret,
where the Secret represented the ServiceAccount for the Pod, but did not expire.
1. A `configMap` source. The ConfigMap contains a bundle of certificate authority data. Pods can use these
   certificates to make sure that they are connecting to your cluster's kube-apiserver (and not to a middlebox
   or an accidentally misconfigured peer).
1. A `downwardAPI` source that looks up the name of the namespace containing the Pod, and makes
that name information available to application code running inside the Pod.
Any container within the Pod that mounts this particular volume can access the above information.
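For instance, from inside a running Pod you can list the files that the projected volume provides. The path below assumes the default mount location used by the ServiceAccount admission controller (described later on this page), and the Pod name is hypothetical:
```shell
kubectl exec my-pod -- ls /var/run/secrets/kubernetes.io/serviceaccount
# Typical entries: ca.crt  namespace  token
```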
There is no specific mechanism to invalidate a token issued via TokenRequest. If you no longer
trust a bound service account token for a Pod, you can delete that Pod. Deleting a Pod expires
its bound service account tokens.
## Manual Secret management for ServiceAccounts
Versions of Kubernetes before v1.22 automatically created credentials for accessing
the Kubernetes API. This older mechanism was based on creating token Secrets that
could then be mounted into running Pods.
In more recent versions, including the current Kubernetes release, API credentials
are [obtained directly](#bound-service-account-token-volume) using the
[TokenRequest](/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) API,
and are mounted into Pods using a projected volume.
The tokens obtained using this method have bounded lifetimes, and are automatically
invalidated when the Pod they are mounted into is deleted.
You can still [manually create](/docs/tasks/configure-pod-container/configure-service-account/#manually-create-an-api-token-for-a-serviceaccount) a Secret to hold a service account token; for example, if you need a token that never expires.
Once you manually create a Secret and link it to a ServiceAccount, the Kubernetes control plane automatically populates the token into that Secret.
Although the manual mechanism for creating a long-lived ServiceAccount token exists,
using [TokenRequest](/docs/reference/kubernetes-api/authentication-resources/token-request-v1/)
to obtain short-lived API access tokens is recommended instead.
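For example, kubectl can request a short-lived token through the TokenRequest API; the ServiceAccount name here is illustrative and the `--duration` flag is optional:
```shell
# Prints a time-limited bearer token for the "build-robot" ServiceAccount.
kubectl create token build-robot --duration=1h
```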
## Auto-generated legacy ServiceAccount token clean up {#auto-generated-legacy-serviceaccount-token-clean-up}
Before version 1.24, Kubernetes automatically generated Secret-based tokens for
ServiceAccounts. To distinguish between automatically generated tokens and
manually created ones, Kubernetes checks for a reference from the
ServiceAccount's secrets field. If the Secret is referenced in the `secrets`
field, it is considered an auto-generated legacy token. Otherwise, it is
considered a manually created legacy token. For example:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: build-robot
namespace: default
secrets:
- name: build-robot-secret # usually NOT present for a manually generated token
```
Beginning from version 1.29, legacy ServiceAccount tokens that were generated
automatically will be marked as invalid if they remain unused for a certain
period of time (one year by default). Tokens that continue to be unused
for this defined period (again, by default, one year) will subsequently be
purged by the control plane.
If users use an invalidated auto-generated token, the token validator will
1. add an audit annotation for the key-value pair
`authentication.k8s.io/legacy-token-invalidated: <secret name>/<namespace>`,
1. increment the `invalid_legacy_auto_token_uses_total` metric count,
1. update the Secret label `kubernetes.io/legacy-token-last-used` with the new
date,
1. return an error indicating that the token has been invalidated.
When receiving this validation error, users can update the Secret to remove the
`kubernetes.io/legacy-token-invalid-since` label to temporarily allow use of
this token.
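One way to do that is with `kubectl label`, using a trailing `-` to remove the label; the Secret name and namespace below match the example that follows:
```shell
kubectl label secret build-robot-secret -n default kubernetes.io/legacy-token-invalid-since-
```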
Here's an example of an auto-generated legacy token that has been marked with the
`kubernetes.io/legacy-token-last-used` and `kubernetes.io/legacy-token-invalid-since`
labels:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: build-robot-secret
namespace: default
labels:
kubernetes.io/legacy-token-last-used: 2022-10-24
kubernetes.io/legacy-token-invalid-since: 2023-10-25
annotations:
kubernetes.io/service-account.name: build-robot
type: kubernetes.io/service-account-token
```
## Control plane details
### ServiceAccount controller
A ServiceAccount controller manages the ServiceAccounts inside namespaces, and
ensures a ServiceAccount named "default" exists in every active namespace.
### Token controller
The service account token controller runs as part of `kube-controller-manager`.
This controller acts asynchronously. It:
- watches for ServiceAccount deletion and deletes all corresponding ServiceAccount
token Secrets.
- watches for ServiceAccount token Secret addition, and ensures the referenced
ServiceAccount exists, and adds a token to the Secret if needed.
- watches for Secret deletion and removes a reference from the corresponding
ServiceAccount if needed.
You must pass a service account private key file to the token controller in
the `kube-controller-manager` using the `--service-account-private-key-file`
flag. The private key is used to sign generated service account tokens.
Similarly, you must pass the corresponding public key to the `kube-apiserver`
using the `--service-account-key-file` flag. The public key will be used to
verify the tokens during authentication.
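As a sketch, the relevant flags might look like the following; the file paths are typical of kubeadm-provisioned clusters but are otherwise assumptions:
```shell
# On the controller manager: the private key used to sign ServiceAccount tokens.
kube-controller-manager --service-account-private-key-file=/etc/kubernetes/pki/sa.key ...
# On the API server: the matching public key used to verify those tokens.
kube-apiserver --service-account-key-file=/etc/kubernetes/pki/sa.pub ...
```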
### ServiceAccount admission controller
The modification of pods is implemented via a plugin
called an [Admission Controller](/docs/reference/access-authn-authz/admission-controllers/).
It is part of the API server.
This admission controller acts synchronously to modify pods as they are created.
When this plugin is active (and it is by default on most distributions), then
it does the following when a Pod is created:
1. If the pod does not have a `.spec.serviceAccountName` set, the admission controller sets the name of the
ServiceAccount for this incoming Pod to `default`.
1. The admission controller ensures that the ServiceAccount referenced by the incoming Pod exists. If there
is no ServiceAccount with a matching name, the admission controller rejects the incoming Pod. That check
applies even for the `default` ServiceAccount.
1. Provided that neither the ServiceAccount's `automountServiceAccountToken` field nor the
Pod's `automountServiceAccountToken` field is set to `false` (see the example after this list):
   - the admission controller mutates the incoming Pod, adding an extra volume that
     contains a token for API access.
- the admission controller adds a `volumeMount` to each container in the Pod,
skipping any containers that already have a volume mount defined for the path
`/var/run/secrets/kubernetes.io/serviceaccount`.
For Linux containers, that volume is mounted at `/var/run/secrets/kubernetes.io/serviceaccount`;
on Windows nodes, the mount is at the equivalent path.
1. If the spec of the incoming Pod doesn't already contain any `imagePullSecrets`, then the
admission controller adds `imagePullSecrets`, copying them from the `ServiceAccount`.
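If you want to opt a workload out of the automatic token mount described in step 3, you can set `automountServiceAccountToken: false` on either the ServiceAccount or the Pod. A minimal Pod sketch (the names and image are placeholders):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false   # no kube-api-access volume is injected
  containers:
  - name: app
    image: registry.example/app:latest  # placeholder image
```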
### Legacy ServiceAccount token tracking controller
This controller generates a ConfigMap called
`kube-system/kube-apiserver-legacy-service-account-token-tracking` in the
`kube-system` namespace. The ConfigMap records the timestamp when legacy service
account tokens began to be monitored by the system.
### Legacy ServiceAccount token cleaner
The legacy ServiceAccount token cleaner runs as part of the
`kube-controller-manager` and checks every 24 hours to see if any auto-generated
legacy ServiceAccount token has not been used in a *specified amount of time*.
If so, the cleaner marks those tokens as invalid.
The cleaner works by first checking the ConfigMap created by the control plane
(provided that `LegacyServiceAccountTokenTracking` is enabled). If the current
time is a *specified amount of time* after the date in the ConfigMap, the
cleaner then loops through the list of Secrets in the cluster and evaluates each
Secret that has the type `kubernetes.io/service-account-token`.
If a Secret meets all of the following conditions, the cleaner marks it as
invalid:
- The Secret is auto-generated, meaning that it is bi-directionally referenced
by a ServiceAccount.
- The Secret is not currently mounted by any pods.
- The Secret has not been used in a *specified amount of time* since it was
created or since it was last used.
The cleaner marks a Secret invalid by adding a label called
`kubernetes.io/legacy-token-invalid-since` to the Secret, with the current date
as the value. If an invalid Secret is not used in a *specified amount of time*,
the cleaner will delete it.
All the *specified amount of time* above defaults to one year. The cluster
administrator can configure this value through the
`--legacy-service-account-token-clean-up-period` command line argument for the
`kube-controller-manager` component.
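For example (the value here is arbitrary), shortening the window to 90 days might look like this excerpt:
```shell
kube-controller-manager --legacy-service-account-token-clean-up-period=2160h ...
```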
### TokenRequest API
You use the [TokenRequest](/docs/reference/kubernetes-api/authentication-resources/token-request-v1/)
subresource of a ServiceAccount to obtain a time-bound token for that ServiceAccount.
You don't need to call this to obtain an API token for use within a container, since
the kubelet sets this up for you using a _projected volume_.
If you want to use the TokenRequest API from `kubectl`, see
[Manually create an API token for a ServiceAccount](/docs/tasks/configure-pod-container/configure-service-account/#manually-create-an-api-token-for-a-serviceaccount).
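If you do want to exercise the subresource directly rather than through `kubectl create token`, a request might look like the following sketch; the namespace and ServiceAccount names reuse the `examplens` example from later on this page, and the exact `kubectl create --raw` syntax can vary between kubectl versions:
```shell
kubectl create --raw "/api/v1/namespaces/examplens/serviceaccounts/example-automated-thing/token" -f - <<EOF
{
  "apiVersion": "authentication.k8s.io/v1",
  "kind": "TokenRequest",
  "spec": { "expirationSeconds": 600 }
}
EOF
```
The simpler, equivalent command is `kubectl create token example-automated-thing --namespace examplens --duration=10m`.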
The Kubernetes control plane (specifically, the ServiceAccount admission controller)
adds a projected volume to Pods, and the kubelet ensures that this volume contains a token
that lets containers authenticate as the right ServiceAccount.
(This mechanism superseded an earlier mechanism that added a volume based on a Secret,
where the Secret represented the ServiceAccount for the Pod but did not expire.)
Here's an example of how that looks for a launched Pod:
```yaml
...
- name: kube-api-access-<random-suffix>
projected:
defaultMode: 420 # decimal equivalent of octal 0644
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
```
That manifest snippet defines a projected volume that combines information from three sources:
1. A `serviceAccountToken` source, that contains a token that the kubelet acquires from kube-apiserver.
The kubelet fetches time-bound tokens using the TokenRequest API. A token served for a TokenRequest expires
either when the pod is deleted or after a defined lifespan (by default, that is 1 hour).
The token is bound to the specific Pod and has the kube-apiserver as its audience.
1. A `configMap` source. The ConfigMap contains a bundle of certificate authority data. Pods can use these
   certificates to make sure that they are connecting to your cluster's kube-apiserver (and not to a middlebox
   or an accidentally misconfigured peer).
1. A `downwardAPI` source. This `downwardAPI` volume makes the name of the namespace containing the Pod available
to application code running inside the Pod.
Any container within the Pod that mounts this volume can access the above information.
## Create additional API tokens {#create-token}
Only create long-lived API tokens if the [token request](#tokenrequest-api) mechanism
is not suitable. The token request mechanism provides time-limited tokens; because these
expire, they represent a lower risk to information security.
To create a non-expiring, persisted API token for a ServiceAccount, create a
Secret of type `kubernetes.io/service-account-token` with an annotation
referencing the ServiceAccount. The control plane then generates a long-lived token and
updates that Secret with that generated token data.
Here is a sample manifest for such a Secret:
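```yaml
# The names below match the commands and output shown in this section; the annotation
# must reference a ServiceAccount that already exists in the target namespace.
apiVersion: v1
kind: Secret
metadata:
  name: mysecretname
  annotations:
    kubernetes.io/service-account.name: myserviceaccount
type: kubernetes.io/service-account-token
```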
To create a Secret based on this example, run:
```shell
kubectl -n examplens create -f https://k8s.io/examples/secret/serviceaccount/mysecretname.yaml
```
To see the details for that Secret, run:
```shell
kubectl -n examplens describe secret mysecretname
```
The output is similar to:
```
Name: mysecretname
Namespace: examplens
Labels: <none>
Annotations: kubernetes.io/service-account.name=myserviceaccount
kubernetes.io/service-account.uid=8a85c4c4-8483-11e9-bc42-526af7764f64
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1362 bytes
namespace: 9 bytes
token: ...
```
If you launch a new Pod into the `examplens` namespace, it can use the `myserviceaccount`
service-account-token Secret that you just created.
Do not reference manually created Secrets in the `secrets` field of a
ServiceAccount. Otherwise, those manually created Secrets may be cleaned up if they remain unused for a long
time. Please refer to [auto-generated legacy ServiceAccount token clean up](#auto-generated-legacy-serviceaccount-token-clean-up).
## Delete/invalidate a ServiceAccount token {#delete-token}
If you know the name of the Secret that contains the token you want to remove:
```shell
kubectl delete secret name-of-secret
```
Otherwise, first find the Secret for the ServiceAccount.
```shell
# This assumes that you already have a namespace named 'examplens'
kubectl -n examplens get serviceaccount/example-automated-thing -o yaml
```
The output is similar to:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"example-automated-thing","namespace":"examplens"}}
creationTimestamp: "2019-07-21T07:07:07Z"
name: example-automated-thing
namespace: examplens
resourceVersion: "777"
selfLink: /api/v1/namespaces/examplens/serviceaccounts/example-automated-thing
uid: f23fd170-66f2-4697-b049-e1e266b7f835
secrets:
- name: example-automated-thing-token-zyxwv
```
Then, delete the Secret you now know the name of:
```shell
kubectl -n examplens delete secret/example-automated-thing-token-zyxwv
```
## Clean up
If you created a namespace `examplens` to experiment with, you can remove it:
```shell
kubectl delete namespace examplens
```
## What's next
- Read more details about [projected volumes](/docs/concepts/storage/projected-volumes/). | kubernetes reference | reviewers liggitt enj title Managing Service Accounts content type concept weight 50 overview A ServiceAccount provides an identity for processes that run in a Pod A process inside a Pod can use the identity of its associated service account to authenticate to the cluster s API server For an introduction to service accounts read configure service accounts docs tasks configure pod container configure service account This task guide explains some of the concepts behind ServiceAccounts The guide also explains how to obtain or revoke tokens that represent ServiceAccounts body To be able to follow these steps exactly ensure you have a namespace named examplens If you don t create one by running shell kubectl create namespace examplens User accounts versus service accounts Kubernetes distinguishes between the concept of a user account and a service account for a number of reasons User accounts are for humans Service accounts are for application processes which for Kubernetes run in containers that are part of pods User accounts are intended to be global names must be unique across all namespaces of a cluster No matter what namespace you look at a particular username that represents a user represents the same user In Kubernetes service accounts are namespaced two different namespaces can contain ServiceAccounts that have identical names Typically a cluster s user accounts might be synchronised from a corporate database where new user account creation requires special privileges and is tied to complex business processes By contrast service account creation is intended to be more lightweight allowing cluster users to create service accounts for specific tasks on demand Separating ServiceAccount creation from the steps to onboard human users makes it easier for workloads to follow the principle of least privilege Auditing considerations for humans and service accounts may differ the separation makes that easier to achieve A configuration bundle for a complex system may include definition of various service accounts for components of that system Because service accounts can be created without many constraints and have namespaced names such configuration is usually portable Bound service account tokens ServiceAccount tokens can be bound to API objects that exist in the kube apiserver This can be used to tie the validity of a token to the existence of another API object Supported object types are as follows Pod used for projected volume mounts see below Secret can be used to allow revoking a token by deleting the Secret Node in v1 30 creating new node bound tokens is alpha using existing node bound tokens is beta When a token is bound to an object the object s metadata name and metadata uid are stored as extra private claims in the issued JWT When a bound token is presented to the kube apiserver the service account authenticator will extract and verify these claims If the referenced object or the ServiceAccount is pending deletion for example due to finalizers then for any instant that is 60 seconds or more after the metadata deletionTimestamp date authentication with that token would fail If the referenced object no longer exists or its metadata uid does not match the request will not be authenticated Additional metadata in Pod bound tokens When a service account token is bound to a Pod object additional metadata is also embedded into the token that indicates the value of the bound pod s 
---
reviewers:
- liggitt
- jpbetz
- cici37
title: Validating Admission Policy
content_type: concept
---
<!-- overview -->
This page provides an overview of Validating Admission Policy.
<!-- body -->
## What is Validating Admission Policy?
Validating admission policies offer a declarative, in-process alternative to validating admission webhooks.
Validating admission policies use the Common Expression Language (CEL) to declare the validation
rules of a policy.
Validating admission policies are highly configurable, enabling policy authors to define policies
that can be parameterized and scoped to resources as needed by cluster administrators.
## What Resources Make a Policy
A policy is generally made up of three resources:
- The `ValidatingAdmissionPolicy` describes the abstract logic of a policy
(think: "this policy makes sure a particular label is set to a particular value").
- A `ValidatingAdmissionPolicyBinding` links the above resources together and provides scoping.
If you only want to require an `owner` label to be set for `Pods`, the binding is where you would
specify this restriction.
- A parameter resource provides information to a ValidatingAdmissionPolicy to make it a concrete
statement (think "the `owner` label must be set to something that ends in `.company.com`").
A native type such as ConfigMap or a CRD defines the schema of a parameter resource.
`ValidatingAdmissionPolicy` objects specify what Kind they are expecting for their parameter resource.
At least a `ValidatingAdmissionPolicy` and a corresponding `ValidatingAdmissionPolicyBinding`
must be defined for a policy to have an effect.
If a `ValidatingAdmissionPolicy` does not need to be configured via parameters, simply leave
`spec.paramKind` in `ValidatingAdmissionPolicy` not specified.
## Getting Started with Validating Admission Policy
Validating admission policies are part of the cluster control plane. You should write and deploy them
with great caution. The following describes how to quickly experiment with Validating Admission Policy.
### Creating a ValidatingAdmissionPolicy
The following is an example of a ValidatingAdmissionPolicy.
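(The manifest below is a minimal sketch: the name `demo-policy.example.com` and the `object.spec.replicas <= 5` rule match the error message shown later in this section, while the match constraints are illustrative.)
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: "demo-policy.example.com"
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups:   ["apps"]
      apiVersions: ["v1"]
      operations:  ["CREATE", "UPDATE"]
      resources:   ["deployments"]   # illustrative scope
  validations:
  - expression: "object.spec.replicas <= 5"
```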
`spec.validations` contains CEL expressions which use the [Common Expression Language (CEL)](https://github.com/google/cel-spec)
to validate the request. If an expression evaluates to false, the validation check is enforced
according to the `spec.failurePolicy` field.
You can quickly test CEL expressions in [CEL Playground](https://playcel.undistro.io).
To configure a validating admission policy for use in a cluster, a binding is required.
The following is an example of a ValidatingAdmissionPolicyBinding:
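(A minimal sketch; the binding name `demo-binding-test.example.com` matches the error message below, and the namespace selector is illustrative.)
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: "demo-binding-test.example.com"
spec:
  policyName: "demo-policy.example.com"
  validationActions: [Deny]
  matchResources:
    namespaceSelector:
      matchLabels:
        environment: test   # illustrative: only apply the policy in namespaces labeled environment=test
```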
If you try to create a Deployment with a replica count that does not satisfy the validation expression, an
error containing the following message is returned:
```none
ValidatingAdmissionPolicy 'demo-policy.example.com' with binding 'demo-binding-test.example.com' denied request: failed expression: object.spec.replicas <= 5
```
The above provides a simple example of using ValidatingAdmissionPolicy without a parameter configured.
#### Validation actions
Each `ValidatingAdmissionPolicyBinding` must specify one or more
`validationActions` to declare how `validations` of a policy are enforced.
The supported `validationActions` are:
- `Deny`: Validation failure results in a denied request.
- `Warn`: Validation failure is reported to the request client
as a [warning](/blog/2020/09/03/warnings/).
- `Audit`: Validation failure is included in the audit event for the API request.
For example, to both warn clients about a validation failure and to audit the
validation failures, use:
```yaml
validationActions: [Warn, Audit]
```
`Deny` and `Warn` may not be used together since this combination
needlessly duplicates the validation failure both in the
API response body and the HTTP warning headers.
A `validation` that evaluates to false is always enforced according to these
actions. Failures defined by the `failurePolicy` are enforced
according to these actions only if the `failurePolicy` is set to `Fail` (or not specified),
otherwise the failures are ignored.
See [Audit Annotations: validation failures](/docs/reference/labels-annotations-taints/audit-annotations/#validation-policy-admission-k8s-io-validation-failure)
for more details about the validation failure audit annotation.
### Parameter resources
Parameter resources allow a policy configuration to be separate from its definition.
A policy can define `paramKind`, which outlines the GVK (group, version, kind) of the parameter resource,
and then a policy binding ties a policy by name (via `policyName`) to a particular parameter resource via `paramRef`.
If parameter configuration is needed, the following is an example of a ValidatingAdmissionPolicy
with parameter configuration.
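(A sketch: the `ReplicaLimit` kind and `params.maxReplicas` field are described in the surrounding text, while the `rules.example.com/v1` API group and the policy name are illustrative.)
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: "replicalimit-policy.example.com"   # illustrative name
spec:
  failurePolicy: Fail
  paramKind:
    apiVersion: rules.example.com/v1        # illustrative API group for the ReplicaLimit CRD
    kind: ReplicaLimit
  matchConstraints:
    resourceRules:
    - apiGroups:   ["apps"]
      apiVersions: ["v1"]
      operations:  ["CREATE", "UPDATE"]
      resources:   ["deployments"]
  validations:
  - expression: "object.spec.replicas <= params.maxReplicas"
    reason: Invalid
```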
The `spec.paramKind` field of the ValidatingAdmissionPolicy specifies the kind of resources used
to parameterize this policy. For this example, it is configured by ReplicaLimit custom resources.
Note in this example how the CEL expression references the parameters via the CEL params variable,
e.g. `params.maxReplicas`. `spec.matchConstraints` specifies what resources this policy is
designed to validate. Note that native types such as `ConfigMap` could also be used as
a parameter reference.
The `spec.validations` fields contain CEL expressions. If an expression evaluates to false, the
validation check is enforced according to the `spec.failurePolicy` field.
The validating admission policy author is responsible for providing the ReplicaLimit parameter CRD.
To configure a validating admission policy for use in a cluster, a binding and parameter resource
are created. The following is an example of a ValidatingAdmissionPolicyBinding
that uses a **cluster-wide** param - the same param will be used to validate
every resource request that matches the binding:
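(A sketch using illustrative names consistent with the policy above.)
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: "replicalimit-binding-test.example.com"   # illustrative name
spec:
  policyName: "replicalimit-policy.example.com"
  validationActions: [Deny]
  paramRef:
    name: "replica-limit-test.example.com"        # name of the ReplicaLimit parameter object
    namespace: "default"
    parameterNotFoundAction: Deny
  matchResources:
    namespaceSelector:
      matchLabels:
        environment: test
```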
Notice this binding applies a parameter to the policy for all resources which
are in the `test` environment.
The parameter resource could be as follows:
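(A sketch of a `ReplicaLimit` object, reusing the illustrative group and names from above.)
```yaml
apiVersion: rules.example.com/v1
kind: ReplicaLimit
metadata:
  name: "replica-limit-test.example.com"
  namespace: "default"
maxReplicas: 3
```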
This policy parameter resource limits deployments to a max of 3 replicas.
An admission policy may have multiple bindings. To bind all other environments
to have a maxReplicas limit of 100, create another ValidatingAdmissionPolicyBinding:
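(Again a sketch with illustrative names; the `NotIn` selector excludes the `test` environment.)
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: "replicalimit-binding-nontest.example.com"
spec:
  policyName: "replicalimit-policy.example.com"
  validationActions: [Deny]
  paramRef:
    name: "replica-limit-prod.example.com"
    namespace: "default"
    parameterNotFoundAction: Deny
  matchResources:
    namespaceSelector:
      matchExpressions:
      - key: environment
        operator: NotIn
        values:
        - test
```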
Notice this binding applies a different parameter to resources which
are not in the `test` environment.
And have a parameter resource:
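(A second illustrative `ReplicaLimit` object, carrying the higher limit.)
```yaml
apiVersion: rules.example.com/v1
kind: ReplicaLimit
metadata:
  name: "replica-limit-prod.example.com"
  namespace: "default"
maxReplicas: 100
```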
For each admission request, the API server evaluates CEL expressions of each
(policy, binding, param) combination that match the request. For a request
to be admitted it must pass **all** evaluations.
If multiple bindings match the request, the policy will be evaluated for each,
and they must all pass evaluation for the policy to be considered passed.
If multiple parameters match a single binding, the policy rules will be evaluated
for each param, and they too must all pass for the binding to be considered passed.
Bindings can have overlapping match criteria. The policy is evaluated for each
matching binding-parameter combination. A policy may even be evaluated multiple
times if multiple bindings match it, or a single binding that matches multiple
parameters.
The params object representing a parameter resource will not be set if a parameter resource has
not been bound, so for policies requiring a parameter resource, it can be useful to add a check to
ensure one has been bound. A parameter resource will not be bound and `params` will be null
if `paramKind` of the policy, or `paramRef` of the binding are not specified.
For use cases that require parameter configuration, we recommend adding a param check in
`spec.validations[0].expression`:
```
- expression: "params != null"
message: "params missing but required to bind to this policy"
```
#### Optional parameters
It can be convenient to be able to have optional parameters as part of a parameter resource, and
only validate them if present. CEL provides `has()`, which checks if the key passed to it exists.
CEL also implements Boolean short-circuiting. If the first half of a logical OR evaluates to true,
it won’t evaluate the other half (since the result of the entire OR will be true regardless).
Combining the two, we can provide a way to validate optional parameters:
`!has(params.optionalNumber) || (params.optionalNumber >= 5 && params.optionalNumber <= 10)`
Here, we first check that the optional parameter is present with `!has(params.optionalNumber)`.
- If `optionalNumber` hasn’t been defined, then the expression short-circuits since
`!has(params.optionalNumber)` will evaluate to true.
- If `optionalNumber` has been defined, then the latter half of the CEL expression will be
evaluated, and optionalNumber will be checked to ensure that it contains a value between 5 and
10 inclusive.
#### Per-namespace Parameters
As the author of a ValidatingAdmissionPolicy and its ValidatingAdmissionPolicyBinding,
you can choose to specify cluster-wide, or per-namespace parameters.
If you specify a `namespace` for the binding's `paramRef`, the control plane only
searches for parameters in that namespace.
However, if `namespace` is not specified in the ValidatingAdmissionPolicyBinding, the
API server can search for relevant parameters in the namespace that a request is against.
For example, if you make a request to modify a ConfigMap in the `default` namespace and
there is a relevant ValidatingAdmissionPolicyBinding with no `namespace` set, then the
API server looks for a parameter object in `default`.
This design enables policy configuration that depends on the namespace
of the resource being manipulated, for more fine-tuned control.
#### Parameter selector
In addition to specifying a parameter in a binding by `name`, you may
choose instead to specify a label selector, such that all resources of the
policy's `paramKind`, and the param's `namespace` (if applicable) that match the
label selector are selected for evaluation. See [labels and selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors) for more information on how label selectors match resources.
If multiple parameters are found to meet the condition, the policy's rules are
evaluated for each parameter found and the results will be ANDed together.
If `namespace` is provided, only objects of the `paramKind` in the provided
namespace are eligible for selection. Otherwise, when `namespace` is empty and
`paramKind` is namespace-scoped, the `namespace` used in the request being
admitted will be used.
#### Authorization checks {#authorization-check}
Parameter resources are subject to an authorization check.
A user is expected to have `read` access to the resources referenced by `paramKind` in
`ValidatingAdmissionPolicy` and `paramRef` in `ValidatingAdmissionPolicyBinding`.
Note that if a resource in `paramKind` fails to resolve via the restmapper, `read` access to all
resources in the group is required.
### Failure Policy
`failurePolicy` defines how misconfigurations and CEL expressions that evaluate to an error from the
admission policy are handled. Allowed values are `Ignore` or `Fail`.
- `Ignore` means that an error calling the ValidatingAdmissionPolicy is ignored and the API
request is allowed to continue.
- `Fail` means that an error calling the ValidatingAdmissionPolicy causes the admission to fail
and the API request to be rejected.
Note that the `failurePolicy` is defined inside `ValidatingAdmissionPolicy`:
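For example, a policy that tolerates its own evaluation errors would set the field as follows (a sketch, with the other fields elided):
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
spec:
  ...
  failurePolicy: Ignore
```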
### Validation Expression
`spec.validations[i].expression` represents the expression which will be evaluated by CEL.
To learn more, see the [CEL language specification](https://github.com/google/cel-spec)
CEL expressions have access to the contents of the Admission request/response, organized into CEL
variables as well as some other useful variables:
- `object` - The object from the incoming request. The value is null for DELETE requests.
- `oldObject` - The existing object. The value is null for CREATE requests.
- `request` - Attributes of the [admission request](/docs/reference/config-api/apiserver-admission.v1/#admission-k8s-io-v1-AdmissionRequest).
- `params` - Parameter resource referred to by the policy binding being evaluated. The value is
null if `ParamKind` is not specified.
- `namespaceObject` - The namespace, as a Kubernetes resource, that the incoming object belongs to.
The value is null if the incoming object is cluster-scoped.
- `authorizer` - A CEL Authorizer. May be used to perform authorization checks for the principal
(authenticated user) of the request. See
[AuthzSelectors](https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#AuthzSelectors) and
[Authz](https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#Authz) in the Kubernetes CEL library
documentation for more details.
- `authorizer.requestResource` - A shortcut for an authorization check configured with the request
resource (group, resource, (subresource), namespace, name).
The `apiVersion`, `kind`, `metadata.name` and `metadata.generateName` are always accessible from
the root of the object. No other metadata properties are accessible.
Equality on arrays with list type of 'set' or 'map' ignores element order, i.e. [1, 2] == [2, 1].
Concatenation on arrays with `x-kubernetes-list-type` uses the semantics of the list type:
- 'set': `X + Y` performs a union where the array positions of all elements in `X` are preserved and
non-intersecting elements in `Y` are appended, retaining their partial order.
- 'map': `X + Y` performs a merge where the array positions of all keys in `X` are preserved but the values
are overwritten by values in `Y` when the key sets of `X` and `Y` intersect. Elements in `Y` with
non-intersecting keys are appended, retaining their partial order.
#### Validation expression examples
| Expression | Purpose |
|----------------------------------------------------------------------------------------------| ------------ |
| `object.minReplicas <= object.replicas && object.replicas <= object.maxReplicas` | Validate that the three fields defining replicas are ordered appropriately |
| `'Available' in object.stateCounts` | Validate that an entry with the 'Available' key exists in a map |
| `(size(object.list1) == 0) != (size(object.list2) == 0)` | Validate that one of two lists is non-empty, but not both |
| <code>!('MY_KEY' in object.map1) &#124;&#124; object.map1['MY_KEY'].matches('^[a-zA-Z]*$')</code> | Validate the value of a map for a specific key, if it is in the map |
| `object.envars.filter(e, e.name == 'MY_ENV').all(e, e.value.matches('^[a-zA-Z]*$'))` | Validate the 'value' field of a listMap entry where key field 'name' is 'MY_ENV' |
| `has(object.expired) && object.created + object.ttl < object.expired` | Validate that 'expired' date is after a 'create' date plus a 'ttl' duration |
| `object.health.startsWith('ok')` | Validate a 'health' string field has the prefix 'ok' |
| `object.widgets.exists(w, w.key == 'x' && w.foo < 10)` | Validate that the 'foo' property of a listMap item with a key 'x' is less than 10 |
| `type(object) == string ? object == '100%' : object == 1000` | Validate an int-or-string field for both the int and string cases |
| `object.metadata.name.startsWith(object.prefix)` | Validate that an object's name has the prefix of another field value |
| `object.set1.all(e, !(e in object.set2))` | Validate that two listSets are disjoint |
| `size(object.names) == size(object.details) && object.names.all(n, n in object.details)` | Validate the 'details' map is keyed by the items in the 'names' listSet |
| `size(object.clusters.filter(c, c.name == object.primary)) == 1` | Validate that the 'primary' property has one and only one occurrence in the 'clusters' listMap |
Read [Supported evaluation on CEL](https://github.com/google/cel-spec/blob/v0.6.0/doc/langdef.md#evaluation)
for more information about CEL rules.
`spec.validations[i].reason` represents a machine-readable description of why this validation failed.
If this is the first validation in the list to fail, this reason, as well as the corresponding
HTTP response code, are used in the HTTP response to the client.
The currently supported reasons are: `Unauthorized`, `Forbidden`, `Invalid`, `RequestEntityTooLarge`.
If not set, `StatusReasonInvalid` is used in the response to the client.
### Matching requests: `matchConditions`
You can define _match conditions_ for a `ValidatingAdmissionPolicy` if you need fine-grained request filtering. These
conditions are useful if you find that match rules, `objectSelectors` and `namespaceSelectors` still
don't provide the filtering you want. Match conditions are
[CEL expressions](/docs/reference/using-api/cel/). All match conditions must evaluate to true for the
resource to be evaluated.
Here is an example illustrating a few different uses for match conditions:
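(A sketch; the condition names, expressions, and the wildcard match constraints are illustrative.)
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: "demo-policy.example.com"
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups:   ["*"]
      apiVersions: ["*"]
      operations:  ["CREATE", "UPDATE"]
      resources:   ["*"]
  matchConditions:
  - name: 'exclude-leases'              # Each match condition must have a unique name.
    expression: '!(request.resource.group == "coordination.k8s.io" && request.resource.resource == "leases")'   # Match non-lease resources.
  - name: 'exclude-kubelet-requests'
    expression: '!("system:nodes" in request.userInfo.groups)'   # Match requests made by non-node users.
  validations:
  - expression: "!object.metadata.name.contains('demo') || (object.metadata.namespace == 'demo')"
```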
Match conditions have access to the same CEL variables as validation expressions.
In the event of an error evaluating a match condition the policy is not evaluated. Whether to reject
the request is determined as follows:
1. If **any** match condition evaluated to `false` (regardless of other errors), the API server skips the policy.
2. Otherwise:
- for [`failurePolicy: Fail`](#failure-policy), reject the request (without evaluating the policy).
- for [`failurePolicy: Ignore`](#failure-policy), proceed with the request but skip the policy.
### Audit annotations
`auditAnnotations` may be used to include audit annotations in the audit event of the API request.
For example, here is an admission policy with an audit annotation:
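(A sketch; the annotation key and value expression line up with the audit event shown below, while the policy name and match constraints are illustrative.)
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: "demo-policy.example.com"
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups:   ["apps"]
      apiVersions: ["v1"]
      operations:  ["CREATE", "UPDATE"]
      resources:   ["deployments"]
  auditAnnotations:
  - key: "high-replica-count"
    # Records an annotation only when replicas exceed 50; otherwise evaluates to null and is omitted.
    valueExpression: "object.spec.replicas > 50 ? 'Deployment spec.replicas set to ' + string(object.spec.replicas) : null"
```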
When an API request is validated with this admission policy, the resulting audit event will look like:
```
# the audit event recorded
{
"kind": "Event",
"apiVersion": "audit.k8s.io/v1",
"annotations": {
"demo-policy.example.com/high-replica-count": "Deployment spec.replicas set to 128"
# other annotations
...
}
# other fields
...
}
```
In this example the annotation will only be included if the `spec.replicas` of the Deployment is more than
50, otherwise the CEL expression evaluates to null and the annotation will not be included.
Note that audit annotation keys are prefixed by the name of the `ValidatingAdmissionPolicy` and a `/`. If
another admission controller, such as an admission webhook, uses the exact same audit annotation key, the
value of the first admission controller to include the audit annotation will be included in the audit
event and all other values will be ignored.
### Message expression
To return a more friendly message when the policy rejects a request, we can use a CEL expression
to compose a message with `spec.validations[i].messageExpression`. Similar to the validation expression,
a message expression has access to `object`, `oldObject`, `request`, `params`, and `namespaceObject`.
Unlike validations, a message expression must evaluate to a string.
For example, to better inform the user of the reason of denial when the policy refers to a parameter,
we can have the following validation:
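(A sketch; the policy name and message wording match the output shown below, and the parameter kind reuses the illustrative `ReplicaLimit` CRD from earlier.)
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: "deploy-replica-policy.example.com"
spec:
  paramKind:
    apiVersion: rules.example.com/v1
    kind: ReplicaLimit
  matchConstraints:
    resourceRules:
    - apiGroups:   ["apps"]
      apiVersions: ["v1"]
      operations:  ["CREATE", "UPDATE"]
      resources:   ["deployments"]
  validations:
  - expression: "object.spec.replicas <= params.maxReplicas"
    messageExpression: "'object.spec.replicas must be no greater than ' + string(params.maxReplicas)"
    reason: Invalid
```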
After creating a params object that limits the replicas to 3 and setting up the binding,
when we try to create a deployment with 5 replicas, we will receive the following message.
```console
$ kubectl create deploy --image=nginx nginx --replicas=5
error: failed to create deployment: deployments.apps "nginx" is forbidden: ValidatingAdmissionPolicy 'deploy-replica-policy.example.com' with binding 'demo-binding-test.example.com' denied request: object.spec.replicas must be no greater than 3
```
This is more informative than a static message of "too many replicas".
The message expression takes precedence over the static message defined in `spec.validations[i].message` if both are defined.
However, if the message expression fails to evaluate, the static message will be used instead.
Additionally, if the message expression evaluates to a multi-line string,
the evaluation result will be discarded and the static message will be used if present.
Note that static message is validated against multi-line strings.
### Type checking
When a policy definition is created or updated, the validation process parses the expressions it contains
and reports any syntax errors, rejecting the definition if any errors are found.
Afterward, the referred variables are checked for type errors, including missing fields and type confusion,
against the matched types of `spec.matchConstraints`.
The result of type checking can be retrieved from `status.typeChecking`.
The presence of `status.typeChecking` indicates the completion of type checking,
and an empty `status.typeChecking` means that no errors were detected.
For example, given the following policy definition:
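(A sketch whose expression mistakenly refers to `object.replicas` instead of `object.spec.replicas`; the names are illustrative.)
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: "deploy-replica-policy.example.com"
spec:
  matchConstraints:
    resourceRules:
    - apiGroups:   ["apps"]
      apiVersions: ["v1"]
      operations:  ["CREATE", "UPDATE"]
      resources:   ["deployments"]
  validations:
  - expression: "object.replicas > 1"   # should be "object.spec.replicas > 1"
    message: "must be replicated"
    reason: Invalid
```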
The status will yield the following information:
```yaml
status:
typeChecking:
expressionWarnings:
- fieldRef: spec.validations[0].expression
warning: |-
apps/v1, Kind=Deployment: ERROR: <input>:1:7: undefined field 'replicas'
| object.replicas > 1
| ......^
```
If multiple resources are matched in `spec.matchConstraints`, all of the matched resources will be checked.
For example, the following policy definition matches multiple resource types, and the warning
message contains a type checking result for each matched type.
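(A sketch matching both Deployments and ReplicaSets; the names are illustrative.)
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: "replica-policy.example.com"
spec:
  matchConstraints:
    resourceRules:
    - apiGroups:   ["apps"]
      apiVersions: ["v1"]
      operations:  ["CREATE", "UPDATE"]
      resources:   ["deployments", "replicasets"]
  validations:
  - expression: "object.replicas > 1"   # should be "object.spec.replicas > 1"
    message: "must be replicated"
    reason: Invalid
```
The corresponding `status.typeChecking` would then look like this: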
```yaml
status:
typeChecking:
expressionWarnings:
- fieldRef: spec.validations[0].expression
warning: |-
apps/v1, Kind=Deployment: ERROR: <input>:1:7: undefined field 'replicas'
| object.replicas > 1
| ......^
apps/v1, Kind=ReplicaSet: ERROR: <input>:1:7: undefined field 'replicas'
| object.replicas > 1
| ......^
```
Type checking has the following limitations:
- No wildcard matching. If `spec.matchConstraints.resourceRules` contains `"*"` in any of `apiGroups`, `apiVersions` or `resources`,
the types that `"*"` matches will not be checked.
- The number of matched types is limited to 10. This is to prevent a policy that manually specifies too many types
  from consuming excessive computing resources. In the order of ascending group, version, and then resource, the 11th combination and beyond are ignored.
- Type checking does not affect the policy behavior in any way. Even if the type checking detects errors, the policy will continue
  to be evaluated. If errors do occur during evaluation, the failure policy will decide the outcome.
- Type checking does not apply to CRDs, including matched CRD types and references via `paramKind`. Support for CRDs will come in a future release.
### Variable composition
If an expression grows too complicated, or part of the expression is reusable and computationally expensive to evaluate,
you can extract some part of the expressions into variables. A variable is a named expression that can be referred to later
in `variables` in other expressions.
```yaml
spec:
variables:
- name: foo
expression: "'foo' in object.spec.metadata.labels ? object.spec.metadata.labels['foo'] : 'default'"
validations:
- expression: variables.foo == 'bar'
```
A variable is lazily evaluated when it is first referenced. Any error that occurs during the evaluation will be
reported during the evaluation of the referring expression. Both the result and any error are memoized and
count only once towards the runtime cost.
The order of variables is important because a variable can refer to other variables that are defined before it.
This ordering prevents circular references.
The following is a more complex example of enforcing that image repo names match the environment defined in its namespace.
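(A sketch; the policy name and the denial message match the error output shown below, while the variable names and the `example.com` registry filter are illustrative.)
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: "image-matches-namespace-environment.policy.example.com"
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups:   ["apps"]
      apiVersions: ["v1"]
      operations:  ["CREATE", "UPDATE"]
      resources:   ["deployments"]
  variables:
  - name: environment
    expression: "'environment' in namespaceObject.metadata.labels ? namespaceObject.metadata.labels['environment'] : 'prod'"
  - name: exempt
    expression: "'exempt' in object.metadata.labels && object.metadata.labels['exempt'] == 'true'"
  - name: containers
    expression: "object.spec.template.spec.containers"
  - name: containersToCheck
    expression: "variables.containers.filter(c, c.image.contains('example.com/'))"
  validations:
  - expression: "variables.exempt || variables.containersToCheck.all(c, c.image.startsWith(variables.environment + '.'))"
    messageExpression: "'only ' + variables.environment + ' images are allowed in namespace ' + namespaceObject.metadata.name"
```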
With the policy bound to the namespace `default`, which is labeled `environment: prod`,
the following attempt to create a deployment would be rejected.
```shell
kubectl create deploy --image=dev.example.com/nginx invalid
```
The error message is similar to this.
```console
error: failed to create deployment: deployments.apps "invalid" is forbidden: ValidatingAdmissionPolicy 'image-matches-namespace-environment.policy.example.com' with binding 'demo-binding-test.example.com' denied request: only prod images are allowed in namespace default
```
---
reviewers:
- erictune
- lavalamp
- deads2k
- liggitt
title: Webhook Mode
content_type: concept
weight: 36
---
<!-- overview -->
A WebHook is an HTTP callback: an HTTP POST that occurs when something happens; a simple event-notification via HTTP POST. A web application implementing WebHooks will POST a message to a URL when certain things happen.
<!-- body -->
When specified, mode `Webhook` causes Kubernetes to query an outside REST
service when determining user privileges.
## Configuration File Format
Mode `Webhook` requires a file for HTTP configuration, specified by the
`--authorization-webhook-config-file=SOME_FILENAME` flag.
The configuration file uses the [kubeconfig](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)
file format. Within the file "users" refers to the API Server webhook and
"clusters" refers to the remote service.
A configuration example which uses HTTPS client auth:
```yaml
# Kubernetes API version
apiVersion: v1
# kind of the API object
kind: Config
# clusters refers to the remote service.
clusters:
- name: name-of-remote-authz-service
cluster:
# CA for verifying the remote service.
certificate-authority: /path/to/ca.pem
# URL of remote service to query. Must use 'https'. May not include parameters.
server: https://authz.example.com/authorize
# users refers to the API Server's webhook configuration.
users:
- name: name-of-api-server
user:
client-certificate: /path/to/cert.pem # cert for the webhook plugin to use
client-key: /path/to/key.pem # key matching the cert
# kubeconfig files require a context. Provide one for the API Server.
current-context: webhook
contexts:
- context:
cluster: name-of-remote-authz-service
user: name-of-api-server
name: webhook
```
## Request Payloads
When faced with an authorization decision, the API Server POSTs a JSON-serialized
`authorization.k8s.io/v1beta1` `SubjectAccessReview` object describing the
action. This object contains fields describing the user attempting to make the
request, and either details about the resource being accessed or requests
attributes.
Note that webhook API objects are subject to the same [versioning compatibility rules](/docs/concepts/overview/kubernetes-api/)
as other Kubernetes API objects. Implementers should be aware of looser
compatibility promises for beta objects and check the "apiVersion" field of the
request to ensure correct deserialization. Additionally, the API Server must
enable the `authorization.k8s.io/v1beta1` API extensions group (`--runtime-config=authorization.k8s.io/v1beta1=true`).
An example request body:
```json
{
"apiVersion": "authorization.k8s.io/v1beta1",
"kind": "SubjectAccessReview",
"spec": {
"resourceAttributes": {
"namespace": "kittensandponies",
"verb": "get",
"group": "unicorn.example.org",
"resource": "pods"
},
"user": "jane",
"group": [
"group1",
"group2"
]
}
}
```
The remote service is expected to fill the `status` field of
the request and respond to either allow or disallow access. The response body's
`spec` field is ignored and may be omitted. A permissive response would return:
```json
{
"apiVersion": "authorization.k8s.io/v1beta1",
"kind": "SubjectAccessReview",
"status": {
"allowed": true
}
}
```
For disallowing access there are two methods.
The first method is preferred in most cases, and indicates the authorization
webhook does not allow, or has "no opinion" about the request, but if other
authorizers are configured, they are given a chance to allow the request.
If there are no other authorizers, or none of them allow the request, the
request is forbidden. The webhook would return:
```json
{
"apiVersion": "authorization.k8s.io/v1beta1",
"kind": "SubjectAccessReview",
"status": {
"allowed": false,
"reason": "user does not have read access to the namespace"
}
}
```
The second method denies immediately, short-circuiting evaluation by other
configured authorizers. This should only be used by webhooks that have
detailed knowledge of the full authorizer configuration of the cluster.
The webhook would return:
```json
{
"apiVersion": "authorization.k8s.io/v1beta1",
"kind": "SubjectAccessReview",
"status": {
"allowed": false,
"denied": true,
"reason": "user does not have read access to the namespace"
}
}
```
Access to non-resource paths is sent as:
```json
{
"apiVersion": "authorization.k8s.io/v1beta1",
"kind": "SubjectAccessReview",
"spec": {
"nonResourceAttributes": {
"path": "/debug",
"verb": "get"
},
"user": "jane",
"group": [
"group1",
"group2"
]
}
}
```
With the `AuthorizeWithSelectors` feature enabled, field and label selectors in the request
are passed to the authorization webhook. The webhook can make authorization decisions
informed by the scoped field and label selectors, if it wishes.
The [SubjectAccessReview API documentation](/docs/reference/kubernetes-api/authorization-resources/subject-access-review-v1/)
gives guidelines for how these fields should be interpreted and handled by authorization webhooks,
specifically using the parsed requirements rather than the raw selector strings,
and how to handle unrecognized operators safely.
```json
{
"apiVersion": "authorization.k8s.io/v1beta1",
"kind": "SubjectAccessReview",
"spec": {
"resourceAttributes": {
"verb": "list",
"group": "",
"resource": "pods",
"fieldSelector": {
"requirements": [
{"key":"spec.nodeName", "operator":"In", "values":["mynode"]}
]
},
"labelSelector": {
"requirements": [
{"key":"example.com/mykey", "operator":"In", "values":["myvalue"]}
]
}
},
"user": "jane",
"group": [
"group1",
"group2"
]
}
}
```
Non-resource paths include: `/api`, `/apis`, `/metrics`,
`/logs`, `/debug`, `/healthz`, `/livez`, `/openapi/v2`, `/readyz`, and
`/version`. Clients require access to `/api`, `/api/*`, `/apis`, `/apis/*`,
and `/version` to discover what resources and versions are present on the server.
Access to other non-resource paths can be disallowed without restricting access
to the REST API.
For further information, refer to the
[SubjectAccessReview API documentation](/docs/reference/kubernetes-api/authorization-resources/subject-access-review-v1/)
and
[webhook.go implementation](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/plugin/pkg/authorizer/webhook/webhook.go).
---
reviewers:
- lavalamp
- davidopp
- derekwaynecarr
- erictune
- janetkuo
- thockin
title: Admission Control in Kubernetes
linkTitle: Admission Control
content_type: concept
weight: 40
---
<!-- overview -->
This page provides an overview of _admission controllers_.
An admission controller is a piece of code that intercepts requests to the
Kubernetes API server prior to persistence of the resource, but after the request
is authenticated and authorized.
Several important features of Kubernetes require an admission controller to be enabled in order
to properly support the feature. As a result, a Kubernetes API server that is not properly
configured with the right set of admission controllers is an incomplete server that will not
support all the features you expect.
<!-- body -->
## What are they?
Admission controllers are code within the Kubernetes API server that checks the
data arriving in a request to modify a resource.
Admission controllers apply to requests that create, delete, or modify objects.
Admission controllers can also block custom verbs, such as a request to connect to a
pod via an API server proxy. Admission controllers do _not_ (and cannot) block requests
to read (**get**, **watch** or **list**) objects, because reads bypass the admission
control layer.
Admission control mechanisms may be _validating_, _mutating_, or both. Mutating
controllers may modify the data for the resource being modified; validating controllers may not.
The admission controllers in Kubernetes consist of the
[list](#what-does-each-admission-controller-do) below, are compiled into the
`kube-apiserver` binary, and may only be configured by the cluster
administrator.
### Admission control extension points
Within the full [list](#what-does-each-admission-controller-do), there are three
special controllers:
[MutatingAdmissionWebhook](#mutatingadmissionwebhook),
[ValidatingAdmissionWebhook](#validatingadmissionwebhook), and
[ValidatingAdmissionPolicy](#validatingadmissionpolicy).
The two webhook controllers execute the mutating and validating (respectively)
[admission control webhooks](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks)
which are configured in the API. ValidatingAdmissionPolicy provides a way to embed
declarative validation code within the API, without relying on any external HTTP
callouts.
You can use these three admission controllers to customize cluster behavior at
admission time.
### Admission control phases
The admission control process proceeds in two phases. In the first phase,
mutating admission controllers are run. In the second phase, validating
admission controllers are run. Note again that some of the controllers are
both.
If any of the controllers in either phase reject the request, the entire
request is rejected immediately and an error is returned to the end-user.
Finally, in addition to sometimes mutating the object in question, admission
controllers may sometimes have side effects, that is, mutate related
resources as part of request processing. Incrementing quota usage is the
canonical example of why this is necessary. Any such side-effect needs a
corresponding reclamation or reconciliation process, as a given admission
controller does not know for sure that a given request will pass all of the
other admission controllers.
## How do I turn on an admission controller?
The Kubernetes API server flag `enable-admission-plugins` takes a comma-delimited list of admission control plugins to invoke prior to modifying objects in the cluster.
For example, the following command line enables the `NamespaceLifecycle` and the `LimitRanger`
admission control plugins:
```shell
kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger ...
```
Depending on the way your Kubernetes cluster is deployed and how the API server is
started, you may need to apply the settings in different ways. For example, you may
have to modify the systemd unit file if the API server is deployed as a systemd
service, or you may modify the manifest file for the API server if Kubernetes is deployed
in a self-hosted way.
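For example, on a cluster where the API server runs as a static Pod (as with kubeadm), you would add the flag to the container command in the API server manifest. The excerpt below is a sketch; the file path and surrounding fields assume a kubeadm-style layout:
```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt; kubeadm-style layout assumed)
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --enable-admission-plugins=NamespaceLifecycle,LimitRanger
    # ...other flags left unchanged
```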
## How do I turn off an admission controller?
The Kubernetes API server flag `disable-admission-plugins` takes a comma-delimited list of admission control plugins to be disabled, even if they are in the list of plugins enabled by default.
```shell
kube-apiserver --disable-admission-plugins=PodNodeSelector,AlwaysDeny ...
```
## Which plugins are enabled by default?
To see which admission plugins are enabled:
```shell
kube-apiserver -h | grep enable-admission-plugins
```
In Kubernetes, the default ones are:
```shell
CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, LimitRanger, MutatingAdmissionWebhook, NamespaceLifecycle, PersistentVolumeClaimResize, PodSecurity, Priority, ResourceQuota, RuntimeClass, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook
```
## What does each admission controller do?
### AlwaysAdmit {#alwaysadmit}
**Type**: Validating.
This admission controller allows all pods into the cluster. It is **deprecated** because
its behavior is the same as if there were no admission controller at all.
### AlwaysDeny {#alwaysdeny}
**Type**: Validating.
Rejects all requests. AlwaysDeny is **deprecated** as it has no real meaning.
### AlwaysPullImages {#alwayspullimages}
**Type**: Mutating and Validating.
This admission controller modifies every new Pod to force the image pull policy to `Always`. This is useful in a
multitenant cluster so that users can be assured that their private images can only be used by those
who have the credentials to pull them. Without this admission controller, once an image has been pulled to a
node, any pod from any user can use it by knowing the image's name (assuming the Pod is
scheduled onto the right node), without any authorization check against the image. When this admission controller
is enabled, images are always pulled prior to starting containers, which means valid credentials are
required.
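As a sketch of the effect, a Pod submitted with any other pull policy is persisted as if it had requested the following (image and names are illustrative):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-workload
spec:
  containers:
  - name: app
    image: registry.example.com/team/app:1.2.3
    imagePullPolicy: Always   # forced by AlwaysPullImages, whatever the client submitted
```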
### CertificateApproval {#certificateapproval}
**Type**: Validating.
This admission controller observes requests to approve CertificateSigningRequest resources and performs additional
authorization checks to ensure the approving user has permission to **approve** certificate requests with the
`spec.signerName` requested on the CertificateSigningRequest resource.
See [Certificate Signing Requests](/docs/reference/access-authn-authz/certificate-signing-requests/) for more
information on the permissions required to perform different actions on CertificateSigningRequest resources.
### CertificateSigning {#certificatesigning}
**Type**: Validating.
This admission controller observes updates to the `status.certificate` field of CertificateSigningRequest resources
and performs additional authorization checks to ensure the signing user has permission to **sign** certificate
requests with the `spec.signerName` requested on the CertificateSigningRequest resource.
See [Certificate Signing Requests](/docs/reference/access-authn-authz/certificate-signing-requests/) for more
information on the permissions required to perform different actions on CertificateSigningRequest resources.
### CertificateSubjectRestriction {#certificatesubjectrestriction}
**Type**: Validating.
This admission controller observes creation of CertificateSigningRequest resources that have a `spec.signerName`
of `kubernetes.io/kube-apiserver-client`. It rejects any request that specifies a 'group' (or 'organization attribute')
of `system:masters`.
### DefaultIngressClass {#defaultingressclass}
**Type**: Mutating.
This admission controller observes creation of `Ingress` objects that do not request any specific
ingress class and automatically adds a default ingress class to them. This way, users that do not
request any special ingress class do not need to care about them at all and they will get the
default one.
This admission controller does not do anything when no default ingress class is configured. When more than one ingress
class is marked as default, it rejects any creation of `Ingress` with an error and an administrator
must revisit their `IngressClass` objects and mark only one as default (with the annotation
"ingressclass.kubernetes.io/is-default-class"). This admission controller ignores any `Ingress`
updates; it acts only on creation.
See the [Ingress](/docs/concepts/services-networking/ingress/) documentation for more about ingress
classes and how to mark one as default.
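For reference, a minimal sketch of an `IngressClass` marked as the default (the controller name is illustrative):
```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: example-default-class
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: example.com/ingress-controller  # illustrative controller name
```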
### DefaultStorageClass {#defaultstorageclass}
**Type**: Mutating.
This admission controller observes creation of `PersistentVolumeClaim` objects that do not request any specific storage class
and automatically adds a default storage class to them.
This way, users that do not request any special storage class do not need to care about them at all and they
will get the default one.
This admission controller does not do anything when no default storage class is configured. When more than one storage
class is marked as default, it rejects any creation of `PersistentVolumeClaim` with an error and an administrator
must revisit their `StorageClass` objects and mark only one as default.
This admission controller ignores any `PersistentVolumeClaim` updates; it acts only on creation.
See [persistent volume](/docs/concepts/storage/persistent-volumes/) documentation about persistent volume claims and
storage classes and how to mark a storage class as default.
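For reference, a minimal sketch of a `StorageClass` marked as the default (the provisioner is illustrative):
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: example.com/provisioner  # illustrative provisioner
```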
### DefaultTolerationSeconds {#defaulttolerationseconds}
**Type**: Mutating.
This admission controller sets the default forgiveness toleration for pods to tolerate
the taints `notready:NoExecute` and `unreachable:NoExecute` based on the k8s-apiserver input parameters
`default-not-ready-toleration-seconds` and `default-unreachable-toleration-seconds` if the pods don't already
have toleration for taints `node.kubernetes.io/not-ready:NoExecute` or
`node.kubernetes.io/unreachable:NoExecute`.
The default value for `default-not-ready-toleration-seconds` and `default-unreachable-toleration-seconds` is 5 minutes.
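With the defaults above, a Pod that specifies no such tolerations ends up with roughly the following added to its spec:
```yaml
tolerations:
- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 300   # 5 minutes, from default-not-ready-toleration-seconds
- key: node.kubernetes.io/unreachable
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 300   # 5 minutes, from default-unreachable-toleration-seconds
```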
### DenyServiceExternalIPs
**Type**: Validating.
This admission controller rejects all net-new usage of the `Service` field `externalIPs`. This
feature is very powerful (allows network traffic interception) and not well
controlled by policy. When enabled, users of the cluster may not create new
Services which use `externalIPs` and may not add new values to `externalIPs` on
existing `Service` objects. Existing uses of `externalIPs` are not affected,
and users may remove values from `externalIPs` on existing `Service` objects.
Most users do not need this feature at all, and cluster admins should consider disabling it.
Clusters that do need to use this feature should consider using some custom policy to manage usage
of it.
This admission controller is disabled by default.
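For illustration, a Service like the following would be rejected at creation time while this controller is enabled (names and addresses are illustrative):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: legacy-entrypoint
spec:
  selector:
    app: example
  ports:
  - port: 80
  externalIPs:         # adding new values here is rejected while the controller is enabled
  - 203.0.113.10
```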
### EventRateLimit {#eventratelimit}
**Type**: Validating.
This admission controller mitigates the problem where the API server gets flooded by
requests to store new Events. The cluster admin can specify event rate limits by:
* Enabling the `EventRateLimit` admission controller;
* Referencing an `EventRateLimit` configuration file from the file provided to the API
server's command line flag `--admission-control-config-file`:
```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: EventRateLimit
path: eventconfig.yaml
...
```
There are four types of limits that can be specified in the configuration:
* `Server`: All Event requests (creation or modifications) received by the API server share a single bucket.
* `Namespace`: Each namespace has a dedicated bucket.
* `User`: Each user is allocated a bucket.
* `SourceAndObject`: A bucket is assigned by each combination of source and
involved object of the event.
Below is a sample `eventconfig.yaml` for such a configuration:
```yaml
apiVersion: eventratelimit.admission.k8s.io/v1alpha1
kind: Configuration
limits:
- type: Namespace
qps: 50
burst: 100
cacheSize: 2000
- type: User
qps: 10
burst: 50
```
See the [EventRateLimit Config API (v1alpha1)](/docs/reference/config-api/apiserver-eventratelimit.v1alpha1/)
for more details.
This admission controller is disabled by default.
### ExtendedResourceToleration {#extendedresourcetoleration}
**Type**: Mutating.
This plug-in facilitates creation of dedicated nodes with extended resources.
If operators want to create dedicated nodes with extended resources (like GPUs, FPGAs etc.), they are expected to
[taint the node](/docs/concepts/scheduling-eviction/taint-and-toleration/#example-use-cases) with the extended resource
name as the key. This admission controller, if enabled, automatically
adds tolerations for such taints to pods requesting extended resources, so users don't have to manually
add these tolerations.
This admission controller is disabled by default.
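As a sketch, assuming nodes tainted with the illustrative extended resource name `example.com/gpu`, a Pod requesting that resource would have a toleration similar to the following added automatically:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  containers:
  - name: cuda
    image: registry.example.com/cuda-app:latest
    resources:
      limits:
        example.com/gpu: 1
  tolerations:               # added by ExtendedResourceToleration, not by the user
  - key: example.com/gpu
    operator: Exists
    effect: NoSchedule
```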
### ImagePolicyWebhook {#imagepolicywebhook}
**Type**: Validating.
The ImagePolicyWebhook admission controller allows a backend webhook to make admission decisions.
This admission controller is disabled by default.
#### Configuration file format {#imagereview-config-file-format}
ImagePolicyWebhook uses a configuration file to set options for the behavior of the backend.
This file may be json or yaml and has the following format:
```yaml
imagePolicy:
kubeConfigFile: /path/to/kubeconfig/for/backend
# time in s to cache approval
allowTTL: 50
# time in s to cache denial
denyTTL: 50
# time in ms to wait between retries
retryBackoff: 500
# determines behavior if the webhook backend fails
defaultAllow: true
```
Reference the ImagePolicyWebhook configuration file from the file provided to the API server's command line flag `--admission-control-config-file`:
```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
path: imagepolicyconfig.yaml
...
```
Alternatively, you can embed the configuration directly in the file:
```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
configuration:
imagePolicy:
kubeConfigFile: <path-to-kubeconfig-file>
allowTTL: 50
denyTTL: 50
retryBackoff: 500
defaultAllow: true
```
The ImagePolicyWebhook config file must reference a
[kubeconfig](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)
formatted file which sets up the connection to the backend.
It is required that the backend communicate over TLS.
The kubeconfig file's `cluster` field must point to the remote service, and the `user` field
must contain the returned authorizer.
```yaml
# clusters refers to the remote service.
clusters:
- name: name-of-remote-imagepolicy-service
cluster:
certificate-authority: /path/to/ca.pem # CA for verifying the remote service.
server: https://images.example.com/policy # URL of remote service to query. Must use 'https'.
# users refers to the API server's webhook configuration.
users:
- name: name-of-api-server
user:
client-certificate: /path/to/cert.pem # cert for the webhook admission controller to use
client-key: /path/to/key.pem # key matching the cert
```
For additional HTTP configuration, refer to the
[kubeconfig](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) documentation.
#### Request payloads
When faced with an admission decision, the API Server POSTs a JSON serialized
`imagepolicy.k8s.io/v1alpha1` `ImageReview` object describing the action.
This object contains fields describing the containers being admitted, as well as
any pod annotations that match `*.image-policy.k8s.io/*`.
The webhook API objects are subject to the same versioning compatibility rules
as other Kubernetes API objects. Implementers should be aware of looser compatibility
promises for alpha objects and check the `apiVersion` field of the request to
ensure correct deserialization.
Additionally, the API Server must enable the `imagepolicy.k8s.io/v1alpha1` API extensions
group (`--runtime-config=imagepolicy.k8s.io/v1alpha1=true`).
An example request body:
```json
{
"apiVersion": "imagepolicy.k8s.io/v1alpha1",
"kind": "ImageReview",
"spec": {
"containers": [
{
"image": "myrepo/myimage:v1"
},
{
"image": "myrepo/myimage@sha256:beb6bd6a68f114c1dc2ea4b28db81bdf91de202a9014972bec5e4d9171d90ed"
}
],
"annotations": {
"mycluster.image-policy.k8s.io/ticket-1234": "break-glass"
},
"namespace": "mynamespace"
}
}
```
The remote service is expected to fill the `status` field of the request and
respond to either allow or disallow access. The response body's `spec` field is ignored, and
may be omitted. A permissive response would return:
```json
{
"apiVersion": "imagepolicy.k8s.io/v1alpha1",
"kind": "ImageReview",
"status": {
"allowed": true
}
}
```
To disallow access, the service would return:
```json
{
"apiVersion": "imagepolicy.k8s.io/v1alpha1",
"kind": "ImageReview",
"status": {
"allowed": false,
"reason": "image currently blacklisted"
}
}
```
For further documentation refer to the
[`imagepolicy.v1alpha1` API](/docs/reference/config-api/imagepolicy.v1alpha1/).
#### Extending with Annotations
All annotations on a Pod that match `*.image-policy.k8s.io/*` are sent to the webhook.
Sending annotations allows users who are aware of the image policy backend to
send extra information to it, and for different backend implementations to
accept different information.
Examples of information you might put here are:
* request to "break glass" to override a policy, in case of emergency.
* a ticket number from a ticket system that documents the break-glass request
* provide a hint to the policy server as to the imageID of the image being provided, to save it a lookup
In any case, the annotations are provided by the user and are not validated by Kubernetes in any way.
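For example, a Pod carrying such an annotation might look like the following (the annotation key and values mirror the illustrative request above):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emergency-fix
  annotations:
    mycluster.image-policy.k8s.io/ticket-1234: "break-glass"  # forwarded verbatim to the webhook backend
spec:
  containers:
  - name: app
    image: myrepo/myimage:v1
```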
### LimitPodHardAntiAffinityTopology {#limitpodhardantiaffinitytopology}
**Type**: Validating.
This admission controller denies any pod that defines `AntiAffinity` topology key other than
`kubernetes.io/hostname` in `requiredDuringSchedulingRequiredDuringExecution`.
This admission controller is disabled by default.
### LimitRanger {#limitranger}
**Type**: Mutating and Validating.
This admission controller will observe the incoming request and ensure that it does not violate
any of the constraints enumerated in the `LimitRange` object in a `Namespace`. If you are using
`LimitRange` objects in your Kubernetes deployment, you MUST use this admission controller to
enforce those constraints. LimitRanger can also be used to apply default resource requests to Pods
that don't specify any; currently, the default LimitRanger applies a 0.1 CPU requirement to all
Pods in the `default` namespace.
See the [LimitRange API reference](/docs/reference/kubernetes-api/policy-resources/limit-range-v1/)
and the [example of LimitRange](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)
for more details.
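As a brief sketch, a `LimitRange` such as the following would cause LimitRanger to default CPU requests and limits for containers in the `default` namespace (values are illustrative):
```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-defaults
  namespace: default
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: 100m   # applied to containers that do not set a CPU request
    default:
      cpu: 500m   # applied to containers that do not set a CPU limit
```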
### MutatingAdmissionWebhook {#mutatingadmissionwebhook}
**Type**: Mutating.
This admission controller calls any mutating webhooks which match the request. Matching
webhooks are called in serial; each one may modify the object if it desires.
This admission controller (as implied by the name) only runs in the mutating phase.
If a webhook called by this has side effects (for example, decrementing quota) it
*must* have a reconciliation system, as it is not guaranteed that subsequent
webhooks or validating admission controllers will permit the request to finish.
If you disable the MutatingAdmissionWebhook, you must also disable the
`MutatingWebhookConfiguration` object in the `admissionregistration.k8s.io/v1`
group/version via the `--runtime-config` flag; both are on by default.
#### Use caution when authoring and installing mutating webhooks
* Users may be confused when the objects they try to create are different from
what they get back.
* Built in control loops may break when the objects they try to create are
different when read back.
* Setting originally unset fields is less likely to cause problems than
overwriting fields set in the original request. Avoid doing the latter.
* Future changes to control loops for built-in resources or third-party resources
may break webhooks that work well today. Even when the webhook installation API
is finalized, not all possible webhook behaviors will be guaranteed to be supported
indefinitely.
### NamespaceAutoProvision {#namespaceautoprovision}
**Type**: Mutating.
This admission controller examines all incoming requests on namespaced resources and checks
if the referenced namespace does exist.
It creates a namespace if it cannot be found.
This admission controller is useful in deployments that do not want to restrict creation of
a namespace prior to its usage.
### NamespaceExists {#namespaceexists}
**Type**: Validating.
This admission controller checks all requests on namespaced resources other than `Namespace` itself.
If the namespace referenced from a request doesn't exist, the request is rejected.
### NamespaceLifecycle {#namespacelifecycle}
**Type**: Validating.
This admission controller enforces that a `Namespace` that is undergoing termination cannot have
new objects created in it, and ensures that requests in a non-existent `Namespace` are rejected.
This admission controller also prevents deletion of three system reserved namespaces `default`,
`kube-system`, `kube-public`.
A `Namespace` deletion kicks off a sequence of operations that remove all objects (pods, services,
etc.) in that namespace. In order to enforce integrity of that process, we strongly recommend
running this admission controller.
### NodeRestriction {#noderestriction}
**Type**: Validating.
This admission controller limits the `Node` and `Pod` objects a kubelet can modify. In order to be limited by this admission controller,
kubelets must use credentials in the `system:nodes` group, with a username in the form `system:node:<nodeName>`.
Such kubelets will only be allowed to modify their own `Node` API object, and only modify `Pod` API objects that are bound to their node.
kubelets are not allowed to update or remove taints from their `Node` API object.
The `NodeRestriction` admission plugin prevents kubelets from deleting their `Node` API object,
and enforces kubelet modification of labels under the `kubernetes.io/` or `k8s.io/` prefixes as follows:
* **Prevents** kubelets from adding/removing/updating labels with a `node-restriction.kubernetes.io/` prefix.
This label prefix is reserved for administrators to label their `Node` objects for workload isolation purposes,
and kubelets will not be allowed to modify labels with that prefix.
* **Allows** kubelets to add/remove/update these labels and label prefixes:
* `kubernetes.io/hostname`
* `kubernetes.io/arch`
* `kubernetes.io/os`
* `beta.kubernetes.io/instance-type`
* `node.kubernetes.io/instance-type`
* `failure-domain.beta.kubernetes.io/region` (deprecated)
* `failure-domain.beta.kubernetes.io/zone` (deprecated)
* `topology.kubernetes.io/region`
* `topology.kubernetes.io/zone`
* `kubelet.kubernetes.io/`-prefixed labels
* `node.kubernetes.io/`-prefixed labels
Use of any other labels under the `kubernetes.io` or `k8s.io` prefixes by kubelets is reserved,
and may be disallowed or allowed by the `NodeRestriction` admission plugin in the future.
Future versions may add additional restrictions to ensure kubelets have the minimal set of
permissions required to operate correctly.
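For example, an administrator (not a kubelet) can set a label under the reserved prefix on a `Node` object for workload isolation; the key and value below are illustrative:
```yaml
apiVersion: v1
kind: Node
metadata:
  name: node-1
  labels:
    node-restriction.kubernetes.io/team: storage   # settable by administrators; kubelets may not modify it
```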
### OwnerReferencesPermissionEnforcement {#ownerreferencespermissionenforcement}
**Type**: Validating.
This admission controller protects the access to the `metadata.ownerReferences` of an object
so that only users with **delete** permission to the object can change it.
This admission controller also protects the access to `metadata.ownerReferences[x].blockOwnerDeletion`
of an object, so that only users with **update** permission to the `finalizers`
subresource of the referenced *owner* can change it.
### PersistentVolumeClaimResize {#persistentvolumeclaimresize}
**Type**: Validating.
This admission controller implements additional validations for checking incoming
`PersistentVolumeClaim` resize requests.
Enabling the `PersistentVolumeClaimResize` admission controller is recommended.
This admission controller prevents resizing of all claims by default unless a claim's `StorageClass`
explicitly enables resizing by setting `allowVolumeExpansion` to `true`.
For example: all `PersistentVolumeClaim`s created from the following `StorageClass` support volume expansion:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: gluster-vol-default
provisioner: kubernetes.io/glusterfs
parameters:
resturl: "http://192.168.10.100:8080"
restuser: ""
secretNamespace: ""
secretName: ""
allowVolumeExpansion: true
```
For more information about persistent volume claims, see [PersistentVolumeClaims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims).
### PodNodeSelector {#podnodeselector}
**Type**: Validating.
This admission controller defaults and limits what node selectors may be used within a namespace
by reading a namespace annotation and a global configuration.
This admission controller is disabled by default.
#### Configuration file format
`PodNodeSelector` uses a configuration file to set options for the behavior of the backend.
Note that the configuration file format will move to a versioned file in a future release.
This file may be json or yaml and has the following format:
```yaml
podNodeSelectorPluginConfig:
clusterDefaultNodeSelector: name-of-node-selector
namespace1: name-of-node-selector
namespace2: name-of-node-selector
```
Reference the `PodNodeSelector` configuration file from the file provided to the API server's
command line flag `--admission-control-config-file`:
```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodNodeSelector
path: podnodeselector.yaml
...
```
#### Configuration Annotation Format
`PodNodeSelector` uses the annotation key `scheduler.alpha.kubernetes.io/node-selector` to assign
node selectors to namespaces.
```yaml
apiVersion: v1
kind: Namespace
metadata:
annotations:
scheduler.alpha.kubernetes.io/node-selector: name-of-node-selector
name: namespace3
```
#### Internal Behavior
This admission controller has the following behavior:
1. If the `Namespace` has an annotation with a key `scheduler.alpha.kubernetes.io/node-selector`,
use its value as the node selector.
2. If the namespace lacks such an annotation, use the `clusterDefaultNodeSelector` defined in the
`PodNodeSelector` plugin configuration file as the node selector.
3. Evaluate the pod's node selector against the namespace node selector for conflicts. Conflicts
result in rejection.
4. Evaluate the pod's node selector against the namespace-specific allowed selector defined in the
   plugin configuration file. Conflicts result in rejection.
PodNodeSelector allows forcing pods to run on specifically labeled nodes. Also see the PodTolerationRestriction
admission plugin, which allows preventing pods from running on specifically tainted nodes.
### PodSecurity {#podsecurity}
**Type**: Validating.
The PodSecurity admission controller checks new Pods before they are
admitted and determines whether they should be admitted, based on the requested security context and the restrictions of the permitted
[Pod Security Standards](/docs/concepts/security/pod-security-standards/)
for the namespace that the Pod would be in.
See the [Pod Security Admission](/docs/concepts/security/pod-security-admission/)
documentation for more information.
PodSecurity replaced an older admission controller named PodSecurityPolicy.
### PodTolerationRestriction {#podtolerationrestriction}
**Type**: Mutating and Validating.
The PodTolerationRestriction admission controller verifies any conflict between tolerations of a
pod and the tolerations of its namespace.
It rejects the pod request if there is a conflict.
It then merges the tolerations annotated on the namespace into the tolerations of the pod.
The resulting tolerations are checked against a list of allowed tolerations annotated to the namespace.
If the check succeeds, the pod request is admitted; otherwise it is rejected.
If the namespace of the pod does not have any associated default tolerations or allowed
tolerations annotated, the cluster-level default tolerations or cluster-level list of allowed tolerations are used
instead if they are specified.
Tolerations to a namespace are assigned via the `scheduler.alpha.kubernetes.io/defaultTolerations` annotation key.
The list of allowed tolerations can be added via the `scheduler.alpha.kubernetes.io/tolerationsWhitelist` annotation key.
Example for namespace annotations:
```yaml
apiVersion: v1
kind: Namespace
metadata:
name: apps-that-need-nodes-exclusively
annotations:
scheduler.alpha.kubernetes.io/defaultTolerations: '[{"operator": "Exists", "effect": "NoSchedule", "key": "dedicated-node"}]'
scheduler.alpha.kubernetes.io/tolerationsWhitelist: '[{"operator": "Exists", "effect": "NoSchedule", "key": "dedicated-node"}]'
```
This admission controller is disabled by default.
### Priority {#priority}
**Type**: Mutating and Validating.
The priority admission controller uses the `priorityClassName` field and populates the integer
value of the priority.
If the priority class is not found, the Pod is rejected.
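A minimal sketch: given a `PriorityClass`, the admission controller resolves `priorityClassName` on a Pod and fills in the integer priority (names and value are illustrative):
```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
---
apiVersion: v1
kind: Pod
metadata:
  name: important-app
spec:
  priorityClassName: high-priority   # resolved by the admission controller into .spec.priority: 1000000
  containers:
  - name: app
    image: registry.example.com/app:latest
```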
### ResourceQuota {#resourcequota}
**Type**: Validating.
This admission controller will observe the incoming request and ensure that it does not violate
any of the constraints enumerated in the `ResourceQuota` object in a `Namespace`. If you are
using `ResourceQuota` objects in your Kubernetes deployment, you MUST use this admission
controller to enforce quota constraints.
See the [ResourceQuota API reference](/docs/reference/kubernetes-api/policy-resources/resource-quota-v1/)
and the [example of Resource Quota](/docs/concepts/policy/resource-quotas/) for more details.
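For example, with a `ResourceQuota` like the following in a namespace, requests that would exceed any of the hard limits are rejected (values are illustrative):
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: team-a
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 8Gi
```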
### RuntimeClass {#runtimeclass}
**Type**: Mutating and Validating.
If you define a RuntimeClass with [Pod overhead](/docs/concepts/scheduling-eviction/pod-overhead/)
configured, this admission controller checks incoming Pods.
When enabled, this admission controller rejects any Pod create requests
that have the overhead already set.
For Pods that have a RuntimeClass configured and selected in their `.spec`,
this admission controller sets `.spec.overhead` in the Pod based on the value
defined in the corresponding RuntimeClass.
See also [Pod Overhead](/docs/concepts/scheduling-eviction/pod-overhead/)
for more information.
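As a sketch, a RuntimeClass with overhead configured looks like the following; Pods selecting it get `.spec.overhead` populated from `overhead.podFixed` (the handler name and values are illustrative):
```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: sandboxed
handler: example-handler    # illustrative CRI handler name
overhead:
  podFixed:
    cpu: 250m
    memory: 120Mi
```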
### ServiceAccount {#serviceaccount}
**Type**: Mutating and Validating.
This admission controller implements automation for
[serviceAccounts](/docs/tasks/configure-pod-container/configure-service-account/).
The Kubernetes project strongly recommends enabling this admission controller.
You should enable this admission controller if you intend to make any use of Kubernetes
`ServiceAccount` objects.
Regarding the annotation `kubernetes.io/enforce-mountable-secrets`: While the annotation's name suggests it only concerns the mounting of Secrets,
its enforcement also extends to other ways Secrets are used in the context of a Pod.
Therefore, it is crucial to ensure that all the referenced secrets are correctly specified in the ServiceAccount.
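For illustration, a ServiceAccount using that annotation restricts Pods running as it to the Secrets listed in its `secrets` field (names are illustrative):
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
  annotations:
    kubernetes.io/enforce-mountable-secrets: "true"
secrets:
- name: build-robot-secret   # only Secrets listed here may be used by Pods running as this ServiceAccount
```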
### StorageObjectInUseProtection
**Type**: Mutating.
The `StorageObjectInUseProtection` plugin adds the `kubernetes.io/pvc-protection` or `kubernetes.io/pv-protection`
finalizers to newly created Persistent Volume Claims (PVCs) or Persistent Volumes (PV).
If a user deletes a PVC or PV, the object is not removed until the finalizer is removed
from it by the PVC or PV protection controller.
Refer to the
[Storage Object in Use Protection](/docs/concepts/storage/persistent-volumes/#storage-object-in-use-protection)
for more detailed information.
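As a sketch, a protected claim carries the finalizer in its metadata until it is no longer in use (the name and size are illustrative):
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
  finalizers:
  - kubernetes.io/pvc-protection   # added by this plugin; removed once no Pod uses the claim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
```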
### TaintNodesByCondition {#taintnodesbycondition}
**Type**: Mutating.
This admission controller taints newly created
Nodes as `NotReady` and `NoSchedule`. That tainting avoids a race condition that could cause Pods
to be scheduled on new Nodes before their taints were updated to accurately reflect their reported
conditions.
### ValidatingAdmissionPolicy {#validatingadmissionpolicy}
**Type**: Validating.
[This admission controller](/docs/reference/access-authn-authz/validating-admission-policy/) implements CEL validation for incoming matched requests.
It is enabled when both the `ValidatingAdmissionPolicy` feature gate and the `admissionregistration.k8s.io/v1alpha1` API group/version are enabled.
If any ValidatingAdmissionPolicy fails, the request fails.
### ValidatingAdmissionWebhook {#validatingadmissionwebhook}
**Type**: Validating.
This admission controller calls any validating webhooks which match the request. Matching
webhooks are called in parallel; if any of them rejects the request, the request
fails. This admission controller only runs in the validation phase; the webhooks it calls may not
mutate the object, as opposed to the webhooks called by the `MutatingAdmissionWebhook` admission controller.
If a webhook called by this has side effects (for example, decrementing quota) it
*must* have a reconciliation system, as it is not guaranteed that subsequent
webhooks or other validating admission controllers will permit the request to finish.
If you disable the ValidatingAdmissionWebhook, you must also disable the
`ValidatingWebhookConfiguration` object in the `admissionregistration.k8s.io/v1`
group/version via the `--runtime-config` flag.
## Is there a recommended set of admission controllers to use?
Yes. The recommended admission controllers are enabled by default
(shown [here](/docs/reference/command-line-tools-reference/kube-apiserver/#options)),
so you do not need to explicitly specify them.
You can enable additional admission controllers beyond the default set using the
`--enable-admission-plugins` flag (**order doesn't matter**).
workload isolation purposes and kubelets will not be allowed to modify labels with that prefix Allows kubelets to add remove update these labels and label prefixes kubernetes io hostname kubernetes io arch kubernetes io os beta kubernetes io instance type node kubernetes io instance type failure domain beta kubernetes io region deprecated failure domain beta kubernetes io zone deprecated topology kubernetes io region topology kubernetes io zone kubelet kubernetes io prefixed labels node kubernetes io prefixed labels Use of any other labels under the kubernetes io or k8s io prefixes by kubelets is reserved and may be disallowed or allowed by the NodeRestriction admission plugin in the future Future versions may add additional restrictions to ensure kubelets have the minimal set of permissions required to operate correctly OwnerReferencesPermissionEnforcement ownerreferencespermissionenforcement Type Validating This admission controller protects the access to the metadata ownerReferences of an object so that only users with delete permission to the object can change it This admission controller also protects the access to metadata ownerReferences x blockOwnerDeletion of an object so that only users with update permission to the finalizers subresource of the referenced owner can change it PersistentVolumeClaimResize persistentvolumeclaimresize Type Validating This admission controller implements additional validations for checking incoming PersistentVolumeClaim resize requests Enabling the PersistentVolumeClaimResize admission controller is recommended This admission controller prevents resizing of all claims by default unless a claim s StorageClass explicitly enables resizing by setting allowVolumeExpansion to true For example all PersistentVolumeClaim s created from the following StorageClass support volume expansion yaml apiVersion storage k8s io v1 kind StorageClass metadata name gluster vol default provisioner kubernetes io glusterfs parameters resturl http 192 168 10 100 8080 restuser secretNamespace secretName allowVolumeExpansion true For more information about persistent volume claims see PersistentVolumeClaims docs concepts storage persistent volumes persistentvolumeclaims PodNodeSelector podnodeselector Type Validating This admission controller defaults and limits what node selectors may be used within a namespace by reading a namespace annotation and a global configuration This admission controller is disabled by default Configuration file format PodNodeSelector uses a configuration file to set options for the behavior of the backend Note that the configuration file format will move to a versioned file in a future release This file may be json or yaml and has the following format yaml podNodeSelectorPluginConfig clusterDefaultNodeSelector name of node selector namespace1 name of node selector namespace2 name of node selector Reference the PodNodeSelector configuration file from the file provided to the API server s command line flag admission control config file yaml apiVersion apiserver config k8s io v1 kind AdmissionConfiguration plugins name PodNodeSelector path podnodeselector yaml Configuration Annotation Format PodNodeSelector uses the annotation key scheduler alpha kubernetes io node selector to assign node selectors to namespaces yaml apiVersion v1 kind Namespace metadata annotations scheduler alpha kubernetes io node selector name of node selector name namespace3 Internal Behavior This admission controller has the following behavior 1 If the Namespace has an annotation 
with a key scheduler alpha kubernetes io node selector use its value as the node selector 2 If the namespace lacks such an annotation use the clusterDefaultNodeSelector defined in the PodNodeSelector plugin configuration file as the node selector 3 Evaluate the pod s node selector against the namespace node selector for conflicts Conflicts result in rejection 4 Evaluate the pod s node selector against the namespace specific allowed selector defined the plugin configuration file Conflicts result in rejection PodNodeSelector allows forcing pods to run on specifically labeled nodes Also see the PodTolerationRestriction admission plugin which allows preventing pods from running on specifically tainted nodes PodSecurity podsecurity Type Validating The PodSecurity admission controller checks new Pods before they are admitted determines if it should be admitted based on the requested security context and the restrictions on permitted Pod Security Standards docs concepts security pod security standards for the namespace that the Pod would be in See the Pod Security Admission docs concepts security pod security admission documentation for more information PodSecurity replaced an older admission controller named PodSecurityPolicy PodTolerationRestriction podtolerationrestriction Type Mutating and Validating The PodTolerationRestriction admission controller verifies any conflict between tolerations of a pod and the tolerations of its namespace It rejects the pod request if there is a conflict It then merges the tolerations annotated on the namespace into the tolerations of the pod The resulting tolerations are checked against a list of allowed tolerations annotated to the namespace If the check succeeds the pod request is admitted otherwise it is rejected If the namespace of the pod does not have any associated default tolerations or allowed tolerations annotated the cluster level default tolerations or cluster level list of allowed tolerations are used instead if they are specified Tolerations to a namespace are assigned via the scheduler alpha kubernetes io defaultTolerations annotation key The list of allowed tolerations can be added via the scheduler alpha kubernetes io tolerationsWhitelist annotation key Example for namespace annotations yaml apiVersion v1 kind Namespace metadata name apps that need nodes exclusively annotations scheduler alpha kubernetes io defaultTolerations operator Exists effect NoSchedule key dedicated node scheduler alpha kubernetes io tolerationsWhitelist operator Exists effect NoSchedule key dedicated node This admission controller is disabled by default Priority priority Type Mutating and Validating The priority admission controller uses the priorityClassName field and populates the integer value of the priority If the priority class is not found the Pod is rejected ResourceQuota resourcequota Type Validating This admission controller will observe the incoming request and ensure that it does not violate any of the constraints enumerated in the ResourceQuota object in a Namespace If you are using ResourceQuota objects in your Kubernetes deployment you MUST use this admission controller to enforce quota constraints See the ResourceQuota API reference docs reference kubernetes api policy resources resource quota v1 and the example of Resource Quota docs concepts policy resource quotas for more details RuntimeClass runtimeclass Type Mutating and Validating If you define a RuntimeClass with Pod overhead docs concepts scheduling eviction pod overhead configured this admission 
controller checks incoming Pods When enabled this admission controller rejects any Pod create requests that have the overhead already set For Pods that have a RuntimeClass configured and selected in their spec this admission controller sets spec overhead in the Pod based on the value defined in the corresponding RuntimeClass See also Pod Overhead docs concepts scheduling eviction pod overhead for more information ServiceAccount serviceaccount Type Mutating and Validating This admission controller implements automation for serviceAccounts docs tasks configure pod container configure service account The Kubernetes project strongly recommends enabling this admission controller You should enable this admission controller if you intend to make any use of Kubernetes ServiceAccount objects Regarding the annotation kubernetes io enforce mountable secrets While the annotation s name suggests it only concerns the mounting of Secrets its enforcement also extends to other ways Secrets are used in the context of a Pod Therefore it is crucial to ensure that all the referenced secrets are correctly specified in the ServiceAccount StorageObjectInUseProtection Type Mutating The StorageObjectInUseProtection plugin adds the kubernetes io pvc protection or kubernetes io pv protection finalizers to newly created Persistent Volume Claims PVCs or Persistent Volumes PV In case a user deletes a PVC or PV the PVC or PV is not removed until the finalizer is removed from the PVC or PV by PVC or PV Protection Controller Refer to the Storage Object in Use Protection docs concepts storage persistent volumes storage object in use protection for more detailed information TaintNodesByCondition taintnodesbycondition Type Mutating This admission controller newly created Nodes as NotReady and NoSchedule That tainting avoids a race condition that could cause Pods to be scheduled on new Nodes before their taints were updated to accurately reflect their reported conditions ValidatingAdmissionPolicy validatingadmissionpolicy Type Validating This admission controller docs reference access authn authz validating admission policy implements the CEL validation for incoming matched requests It is enabled when both feature gate validatingadmissionpolicy and admissionregistration k8s io v1alpha1 group version are enabled If any of the ValidatingAdmissionPolicy fails the request fails ValidatingAdmissionWebhook validatingadmissionwebhook Type Validating This admission controller calls any validating webhooks which match the request Matching webhooks are called in parallel if any of them rejects the request the request fails This admission controller only runs in the validation phase the webhooks it calls may not mutate the object as opposed to the webhooks called by the MutatingAdmissionWebhook admission controller If a webhook called by this has side effects for example decrementing quota it must have a reconciliation system as it is not guaranteed that subsequent webhooks or other validating admission controllers will permit the request to finish If you disable the ValidatingAdmissionWebhook you must also disable the ValidatingWebhookConfiguration object in the admissionregistration k8s io v1 group version via the runtime config flag Is there a recommended set of admission controllers to use Yes The recommended admission controllers are enabled by default shown here docs reference command line tools reference kube apiserver options so you do not need to explicitly specify them You can enable additional admission controllers beyond the 
default set using the enable admission plugins flag order doesn t matter |
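For illustration only (the plugin names here are an arbitrary choice, not a recommendation),
enabling extra plugins on top of the defaults could look like this:

```shell
kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger ...
```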
---
reviewers:
- erictune
- deads2k
- liggitt
title: Using RBAC Authorization
content_type: concept
aliases: [/rbac/]
weight: 33
---
<!-- overview -->
Role-based access control (RBAC) is a method of regulating access to computer or
network resources based on the roles of individual users within your organization.
<!-- body -->
RBAC authorization uses the `rbac.authorization.k8s.io` API group to drive authorization
decisions, allowing you to dynamically configure policies through the Kubernetes API.
To enable RBAC, start the API server
with the `--authorization-mode` flag set to a comma-separated list that includes `RBAC`;
for example:
```shell
kube-apiserver --authorization-mode=Example,RBAC --other-options --more-options
```
## API objects {#api-overview}
The RBAC API declares four kinds of Kubernetes object: _Role_, _ClusterRole_,
_RoleBinding_ and _ClusterRoleBinding_. You can describe or amend the RBAC objects
using tools such as `kubectl`, just like any other Kubernetes object.
These objects, by design, impose access restrictions. If you are making changes
to a cluster as you learn, see
[privilege escalation prevention and bootstrapping](#privilege-escalation-prevention-and-bootstrapping)
to understand how those restrictions can prevent you from making some changes.
### Role and ClusterRole
An RBAC _Role_ or _ClusterRole_ contains rules that represent a set of permissions.
Permissions are purely additive (there are no "deny" rules).
A Role always sets permissions within a particular namespace;
when you create a Role, you have to specify the namespace it belongs in.
ClusterRole, by contrast, is a non-namespaced resource. The resources have different names (Role
and ClusterRole) because a Kubernetes object always has to be either namespaced or not namespaced;
it can't be both.
ClusterRoles have several uses. You can use a ClusterRole to:
1. define permissions on namespaced resources and be granted access within individual namespace(s)
1. define permissions on namespaced resources and be granted access across all namespaces
1. define permissions on cluster-scoped resources
If you want to define a role within a namespace, use a Role; if you want to define
a role cluster-wide, use a ClusterRole.
#### Role example
Here's an example Role in the "default" namespace that can be used to grant read access to
Pods:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: default
name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["pods"]
verbs: ["get", "watch", "list"]
```
#### ClusterRole example
A ClusterRole can be used to grant the same permissions as a Role.
Because ClusterRoles are cluster-scoped, you can also use them to grant access to:
* cluster-scoped resources (like nodes)
* non-resource endpoints (like `/healthz`)
* namespaced resources (like Pods), across all namespaces
For example: you can use a ClusterRole to allow a particular user to run
`kubectl get pods --all-namespaces`.
Here is an example of a ClusterRole that can be used to grant read access to
Secrets in any particular namespace,
or across all namespaces (depending on how it is [bound](#rolebinding-and-clusterrolebinding)):
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
# "namespace" omitted since ClusterRoles are not namespaced
name: secret-reader
rules:
- apiGroups: [""]
#
# at the HTTP level, the name of the resource for accessing Secret
# objects is "secrets"
resources: ["secrets"]
verbs: ["get", "watch", "list"]
```
The name of a Role or a ClusterRole object must be a valid
[path segment name](/docs/concepts/overview/working-with-objects/names#path-segment-names).
### RoleBinding and ClusterRoleBinding
A role binding grants the permissions defined in a role to a user or set of users.
It holds a list of *subjects* (users, groups, or service accounts), and a reference to the
role being granted.
A RoleBinding grants permissions within a specific namespace whereas a ClusterRoleBinding
grants that access cluster-wide.
A RoleBinding may reference any Role in the same namespace. Alternatively, a RoleBinding
can reference a ClusterRole and bind that ClusterRole to the namespace of the RoleBinding.
If you want to bind a ClusterRole to all the namespaces in your cluster, you use a
ClusterRoleBinding.
The name of a RoleBinding or ClusterRoleBinding object must be a valid
[path segment name](/docs/concepts/overview/working-with-objects/names#path-segment-names).
#### RoleBinding examples {#rolebinding-example}
Here is an example of a RoleBinding that grants the "pod-reader" Role to the user "jane"
within the "default" namespace.
This allows "jane" to read pods in the "default" namespace.
```yaml
apiVersion: rbac.authorization.k8s.io/v1
# This role binding allows "jane" to read pods in the "default" namespace.
# You need to already have a Role named "pod-reader" in that namespace.
kind: RoleBinding
metadata:
name: read-pods
namespace: default
subjects:
# You can specify more than one "subject"
- kind: User
name: jane # "name" is case sensitive
apiGroup: rbac.authorization.k8s.io
roleRef:
# "roleRef" specifies the binding to a Role / ClusterRole
kind: Role #this must be Role or ClusterRole
name: pod-reader # this must match the name of the Role or ClusterRole you wish to bind to
apiGroup: rbac.authorization.k8s.io
```
A RoleBinding can also reference a ClusterRole to grant the permissions defined in that
ClusterRole to resources inside the RoleBinding's namespace. This kind of reference
lets you define a set of common roles across your cluster, then reuse them within
multiple namespaces.
For instance, even though the following RoleBinding refers to a ClusterRole,
"dave" (the subject, case sensitive) will only be able to read Secrets in the "development"
namespace, because the RoleBinding's namespace (in its metadata) is "development".
```yaml
apiVersion: rbac.authorization.k8s.io/v1
# This role binding allows "dave" to read secrets in the "development" namespace.
# You need to already have a ClusterRole named "secret-reader".
kind: RoleBinding
metadata:
name: read-secrets
#
# The namespace of the RoleBinding determines where the permissions are granted.
# This only grants permissions within the "development" namespace.
namespace: development
subjects:
- kind: User
name: dave # Name is case sensitive
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: secret-reader
apiGroup: rbac.authorization.k8s.io
```
#### ClusterRoleBinding example
To grant permissions across a whole cluster, you can use a ClusterRoleBinding.
The following ClusterRoleBinding allows any user in the group "manager" to read
secrets in any namespace.
```yaml
apiVersion: rbac.authorization.k8s.io/v1
# This cluster role binding allows anyone in the "manager" group to read secrets in any namespace.
kind: ClusterRoleBinding
metadata:
name: read-secrets-global
subjects:
- kind: Group
name: manager # Name is case sensitive
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: secret-reader
apiGroup: rbac.authorization.k8s.io
```
After you create a binding, you cannot change the Role or ClusterRole that it refers to.
If you try to change a binding's `roleRef`, you get a validation error. If you do want
to change the `roleRef` for a binding, you need to remove the binding object and create
a replacement.
There are two reasons for this restriction:
1. Making `roleRef` immutable allows granting someone `update` permission on an existing binding
object, so that they can manage the list of subjects, without being able to change
the role that is granted to those subjects.
1. A binding to a different role is a fundamentally different binding.
Requiring a binding to be deleted/recreated in order to change the `roleRef`
ensures the full list of subjects in the binding is intended to be granted
the new role (as opposed to enabling or accidentally modifying only the roleRef
without verifying all of the existing subjects should be given the new role's
permissions).
The `kubectl auth reconcile` command-line utility creates or updates a manifest file containing RBAC objects,
and handles deleting and recreating binding objects if required to change the role they refer to.
See [command usage and examples](#kubectl-auth-reconcile) for more information.
### Referring to resources
In the Kubernetes API, most resources are represented and accessed using a string representation of
their object name, such as `pods` for a Pod. RBAC refers to resources using exactly the same
name that appears in the URL for the relevant API endpoint.
Some Kubernetes APIs involve a
_subresource_, such as the logs for a Pod. A request for a Pod's logs looks like:
```http
GET /api/v1/namespaces/{namespace}/pods/{name}/log
```
In this case, `pods` is the namespaced resource for Pod resources, and `log` is a
subresource of `pods`. To represent this in an RBAC role, use a slash (`/`) to
delimit the resource and subresource. To allow a subject to read `pods` and
also access the `log` subresource for each of those Pods, you write:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: default
name: pod-and-pod-logs-reader
rules:
- apiGroups: [""]
resources: ["pods", "pods/log"]
verbs: ["get", "list"]
```
You can also refer to resources by name for certain requests through the `resourceNames` list.
When specified, requests can be restricted to individual instances of a resource.
Here is an example that restricts its subject to only `get` or `update` a ConfigMap
named `my-configmap`:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: default
name: configmap-updater
rules:
- apiGroups: [""]
#
# at the HTTP level, the name of the resource for accessing ConfigMap
# objects is "configmaps"
resources: ["configmaps"]
resourceNames: ["my-configmap"]
verbs: ["update", "get"]
```
You cannot restrict `create` or `deletecollection` requests by their resource name.
For `create`, this limitation is because the name of the new object may not be known at authorization time.
If you restrict `list` or `watch` by resourceName, clients must include a `metadata.name` field selector in their `list` or `watch` request that matches the specified resourceName in order to be authorized.
For example, `kubectl get configmaps --field-selector=metadata.name=my-configmap`
Rather than referring to individual `resources`, `apiGroups`, and `verbs`,
you can use the wildcard `*` symbol to refer to all such objects.
For `nonResourceURLs`, you can use the wildcard `*` as a suffix glob match.
For `resourceNames`, an empty set means that everything is allowed.
Here is an example that allows access to perform any current and future action on
all current and future resources in the `example.com` API group.
This is similar to the built-in `cluster-admin` role.
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: default
name: example.com-superuser # DO NOT USE THIS ROLE, IT IS JUST AN EXAMPLE
rules:
- apiGroups: ["example.com"]
resources: ["*"]
verbs: ["*"]
```
Using wildcards in resource and verb entries could result in overly permissive access being granted
to sensitive resources.
For instance, if a new resource type is added, or a new subresource is added,
or a new custom verb is checked, the wildcard entry automatically grants access, which may be undesirable.
The [principle of least privilege](/docs/concepts/security/rbac-good-practices/#least-privilege)
should be employed, using specific resources and verbs to ensure only the permissions required for the
workload to function correctly are applied.
### Aggregated ClusterRoles
You can _aggregate_ several ClusterRoles into one combined ClusterRole.
A controller, running as part of the cluster control plane, watches for ClusterRole
objects with an `aggregationRule` set. The `aggregationRule` defines a label selector
that the controller
uses to match other ClusterRole objects that should be combined into the `rules`
field of this one.
The control plane overwrites any values that you manually specify in the `rules` field of an
aggregate ClusterRole. If you want to change or add rules, do so in the `ClusterRole` objects
that are selected by the `aggregationRule`.
Here is an example aggregated ClusterRole:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: monitoring
aggregationRule:
clusterRoleSelectors:
- matchLabels:
rbac.example.com/aggregate-to-monitoring: "true"
rules: [] # The control plane automatically fills in the rules
```
If you create a new ClusterRole that matches the label selector of an existing aggregated ClusterRole,
that change triggers adding the new rules into the aggregated ClusterRole.
Here is an example that adds rules to the "monitoring" ClusterRole, by creating another
ClusterRole labeled `rbac.example.com/aggregate-to-monitoring: true`.
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: monitoring-endpoints
labels:
rbac.example.com/aggregate-to-monitoring: "true"
# When you create the "monitoring-endpoints" ClusterRole,
# the rules below will be added to the "monitoring" ClusterRole.
rules:
- apiGroups: [""]
resources: ["services", "endpointslices", "pods"]
verbs: ["get", "list", "watch"]
```
The [default user-facing roles](#default-roles-and-role-bindings) use ClusterRole aggregation. This lets you,
as a cluster administrator, include rules for custom resources, such as those served by
CustomResourceDefinitions or aggregated API servers, to extend the default roles.
For example: the following ClusterRoles let the "admin" and "edit" default roles manage the custom resource
named CronTab, whereas the "view" role can perform only read actions on CronTab resources.
You can assume that CronTab objects are named `"crontabs"` in URLs as seen by the API server.
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: aggregate-cron-tabs-edit
labels:
# Add these permissions to the "admin" and "edit" default roles.
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rules:
- apiGroups: ["stable.example.com"]
resources: ["crontabs"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: aggregate-cron-tabs-view
labels:
# Add these permissions to the "view" default role.
rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
- apiGroups: ["stable.example.com"]
resources: ["crontabs"]
verbs: ["get", "list", "watch"]
```
#### Role examples
The following examples are excerpts from Role or ClusterRole objects, showing only
the `rules` section.
Allow reading `"pods"` resources in the core
:
```yaml
rules:
- apiGroups: [""]
#
# at the HTTP level, the name of the resource for accessing Pod
# objects is "pods"
resources: ["pods"]
verbs: ["get", "list", "watch"]
```
Allow reading/writing Deployments (at the HTTP level: objects with `"deployments"`
in the resource part of their URL) in the `"apps"` API groups:
```yaml
rules:
- apiGroups: ["apps"]
#
# at the HTTP level, the name of the resource for accessing Deployment
# objects is "deployments"
resources: ["deployments"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```
Allow reading Pods in the core API group, as well as reading or writing Job
resources in the `"batch"` API group:
```yaml
rules:
- apiGroups: [""]
#
# at the HTTP level, the name of the resource for accessing Pod
# objects is "pods"
resources: ["pods"]
verbs: ["get", "list", "watch"]
- apiGroups: ["batch"]
#
# at the HTTP level, the name of the resource for accessing Job
# objects is "jobs"
resources: ["jobs"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```
Allow reading a ConfigMap named "my-config" (must be bound with a
RoleBinding to limit to a single ConfigMap in a single namespace):
```yaml
rules:
- apiGroups: [""]
#
# at the HTTP level, the name of the resource for accessing ConfigMap
# objects is "configmaps"
resources: ["configmaps"]
resourceNames: ["my-config"]
verbs: ["get"]
```
Allow reading the resource `"nodes"` in the core group (because a
Node is cluster-scoped, this must be in a ClusterRole bound with a
ClusterRoleBinding to be effective):
```yaml
rules:
- apiGroups: [""]
#
# at the HTTP level, the name of the resource for accessing Node
# objects is "nodes"
resources: ["nodes"]
verbs: ["get", "list", "watch"]
```
Allow GET and POST requests to the non-resource endpoint `/healthz` and
all subpaths (must be in a ClusterRole bound with a ClusterRoleBinding
to be effective):
```yaml
rules:
- nonResourceURLs: ["/healthz", "/healthz/*"] # '*' in a nonResourceURL is a suffix glob match
verbs: ["get", "post"]
```
### Referring to subjects
A RoleBinding or ClusterRoleBinding binds a role to subjects.
Subjects can be groups, users or ServiceAccounts.
Kubernetes represents usernames as strings.
These can be: plain names, such as "alice"; email-style names, like "[email protected]";
or numeric user IDs represented as a string. It is up to you as a cluster administrator
to configure the [authentication modules](/docs/reference/access-authn-authz/authentication/)
so that authentication produces usernames in the format you want.
The prefix `system:` is reserved for Kubernetes system use, so you should ensure
that you don't have users or groups with names that start with `system:` by
accident.
Other than this special prefix, the RBAC authorization system does not require any format
for usernames.
In Kubernetes, Authenticator modules provide group information.
Groups, like users, are represented as strings, and that string has no format requirements,
other than that the prefix `system:` is reserved.
[ServiceAccounts](/docs/tasks/configure-pod-container/configure-service-account/) have names prefixed
with `system:serviceaccount:`, and belong to groups that have names prefixed with `system:serviceaccounts:`.
- `system:serviceaccount:` (singular) is the prefix for service account usernames.
- `system:serviceaccounts:` (plural) is the prefix for service account groups.
#### RoleBinding examples {#role-binding-examples}
The following examples are `RoleBinding` excerpts that only
show the `subjects` section.
For a user named `[email protected]`:
```yaml
subjects:
- kind: User
name: "[email protected]"
apiGroup: rbac.authorization.k8s.io
```
For a group named `frontend-admins`:
```yaml
subjects:
- kind: Group
name: "frontend-admins"
apiGroup: rbac.authorization.k8s.io
```
For the default service account in the "kube-system" namespace:
```yaml
subjects:
- kind: ServiceAccount
name: default
namespace: kube-system
```
For all service accounts in the "qa" namespace:
```yaml
subjects:
- kind: Group
name: system:serviceaccounts:qa
apiGroup: rbac.authorization.k8s.io
```
For all service accounts in any namespace:
```yaml
subjects:
- kind: Group
name: system:serviceaccounts
apiGroup: rbac.authorization.k8s.io
```
For all authenticated users:
```yaml
subjects:
- kind: Group
name: system:authenticated
apiGroup: rbac.authorization.k8s.io
```
For all unauthenticated users:
```yaml
subjects:
- kind: Group
name: system:unauthenticated
apiGroup: rbac.authorization.k8s.io
```
For all users:
```yaml
subjects:
- kind: Group
name: system:authenticated
apiGroup: rbac.authorization.k8s.io
- kind: Group
name: system:unauthenticated
apiGroup: rbac.authorization.k8s.io
```
## Default roles and role bindings
API servers create a set of default ClusterRole and ClusterRoleBinding objects.
Many of these are `system:` prefixed, which indicates that the resource is directly
managed by the cluster control plane.
All of the default ClusterRoles and ClusterRoleBindings are labeled with `kubernetes.io/bootstrapping=rbac-defaults`.
Take care when modifying ClusterRoles and ClusterRoleBindings with names
that have a `system:` prefix.
Modifications to these resources can result in non-functional clusters.
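For example, assuming you are permitted to read ClusterRoles, you can list the bootstrapped
defaults using the `kubernetes.io/bootstrapping=rbac-defaults` label mentioned above:

```shell
kubectl get clusterroles -l kubernetes.io/bootstrapping=rbac-defaults
```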
### Auto-reconciliation
At each start-up, the API server updates default cluster roles with any missing permissions,
and updates default cluster role bindings with any missing subjects.
This allows the cluster to repair accidental modifications, and helps to keep roles and role bindings
up-to-date as permissions and subjects change in new Kubernetes releases.
To opt out of this reconciliation, set the `rbac.authorization.kubernetes.io/autoupdate`
annotation on a default cluster role or default cluster RoleBinding to `false`.
Be aware that missing default permissions and subjects can result in non-functional clusters.
Auto-reconciliation is enabled by default if the RBAC authorizer is active.
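As a sketch, opting a single default cluster role out of reconciliation could look like this
(only do so if you accept the risks described above; `system:discovery` is used here purely
as an example target):

```shell
kubectl annotate clusterrole system:discovery \
  rbac.authorization.kubernetes.io/autoupdate=false --overwrite
```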
### API discovery roles {#discovery-roles}
Default cluster role bindings authorize unauthenticated and authenticated users to read API information
that is deemed safe to be publicly accessible (including CustomResourceDefinitions).
To disable anonymous unauthenticated access, add `--anonymous-auth=false` flag to
the API server configuration.
To view the configuration of these roles via `kubectl` run:
```shell
kubectl get clusterroles system:discovery -o yaml
```
If you edit that ClusterRole, your changes will be overwritten on API server restart
via [auto-reconciliation](#auto-reconciliation). To avoid that overwriting,
either do not manually edit the role, or disable auto-reconciliation.
<table>
<caption>Kubernetes RBAC API discovery roles</caption>
<colgroup><col style="width: 25%;" /><col style="width: 25%;" /><col /></colgroup>
<thead>
<tr>
<th>Default ClusterRole</th>
<th>Default ClusterRoleBinding</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><b>system:basic-user</b></td>
<td><b>system:authenticated</b> group</td>
<td>Allows a user read-only access to basic information about themselves. Prior to v1.14, this role was also bound to <tt>system:unauthenticated</tt> by default.</td>
</tr>
<tr>
<td><b>system:discovery</b></td>
<td><b>system:authenticated</b> group</td>
<td>Allows read-only access to API discovery endpoints needed to discover and negotiate an API level. Prior to v1.14, this role was also bound to <tt>system:unauthenticated</tt> by default.</td>
</tr>
<tr>
<td><b>system:public-info-viewer</b></td>
<td><b>system:authenticated</b> and <b>system:unauthenticated</b> groups</td>
<td>Allows read-only access to non-sensitive information about the cluster. Introduced in Kubernetes v1.14.</td>
</tr>
</tbody>
</table>
### User-facing roles
Some of the default ClusterRoles are not `system:` prefixed. These are intended to be user-facing roles.
They include super-user roles (`cluster-admin`), roles intended to be granted cluster-wide
using ClusterRoleBindings, and roles intended to be granted within particular
namespaces using RoleBindings (`admin`, `edit`, `view`).
User-facing ClusterRoles use [ClusterRole aggregation](#aggregated-clusterroles) to allow admins to include
rules for custom resources on these ClusterRoles. To add rules to the `admin`, `edit`, or `view` roles, create
a ClusterRole with one or more of the following labels:
```yaml
metadata:
labels:
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-view: "true"
```
<table>
<colgroup><col style="width: 25%;" /><col style="width: 25%;" /><col /></colgroup>
<thead>
<tr>
<th>Default ClusterRole</th>
<th>Default ClusterRoleBinding</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><b>cluster-admin</b></td>
<td><b>system:masters</b> group</td>
<td>Allows super-user access to perform any action on any resource.
When used in a <b>ClusterRoleBinding</b>, it gives full control over every resource in the cluster and in all namespaces.
When used in a <b>RoleBinding</b>, it gives full control over every resource in the role binding's namespace, including the namespace itself.</td>
</tr>
<tr>
<td><b>admin</b></td>
<td>None</td>
<td>Allows admin access, intended to be granted within a namespace using a <b>RoleBinding</b>.
If used in a <b>RoleBinding</b>, allows read/write access to most resources in a namespace,
including the ability to create roles and role bindings within the namespace.
This role does not allow write access to resource quota or to the namespace itself.
This role also does not allow write access to EndpointSlices (or Endpoints) in clusters created
using Kubernetes v1.22+. More information is available in the
["Write Access for EndpointSlices and Endpoints" section](#write-access-for-endpoints).</td>
</tr>
<tr>
<td><b>edit</b></td>
<td>None</td>
<td>Allows read/write access to most objects in a namespace.
This role does not allow viewing or modifying roles or role bindings.
However, this role allows accessing Secrets and running Pods as any ServiceAccount in
the namespace, so it can be used to gain the API access levels of any ServiceAccount in
the namespace. This role also does not allow write access to EndpointSlices (or Endpoints) in
clusters created using Kubernetes v1.22+. More information is available in the
["Write Access for EndpointSlices and Endpoints" section](#write-access-for-endpoints).</td>
</tr>
<tr>
<td><b>view</b></td>
<td>None</td>
<td>Allows read-only access to see most objects in a namespace.
It does not allow viewing roles or role bindings.
This role does not allow viewing Secrets, since reading
the contents of Secrets enables access to ServiceAccount credentials
in the namespace, which would allow API access as any ServiceAccount
in the namespace (a form of privilege escalation).</td>
</tr>
</tbody>
</table>
### Core component roles
<table>
<colgroup><col style="width: 25%;" /><col style="width: 25%;" /><col /></colgroup>
<thead>
<tr>
<th>Default ClusterRole</th>
<th>Default ClusterRoleBinding</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><b>system:kube-scheduler</b></td>
<td><b>system:kube-scheduler</b> user</td>
<td>Allows access to the resources required by the scheduler component.</td>
</tr>
<tr>
<td><b>system:volume-scheduler</b></td>
<td><b>system:kube-scheduler</b> user</td>
<td>Allows access to the volume resources required by the kube-scheduler component.</td>
</tr>
<tr>
<td><b>system:kube-controller-manager</b></td>
<td><b>system:kube-controller-manager</b> user</td>
<td>Allows access to the resources required by the controller manager component.
The permissions required by individual controllers are detailed in the <a href="#controller-roles">controller roles</a>.</td>
</tr>
<tr>
<td><b>system:node</b></td>
<td>None</td>
<td>Allows access to resources required by the kubelet, <b>including read access to all secrets, and write access to all pod status objects</b>.
You should use the <a href="/docs/reference/access-authn-authz/node/">Node authorizer</a> and <a href="/docs/reference/access-authn-authz/admission-controllers/#noderestriction">NodeRestriction admission plugin</a> instead of the <tt>system:node</tt> role, and allow granting API access to kubelets based on the Pods scheduled to run on them.
The <tt>system:node</tt> role only exists for compatibility with Kubernetes clusters upgraded from versions prior to v1.8.
</td>
</tr>
<tr>
<td><b>system:node-proxier</b></td>
<td><b>system:kube-proxy</b> user</td>
<td>Allows access to the resources required by the kube-proxy component.</td>
</tr>
</tbody>
</table>
### Other component roles
<table>
<colgroup><col style="width: 25%;" /><col style="width: 25%;" /><col /></colgroup>
<thead>
<tr>
<th>Default ClusterRole</th>
<th>Default ClusterRoleBinding</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><b>system:auth-delegator</b></td>
<td>None</td>
<td>Allows delegated authentication and authorization checks.
This is commonly used by add-on API servers for unified authentication and authorization.</td>
</tr>
<tr>
<td><b>system:heapster</b></td>
<td>None</td>
<td>Role for the <a href="https://github.com/kubernetes/heapster">Heapster</a> component (deprecated).</td>
</tr>
<tr>
<td><b>system:kube-aggregator</b></td>
<td>None</td>
<td>Role for the <a href="https://github.com/kubernetes/kube-aggregator">kube-aggregator</a> component.</td>
</tr>
<tr>
<td><b>system:kube-dns</b></td>
<td><b>kube-dns</b> service account in the <b>kube-system</b> namespace</td>
<td>Role for the <a href="/docs/concepts/services-networking/dns-pod-service/">kube-dns</a> component.</td>
</tr>
<tr>
<td><b>system:kubelet-api-admin</b></td>
<td>None</td>
<td>Allows full access to the kubelet API.</td>
</tr>
<tr>
<td><b>system:node-bootstrapper</b></td>
<td>None</td>
<td>Allows access to the resources required to perform
<a href="/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/">kubelet TLS bootstrapping</a>.</td>
</tr>
<tr>
<td><b>system:node-problem-detector</b></td>
<td>None</td>
<td>Role for the <a href="https://github.com/kubernetes/node-problem-detector">node-problem-detector</a> component.</td>
</tr>
<tr>
<td><b>system:persistent-volume-provisioner</b></td>
<td>None</td>
<td>Allows access to the resources required by most <a href="/docs/concepts/storage/persistent-volumes/#dynamic">dynamic volume provisioners</a>.</td>
</tr>
<tr>
<td><b>system:monitoring</b></td>
<td><b>system:monitoring</b> group</td>
<td>Allows read access to control-plane monitoring endpoints (i.e. liveness and readiness endpoints (<tt>/healthz</tt>, <tt>/livez</tt>, <tt>/readyz</tt>), the individual health-check endpoints (<tt>/healthz/*</tt>, <tt>/livez/*</tt>, <tt>/readyz/*</tt>), and <tt>/metrics</tt>). Note that individual health check endpoints and the metric endpoint may expose sensitive information.</td>
</tr>
</tbody>
</table>
### Roles for built-in controllers {#controller-roles}
The Kubernetes controller manager runs controllers that are built in to the Kubernetes
control plane.
When invoked with `--use-service-account-credentials`, kube-controller-manager starts each controller
using a separate service account.
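A minimal sketch of that invocation (all other required flags omitted):

```shell
kube-controller-manager --use-service-account-credentials=true ...
```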
Corresponding roles exist for each built-in controller, prefixed with `system:controller:`.
If the controller manager is not started with `--use-service-account-credentials`, it runs all control loops
using its own credential, which must be granted all the relevant roles.
These roles include:
* `system:controller:attachdetach-controller`
* `system:controller:certificate-controller`
* `system:controller:clusterrole-aggregation-controller`
* `system:controller:cronjob-controller`
* `system:controller:daemon-set-controller`
* `system:controller:deployment-controller`
* `system:controller:disruption-controller`
* `system:controller:endpoint-controller`
* `system:controller:expand-controller`
* `system:controller:generic-garbage-collector`
* `system:controller:horizontal-pod-autoscaler`
* `system:controller:job-controller`
* `system:controller:namespace-controller`
* `system:controller:node-controller`
* `system:controller:persistent-volume-binder`
* `system:controller:pod-garbage-collector`
* `system:controller:pv-protection-controller`
* `system:controller:pvc-protection-controller`
* `system:controller:replicaset-controller`
* `system:controller:replication-controller`
* `system:controller:resourcequota-controller`
* `system:controller:root-ca-cert-publisher`
* `system:controller:route-controller`
* `system:controller:service-account-controller`
* `system:controller:service-controller`
* `system:controller:statefulset-controller`
* `system:controller:ttl-controller`
## Privilege escalation prevention and bootstrapping
The RBAC API prevents users from escalating privileges by editing roles or role bindings.
Because this is enforced at the API level, it applies even when the RBAC authorizer is not in use.
### Restrictions on role creation or update
You can only create/update a role if at least one of the following things is true:
1. You already have all the permissions contained in the role, at the same scope as the object being modified
(cluster-wide for a ClusterRole, within the same namespace or cluster-wide for a Role).
2. You are granted explicit permission to perform the `escalate` verb on the `roles` or
`clusterroles` resource in the `rbac.authorization.k8s.io` API group.
For example, if `user-1` does not have the ability to list Secrets cluster-wide, they cannot create a ClusterRole
containing that permission. To allow a user to create/update roles:
1. Grant them a role that allows them to create/update Role or ClusterRole objects, as desired.
2. Grant them permission to include specific permissions in the roles they create/update:
* implicitly, by giving them those permissions (if they attempt to create or modify a Role or
ClusterRole with permissions they themselves have not been granted, the API request will be forbidden)
* or explicitly allow specifying any permission in a `Role` or `ClusterRole` by giving them
permission to perform the `escalate` verb on `roles` or `clusterroles` resources in the
`rbac.authorization.k8s.io` API group
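For example, a minimal sketch of that explicit grant (the ClusterRole name `role-escalator`
is made up for illustration and is not a built-in role):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: role-escalator # hypothetical name, not a built-in role
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["roles", "clusterroles"]
  verbs: ["escalate"]
```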
### Restrictions on role binding creation or update
You can only create/update a role binding if you already have all the permissions contained in the referenced role
(at the same scope as the role binding) *or* if you have been authorized to perform the `bind` verb on the referenced role.
For example, if `user-1` does not have the ability to list Secrets cluster-wide, they cannot create a ClusterRoleBinding
to a role that grants that permission. To allow a user to create/update role bindings:
1. Grant them a role that allows them to create/update RoleBinding or ClusterRoleBinding objects, as desired.
2. Grant them permissions needed to bind a particular role:
* implicitly, by giving them the permissions contained in the role.
* explicitly, by giving them permission to perform the `bind` verb on the particular Role (or ClusterRole).
For example, this ClusterRole and RoleBinding would allow `user-1` to grant other users the `admin`, `edit`, and `view` roles in the namespace `user-1-namespace`:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: role-grantor
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
resources: ["rolebindings"]
verbs: ["create"]
- apiGroups: ["rbac.authorization.k8s.io"]
resources: ["clusterroles"]
verbs: ["bind"]
# omit resourceNames to allow binding any ClusterRole
resourceNames: ["admin","edit","view"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: role-grantor-binding
namespace: user-1-namespace
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: role-grantor
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: user-1
```
When bootstrapping the first roles and role bindings, it is necessary for the initial user to grant permissions they do not yet have.
To bootstrap initial roles and role bindings:
* Use a credential with the "system:masters" group, which is bound to the "cluster-admin" super-user role by the default bindings.
## Command-line utilities
### `kubectl create role`
Creates a Role object defining permissions within a single namespace. Examples:
* Create a Role named "pod-reader" that allows users to perform `get`, `watch` and `list` on pods:
```shell
kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods
```
* Create a Role named "pod-reader" with resourceNames specified:
```shell
kubectl create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod
```
* Create a Role named "foo" with apiGroups specified:
```shell
kubectl create role foo --verb=get,list,watch --resource=replicasets.apps
```
* Create a Role named "foo" with subresource permissions:
```shell
kubectl create role foo --verb=get,list,watch --resource=pods,pods/status
```
* Create a Role named "my-component-lease-holder" with permissions to get/update a resource with a specific name:
```shell
kubectl create role my-component-lease-holder --verb=get,list,watch,update --resource=lease --resource-name=my-component
```
### `kubectl create clusterrole`
Creates a ClusterRole. Examples:
* Create a ClusterRole named "pod-reader" that allows user to perform `get`, `watch` and `list` on pods:
```shell
kubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods
```
* Create a ClusterRole named "pod-reader" with resourceNames specified:
```shell
kubectl create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod
```
* Create a ClusterRole named "foo" with apiGroups specified:
```shell
kubectl create clusterrole foo --verb=get,list,watch --resource=replicasets.apps
```
* Create a ClusterRole named "foo" with subresource permissions:
```shell
kubectl create clusterrole foo --verb=get,list,watch --resource=pods,pods/status
```
* Create a ClusterRole named "foo" with nonResourceURL specified:
```shell
kubectl create clusterrole "foo" --verb=get --non-resource-url=/logs/*
```
* Create a ClusterRole named "monitoring" with an aggregationRule specified:
```shell
kubectl create clusterrole monitoring --aggregation-rule="rbac.example.com/aggregate-to-monitoring=true"
```
### `kubectl create rolebinding`
Grants a Role or ClusterRole within a specific namespace. Examples:
* Within the namespace "acme", grant the permissions in the "admin" ClusterRole to a user named "bob":
```shell
kubectl create rolebinding bob-admin-binding --clusterrole=admin --user=bob --namespace=acme
```
* Within the namespace "acme", grant the permissions in the "view" ClusterRole to the service account in the namespace "acme" named "myapp":
```shell
kubectl create rolebinding myapp-view-binding --clusterrole=view --serviceaccount=acme:myapp --namespace=acme
```
* Within the namespace "acme", grant the permissions in the "view" ClusterRole to a service account in the namespace "myappnamespace" named "myapp":
```shell
kubectl create rolebinding myappnamespace-myapp-view-binding --clusterrole=view --serviceaccount=myappnamespace:myapp --namespace=acme
```
### `kubectl create clusterrolebinding`
Grants a ClusterRole across the entire cluster (all namespaces). Examples:
* Across the entire cluster, grant the permissions in the "cluster-admin" ClusterRole to a user named "root":
```shell
kubectl create clusterrolebinding root-cluster-admin-binding --clusterrole=cluster-admin --user=root
```
* Across the entire cluster, grant the permissions in the "system:node-proxier" ClusterRole to a user named "system:kube-proxy":
```shell
kubectl create clusterrolebinding kube-proxy-binding --clusterrole=system:node-proxier --user=system:kube-proxy
```
* Across the entire cluster, grant the permissions in the "view" ClusterRole to a service account named "myapp" in the namespace "acme":
```shell
kubectl create clusterrolebinding myapp-view-binding --clusterrole=view --serviceaccount=acme:myapp
```
### `kubectl auth reconcile` {#kubectl-auth-reconcile}
Creates or updates `rbac.authorization.k8s.io/v1` API objects from a manifest file.
Missing objects are created, and the containing namespace is created for namespaced objects, if required.
Existing roles are updated to include the permissions in the input objects,
and remove extra permissions if `--remove-extra-permissions` is specified.
Existing bindings are updated to include the subjects in the input objects,
and remove extra subjects if `--remove-extra-subjects` is specified.
Examples:
* Test applying a manifest file of RBAC objects, displaying changes that would be made:
```shell
kubectl auth reconcile -f my-rbac-rules.yaml --dry-run=client
```
* Apply a manifest file of RBAC objects, preserving any extra permissions (in roles) and any extra subjects (in bindings):
```shell
kubectl auth reconcile -f my-rbac-rules.yaml
```
* Apply a manifest file of RBAC objects, removing any extra permissions (in roles) and any extra subjects (in bindings):
```shell
kubectl auth reconcile -f my-rbac-rules.yaml --remove-extra-subjects --remove-extra-permissions
```
## ServiceAccount permissions {#service-account-permissions}
Default RBAC policies grant scoped permissions to control-plane components, nodes,
and controllers, but grant *no permissions* to service accounts outside the `kube-system` namespace
(beyond the permissions given by [API discovery roles](#discovery-roles)).
This allows you to grant particular roles to particular ServiceAccounts as needed.
Fine-grained role bindings provide greater security, but require more effort to administrate.
Broader grants can give unnecessary (and potentially escalating) API access to
ServiceAccounts, but are easier to administrate.
In order from most secure to least secure, the approaches are:
1. Grant a role to an application-specific service account (best practice)
This requires the application to specify a `serviceAccountName` in its pod spec,
and for the service account to be created (via the API, application manifest, `kubectl create serviceaccount`, etc.).
For example, grant read-only permission within "my-namespace" to the "my-sa" service account:
```shell
kubectl create rolebinding my-sa-view \
--clusterrole=view \
--serviceaccount=my-namespace:my-sa \
--namespace=my-namespace
```
2. Grant a role to the "default" service account in a namespace
If an application does not specify a `serviceAccountName`, it uses the "default" service account.
Permissions given to the "default" service account are available to any pod
in the namespace that does not specify a `serviceAccountName`.
For example, grant read-only permission within "my-namespace" to the "default" service account:
```shell
kubectl create rolebinding default-view \
--clusterrole=view \
--serviceaccount=my-namespace:default \
--namespace=my-namespace
```
Many [add-ons](/docs/concepts/cluster-administration/addons/) run as the
"default" service account in the `kube-system` namespace.
To allow those add-ons to run with super-user access, grant cluster-admin
permissions to the "default" service account in the `kube-system` namespace.
Enabling this means the `kube-system` namespace contains Secrets
that grant super-user access to your cluster's API.
```shell
kubectl create clusterrolebinding add-on-cluster-admin \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:default
```
3. Grant a role to all service accounts in a namespace
If you want all applications in a namespace to have a role, no matter what service account they use,
you can grant a role to the service account group for that namespace.
For example, grant read-only permission within "my-namespace" to all service accounts in that namespace:
```shell
kubectl create rolebinding serviceaccounts-view \
--clusterrole=view \
--group=system:serviceaccounts:my-namespace \
--namespace=my-namespace
```
4. Grant a limited role to all service accounts cluster-wide (discouraged)
If you don't want to manage permissions per-namespace, you can grant a cluster-wide role to all service accounts.
For example, grant read-only permission across all namespaces to all service accounts in the cluster:
```shell
kubectl create clusterrolebinding serviceaccounts-view \
--clusterrole=view \
--group=system:serviceaccounts
```
5. Grant super-user access to all service accounts cluster-wide (strongly discouraged)
If you don't care about partitioning permissions at all, you can grant super-user access to all service accounts.
This allows any application full access to your cluster, and also grants
any user with read access to Secrets (or the ability to create any pod)
full access to your cluster.
```shell
kubectl create clusterrolebinding serviceaccounts-cluster-admin \
--clusterrole=cluster-admin \
--group=system:serviceaccounts
```
## Write access for EndpointSlices and Endpoints {#write-access-for-endpoints}
Kubernetes clusters created before Kubernetes v1.22 include write access to
EndpointSlices (and Endpoints) in the aggregated "edit" and "admin" roles.
As a mitigation for [CVE-2021-25740](https://github.com/kubernetes/kubernetes/issues/103675),
this access is not part of the aggregated roles in clusters that you create using
Kubernetes v1.22 or later.
Existing clusters that have been upgraded to Kubernetes v1.22 will not be
subject to this change. The [CVE
announcement](https://github.com/kubernetes/kubernetes/issues/103675) includes
guidance for restricting this access in existing clusters.
If you want new clusters to retain this level of access in the aggregated roles,
you can create the following ClusterRole:
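A sketch of such an aggregated ClusterRole follows; the `custom:aggregate-to-edit:endpoints` name is illustrative, and you can adjust the rules to match exactly the access you want to restore:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: custom:aggregate-to-edit:endpoints # illustrative name; choose your own
  labels:
    # rules added to "edit" also flow into "admin", because the default
    # "edit" ClusterRole is aggregated into "admin"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["create", "delete", "deletecollection", "patch", "update"]
```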
## Upgrading from ABAC
Clusters that originally ran older Kubernetes versions often used
permissive ABAC policies, including granting full API access to all
service accounts.
Default RBAC policies grant scoped permissions to control-plane components, nodes,
and controllers, but grant *no permissions* to service accounts outside the `kube-system` namespace
(beyond the permissions given by [API discovery roles](#discovery-roles)).
While far more secure, this can be disruptive to existing workloads expecting to automatically receive API permissions.
Here are two approaches for managing this transition:
### Parallel authorizers
Run both the RBAC and ABAC authorizers, and specify a policy file that contains
the [legacy ABAC policy](/docs/reference/access-authn-authz/abac/#policy-file-format):
```shell
--authorization-mode=...,RBAC,ABAC --authorization-policy-file=mypolicy.json
```
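The legacy policy file holds one JSON policy object per line. For illustration only, a line that grants every service account full access to all resources (the kind of permissive policy described above) could look like:

```json
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"group": "system:serviceaccounts", "namespace": "*", "resource": "*", "apiGroup": "*"}}
```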
To explain that first command line option in detail: if earlier authorizers, such as Node,
deny a request, then the RBAC authorizer attempts to authorize the API request. If RBAC
also denies that API request, the ABAC authorizer is then run. This means that any request
allowed by *either* the RBAC or ABAC policies is allowed.
When the kube-apiserver is run with a log level of 5 or higher for the RBAC component
(`--vmodule=rbac*=5` or `--v=5`), you can see RBAC denials in the API server log
(prefixed with `RBAC`).
You can use that information to determine which roles need to be granted to which users, groups, or service accounts.
Once you have [granted roles to service accounts](#service-account-permissions) and workloads
are running with no RBAC denial messages in the server logs, you can remove the ABAC authorizer.
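Once ABAC has been removed, the API server's authorization flags list only the remaining authorizers; for example:

```shell
--authorization-mode=...,RBAC
```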
### Permissive RBAC permissions
You can replicate a permissive ABAC policy using RBAC role bindings.
The following policy allows **ALL** service accounts to act as cluster administrators.
Any application running in a container receives service account credentials automatically,
and could perform any action against the API, including viewing secrets and modifying permissions.
This is not a recommended policy.
```shell
kubectl create clusterrolebinding permissive-binding \
--clusterrole=cluster-admin \
--user=admin \
--user=kubelet \
--group=system:serviceaccounts
```
After you have transitioned to use RBAC, you should adjust the access controls
for your cluster to ensure that these meet your information security needs.
---
reviewers:
- liggitt
- mikedanese
- munnerz
- enj
title: Certificates and Certificate Signing Requests
api_metadata:
- apiVersion: "certificates.k8s.io/v1"
  kind: "CertificateSigningRequest"
  override_link_text: "CSR v1"
- apiVersion: "certificates.k8s.io/v1alpha1"
  kind: "ClusterTrustBundle"
content_type: concept
weight: 60
---
<!-- overview -->
Kubernetes certificate and trust bundle APIs enable automation of
[X.509](https://www.itu.int/rec/T-REC-X.509) credential provisioning by providing
a programmatic interface for clients of the Kubernetes API to request and obtain
X.509 certificates from a Certificate Authority (CA).
There is also experimental (alpha) support for distributing [trust bundles](#cluster-trust-bundles).
<!-- body -->
## Certificate signing requests
A [CertificateSigningRequest](/docs/reference/kubernetes-api/authentication-resources/certificate-signing-request-v1/)
(CSR) resource is used to request that a certificate be signed
by a denoted signer, after which the request may be approved or denied before
finally being signed.
### Request signing process
The CertificateSigningRequest resource type allows a client to ask for an X.509 certificate
to be issued, based on a signing request.
The CertificateSigningRequest object includes a PEM-encoded PKCS#10 signing request in
the `spec.request` field. The CertificateSigningRequest denotes the signer (the
recipient that the request is being made to) using the `spec.signerName` field.
Note that `spec.signerName` is a required key after API version `certificates.k8s.io/v1`.
In Kubernetes v1.22 and later, clients may optionally set the `spec.expirationSeconds`
field to request a particular lifetime for the issued certificate. The minimum valid
value for this field is `600`, i.e. ten minutes.
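For illustration, a CertificateSigningRequest that asks the built-in `kubernetes.io/kube-apiserver-client` signer for a one-day client certificate might look like the following; the object name is illustrative and the `request` value is a placeholder for your base64-encoded PKCS#10 data:

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: my-csr # illustrative name
spec:
  request: <base64-encoded PKCS#10 certificate request> # placeholder
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 86400 # one day
  usages:
  - client auth
```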
Once created, a CertificateSigningRequest must be approved before it can be signed.
Depending on the signer selected, a CertificateSigningRequest may be automatically approved
by a controller.
Otherwise, a CertificateSigningRequest must be manually approved either via the REST API (or client-go)
or by running `kubectl certificate approve`. Likewise, a CertificateSigningRequest may also be denied,
which tells the configured signer that it must not sign the request.
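For example, to manually approve or deny a CSR named `my-csr` (an illustrative name) from the command line:

```shell
kubectl certificate approve my-csr
kubectl certificate deny my-csr
```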
For certificates that have been approved, the next step is signing. The relevant signing controller
first validates that the signing conditions are met and then creates a certificate.
The signing controller then updates the CertificateSigningRequest, storing the new certificate into
the `status.certificate` field of the existing CertificateSigningRequest object. The
`status.certificate` field is either empty or contains an X.509 certificate, encoded in PEM format.
The CertificateSigningRequest `status.certificate` field is empty until the signer does this.
Once the `status.certificate` field has been populated, the request has been completed and clients can now
fetch the signed certificate PEM data from the CertificateSigningRequest resource.
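For example, assuming a CSR named `my-csr`, you can save the issued certificate locally; note that `status.certificate` is base64-encoded when retrieved as JSON, so it needs to be decoded:

```shell
kubectl get csr my-csr -o jsonpath='{.status.certificate}' | base64 -d > my-csr.crt
```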
The signers can instead deny certificate signing if the approval conditions are not met.
In order to reduce the number of old CertificateSigningRequest resources left in a cluster, a garbage collection
controller runs periodically. The garbage collection removes CertificateSigningRequests that have not changed
state for some duration:
* Approved requests: automatically deleted after 1 hour
* Denied requests: automatically deleted after 1 hour
* Failed requests: automatically deleted after 1 hour
* Pending requests: automatically deleted after 24 hours
* All requests: automatically deleted after the issued certificate has expired
### Certificate signing authorization {#authorization}
To allow creating a CertificateSigningRequest and retrieving any CertificateSigningRequest:
* Verbs: `create`, `get`, `list`, `watch`, group: `certificates.k8s.io`, resource: `certificatesigningrequests`
For example:
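A ClusterRole along these lines would grant that access; the `csr-creator` name is illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: csr-creator # illustrative name
rules:
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests
  verbs:
  - create
  - get
  - list
  - watch
```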
To allow approving a CertificateSigningRequest:
* Verbs: `get`, `list`, `watch`, group: `certificates.k8s.io`, resource: `certificatesigningrequests`
* Verbs: `update`, group: `certificates.k8s.io`, resource: `certificatesigningrequests/approval`
* Verbs: `approve`, group: `certificates.k8s.io`, resource: `signers`, resourceName: `<signerNameDomain>/<signerNamePath>` or `<signerNameDomain>/*`
For example:
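A ClusterRole along these lines would grant approval rights for a single example signer; the `csr-approver` name and the signer name are illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: csr-approver # illustrative name
rules:
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests/approval
  verbs:
  - update
- apiGroups:
  - certificates.k8s.io
  resources:
  - signers
  resourceNames:
  - example.com/my-signer-name # illustrative; use <signerNameDomain>/* for all signers in a domain
  verbs:
  - approve
```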
To allow signing a CertificateSigningRequest:
* Verbs: `get`, `list`, `watch`, group: `certificates.k8s.io`, resource: `certificatesigningrequests`
* Verbs: `update`, group: `certificates.k8s.io`, resource: `certificatesigningrequests/status`
* Verbs: `sign`, group: `certificates.k8s.io`, resource: `signers`, resourceName: `<signerNameDomain>/<signerNamePath>` or `<signerNameDomain>/*`
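For example, a ClusterRole along these lines would grant signing rights for a single example signer; the `csr-signer` name and the signer name are illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: csr-signer # illustrative name
rules:
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests/status
  verbs:
  - update
- apiGroups:
  - certificates.k8s.io
  resources:
  - signers
  resourceNames:
  - example.com/my-signer-name # illustrative; use <signerNameDomain>/* for all signers in a domain
  verbs:
  - sign
```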
## Signers
Signers abstractly represent the entity or entities that might sign, or have
signed, a security certificate.
Any signer that is made available for use outside a particular cluster should provide information
about how the signer works, so that consumers can understand what that means for CertificateSigningRequests
and (if enabled) [ClusterTrustBundles](#cluster-trust-bundles).
This includes:
1. **Trust distribution**: how trust anchors (CA certificates or certificate bundles) are distributed.
1. **Permitted subjects**: any restrictions on and behavior when a disallowed subject is requested.
1. **Permitted x509 extensions**: including IP subjectAltNames, DNS subjectAltNames,
Email subjectAltNames, URI subjectAltNames etc, and behavior when a disallowed extension is requested.
1. **Permitted key usages / extended key usages**: any restrictions on and behavior
when usages different than the signer-determined usages are specified in the CSR.
1. **Expiration/certificate lifetime**: whether it is fixed by the signer, configurable by the admin, determined by the CSR `spec.expirationSeconds` field, etc
and the behavior when the signer-determined expiration is different from the CSR `spec.expirationSeconds` field.
1. **CA bit allowed/disallowed**: and behavior if a CSR contains a request for a CA certificate when the signer does not permit it.
Commonly, the `status.certificate` field of a CertificateSigningRequest contains a
single PEM-encoded X.509 certificate once the CSR is approved and the certificate is issued.
Some signers store multiple certificates into the `status.certificate` field. In
that case, the documentation for the signer should specify the meaning of
additional certificates; for example, this might be the certificate plus
intermediates to be presented during TLS handshakes.
If you want to make the _trust anchor_ (root certificate) available, this should be done
separately from a CertificateSigningRequest and its `status.certificate` field. For example,
you could use a ClusterTrustBundle.
The PKCS#10 signing request format does not have a standard mechanism to specify a
certificate expiration or lifetime. The expiration or lifetime therefore has to be set
through the `spec.expirationSeconds` field of the CSR object. The built-in signers
use the `ClusterSigningDuration` configuration option (the `--cluster-signing-duration`
command-line flag of the kube-controller-manager), which defaults to 1 year, as the
default when no `spec.expirationSeconds` is specified. When `spec.expirationSeconds`
is specified, the minimum of `spec.expirationSeconds` and `ClusterSigningDuration` is
used.
The `spec.expirationSeconds` field was added in Kubernetes v1.22. Earlier versions of Kubernetes do not honor this field.
Kubernetes API servers prior to v1.22 will silently drop this field when the object is created.
### Kubernetes signers
Kubernetes provides built-in signers that each have a well-known `signerName`:
1. `kubernetes.io/kube-apiserver-client`: signs certificates that will be honored as client certificates by the API server.
Never auto-approved by kube-controller-manager.
1. Trust distribution: signed certificates must be honored as client certificates by the API server. The CA bundle is not distributed by any other means.
1. Permitted subjects - no subject restrictions, but approvers and signers may choose not to approve or sign.
Certain subjects like cluster-admin level users or groups vary between distributions and installations,
but deserve additional scrutiny before approval and signing.
The `CertificateSubjectRestriction` admission plugin is enabled by default to restrict `system:masters`,
but it is often not the only cluster-admin subject in a cluster.
1. Permitted x509 extensions - honors subjectAltName and key usage extensions and discards other extensions.
1. Permitted key usages - must include `["client auth"]`. Must not include key usages beyond `["digital signature", "key encipherment", "client auth"]`.
1. Expiration/certificate lifetime - for the kube-controller-manager implementation of this signer, set to the minimum
of the `--cluster-signing-duration` option or, if specified, the `spec.expirationSeconds` field of the CSR object.
1. CA bit allowed/disallowed - not allowed.
1. `kubernetes.io/kube-apiserver-client-kubelet`: signs client certificates that will be honored as client certificates by the
API server.
May be auto-approved by kube-controller-manager.
1. Trust distribution: signed certificates must be honored as client certificates by the API server. The CA bundle
is not distributed by any other means.
1. Permitted subjects - organizations are exactly `["system:nodes"]`, common name is "`system:node:${NODE_NAME}`".
1. Permitted x509 extensions - honors key usage extensions, forbids subjectAltName extensions and drops other extensions.
1. Permitted key usages - `["key encipherment", "digital signature", "client auth"]` or `["digital signature", "client auth"]`.
1. Expiration/certificate lifetime - for the kube-controller-manager implementation of this signer, set to the minimum
of the `--cluster-signing-duration` option or, if specified, the `spec.expirationSeconds` field of the CSR object.
1. CA bit allowed/disallowed - not allowed.
1. `kubernetes.io/kubelet-serving`: signs serving certificates that are honored as a valid kubelet serving certificate
by the API server, but has no other guarantees.
   Never auto-approved by the kube-controller-manager.
1. Trust distribution: signed certificates must be honored by the API server as valid to terminate connections to a kubelet.
The CA bundle is not distributed by any other means.
1. Permitted subjects - organizations are exactly `["system:nodes"]`, common name is "`system:node:${NODE_NAME}`".
1. Permitted x509 extensions - honors key usage and DNSName/IPAddress subjectAltName extensions, forbids EmailAddress and
URI subjectAltName extensions, drops other extensions. At least one DNS or IP subjectAltName must be present.
1. Permitted key usages - `["key encipherment", "digital signature", "server auth"]` or `["digital signature", "server auth"]`.
1. Expiration/certificate lifetime - for the kube-controller-manager implementation of this signer, set to the minimum
of the `--cluster-signing-duration` option or, if specified, the `spec.expirationSeconds` field of the CSR object.
1. CA bit allowed/disallowed - not allowed.
1. `kubernetes.io/legacy-unknown`: has no guarantees for trust at all. Some third-party distributions of Kubernetes
may honor client certificates signed by it. The stable CertificateSigningRequest API (version `certificates.k8s.io/v1` and later)
   does not allow the `signerName` to be set to `kubernetes.io/legacy-unknown`.
   Never auto-approved by the kube-controller-manager.
1. Trust distribution: None. There is no standard trust or distribution for this signer in a Kubernetes cluster.
1. Permitted subjects - any
1. Permitted x509 extensions - honors subjectAltName and key usage extensions and discards other extensions.
1. Permitted key usages - any
1. Expiration/certificate lifetime - for the kube-controller-manager implementation of this signer, set to the minimum
of the `--cluster-signing-duration` option or, if specified, the `spec.expirationSeconds` field of the CSR object.
1. CA bit allowed/disallowed - not allowed.
The kube-controller-manager implements [control plane signing](#signer-control-plane) for each of the built-in
signers. Failures for all of these signers are reported only in the kube-controller-manager logs.
The `spec.expirationSeconds` field was added in Kubernetes v1.22. Earlier versions of Kubernetes do not honor this field.
Kubernetes API servers prior to v1.22 will silently drop this field when the object is created.
Distribution of trust happens out of band for these signers. Any trust beyond what is described above is strictly
coincidental. For instance, some distributions may honor `kubernetes.io/legacy-unknown` certificates as client certificates for the
kube-apiserver, but this is not a standard.
None of these usages are related to ServiceAccount token secrets `.data[ca.crt]` in any way. That CA bundle is only
guaranteed to verify a connection to the API server using the default service (`kubernetes.default.svc`).
### Custom signers
You can also introduce your own custom signer, which should have a similarly prefixed name that uses your
own domain. For example, if you represent an open source project that uses the domain `open-fictional.example`,
then you might use `issuer.open-fictional.example/service-mesh` as a signer name.
A custom signer uses the Kubernetes API to issue a certificate. See [API-based signers](#signer-api).
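For example, a CertificateSigningRequest addressed to that custom signer might look like the following sketch (the object name and the request payload are placeholders):

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: my-workload.my-namespace   # placeholder name
spec:
  request: <base64-encoded PKCS#10 CSR>
  signerName: issuer.open-fictional.example/service-mesh
  usages:
  - digital signature
  - key encipherment
  - server auth
```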
## Signing
### Control plane signer {#signer-control-plane}
The Kubernetes control plane implements each of the
[Kubernetes signers](/docs/reference/access-authn-authz/certificate-signing-requests/#kubernetes-signers),
as part of the kube-controller-manager.
Prior to Kubernetes v1.18, the kube-controller-manager would sign any CSRs that
were marked as approved.
The `spec.expirationSeconds` field was added in Kubernetes v1.22.
Earlier versions of Kubernetes do not honor this field.
Kubernetes API servers prior to v1.22 will silently drop this field when the object is created.
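For the kube-controller-manager to act as a signer, it needs access to a CA certificate and key. A minimal sketch of the relevant flags is shown below; the file paths are illustrative and vary between distributions:

```shell
kube-controller-manager \
  --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt \
  --cluster-signing-key-file=/etc/kubernetes/pki/ca.key \
  --cluster-signing-duration=8760h   # issued certificate lifetime; defaults to 1 year
```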
### API-based signers {#signer-api}
Users of the REST API can sign CSRs by submitting an UPDATE request to the `status`
subresource of the CSR to be signed.
As part of this request, the `status.certificate` field should be set to contain the
signed certificate. This field contains one or more PEM-encoded certificates.
All PEM blocks must have the "CERTIFICATE" label, contain no headers,
and the encoded data must be a BER-encoded ASN.1 Certificate structure
as described in [section 4 of RFC5280](https://tools.ietf.org/html/rfc5280#section-4.1).
Example certificate content:
```
-----BEGIN CERTIFICATE-----
MIIDgjCCAmqgAwIBAgIUC1N1EJ4Qnsd322BhDPRwmg3b/oAwDQYJKoZIhvcNAQEL
BQAwXDELMAkGA1UEBhMCeHgxCjAIBgNVBAgMAXgxCjAIBgNVBAcMAXgxCjAIBgNV
BAoMAXgxCjAIBgNVBAsMAXgxCzAJBgNVBAMMAmNhMRAwDgYJKoZIhvcNAQkBFgF4
MB4XDTIwMDcwNjIyMDcwMFoXDTI1MDcwNTIyMDcwMFowNzEVMBMGA1UEChMMc3lz
dGVtOm5vZGVzMR4wHAYDVQQDExVzeXN0ZW06bm9kZToxMjcuMC4wLjEwggEiMA0G
CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDne5X2eQ1JcLZkKvhzCR4Hxl9+ZmU3
+e1zfOywLdoQxrPi+o4hVsUH3q0y52BMa7u1yehHDRSaq9u62cmi5ekgXhXHzGmm
kmW5n0itRECv3SFsSm2DSghRKf0mm6iTYHWDHzUXKdm9lPPWoSOxoR5oqOsm3JEh
Q7Et13wrvTJqBMJo1GTwQuF+HYOku0NF/DLqbZIcpI08yQKyrBgYz2uO51/oNp8a
sTCsV4OUfyHhx2BBLUo4g4SptHFySTBwlpRWBnSjZPOhmN74JcpTLB4J5f4iEeA7
2QytZfADckG4wVkhH3C2EJUmRtFIBVirwDn39GXkSGlnvnMgF3uLZ6zNAgMBAAGj
YTBfMA4GA1UdDwEB/wQEAwIFoDATBgNVHSUEDDAKBggrBgEFBQcDAjAMBgNVHRMB
Af8EAjAAMB0GA1UdDgQWBBTREl2hW54lkQBDeVCcd2f2VSlB1DALBgNVHREEBDAC
ggAwDQYJKoZIhvcNAQELBQADggEBABpZjuIKTq8pCaX8dMEGPWtAykgLsTcD2jYr
L0/TCrqmuaaliUa42jQTt2OVsVP/L8ofFunj/KjpQU0bvKJPLMRKtmxbhXuQCQi1
qCRkp8o93mHvEz3mTUN+D1cfQ2fpsBENLnpS0F4G/JyY2Vrh19/X8+mImMEK5eOy
o0BMby7byUj98WmcUvNCiXbC6F45QTmkwEhMqWns0JZQY+/XeDhEcg+lJvz9Eyo2
aGgPsye1o3DpyXnyfJWAWMhOz7cikS5X2adesbgI86PhEHBXPIJ1v13ZdfCExmdd
M1fLPhLyR54fGaY+7/X8P9AZzPefAkwizeXwe9ii6/a08vWoiE4=
-----END CERTIFICATE-----
```
Non-PEM content may appear before or after the CERTIFICATE PEM blocks and is unvalidated,
to allow for explanatory text as described in [section 5.2 of RFC7468](https://www.rfc-editor.org/rfc/rfc7468#section-5.2).
When encoded in JSON or YAML, this field is base-64 encoded.
A CertificateSigningRequest containing the example certificate above would look like this:
```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
...
status:
certificate: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JS..."
```
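As an illustration, a signer could be scripted around `kubectl` and `openssl`: extract the PKCS#10 request, sign it with a CA, and write the result back to the `status` subresource. This is only a sketch; it assumes a CA key pair in `ca.crt`/`ca.key`, a CSR named `my-csr`, `sign` permission on the relevant signer, and a kubectl release that supports `--subresource` (v1.24 or later). A production signer would normally use the API directly and set appropriate certificate extensions.

```shell
# Extract and decode the PKCS#10 request from the (approved) CSR object
kubectl get csr my-csr -o jsonpath='{.spec.request}' | base64 -d > request.csr

# Sign the request with your CA (1-day validity as an example)
openssl x509 -req -in request.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 1 -out signed.crt

# Store the PEM certificate (base64-encoded) in the status subresource
kubectl patch csr my-csr --subresource=status --type=merge \
  -p "{\"status\":{\"certificate\":\"$(base64 < signed.crt | tr -d '\n')\"}}"
```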
## Approval or rejection {#approval-rejection}
Before a [signer](#signers) issues a certificate based on a CertificateSigningRequest,
the signer typically checks that the issuance for that CSR has been _approved_.
### Control plane automated approval {#approval-rejection-control-plane}
The kube-controller-manager ships with a built-in approver for certificates with
a signerName of `kubernetes.io/kube-apiserver-client-kubelet` that delegates various
permissions on CSRs for node credentials to authorization.
The kube-controller-manager POSTs SubjectAccessReview resources to the API server
in order to check authorization for certificate approval.
### Approval or rejection using `kubectl` {#approval-rejection-kubectl}
A Kubernetes administrator (with appropriate permissions) can manually approve
(or deny) CertificateSigningRequests by using the `kubectl certificate
approve` and `kubectl certificate deny` commands.
To approve a CSR with kubectl:
```shell
kubectl certificate approve <certificate-signing-request-name>
```
Likewise, to deny a CSR:
```shell
kubectl certificate deny <certificate-signing-request-name>
```
### Approval or rejection using the Kubernetes API {#approval-rejection-api-client}
Users of the REST API can approve CSRs by submitting an UPDATE request to the `approval`
subresource of the CSR to be approved. For example, you could write an automation
that watches for a particular kind of CSR and then sends an UPDATE to approve them.
When you make an approval or rejection request, set either the `Approved` or `Denied`
status condition based on the state you determine:
For `Approved` CSRs:
```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
...
status:
conditions:
- lastUpdateTime: "2020-02-08T11:37:35Z"
lastTransitionTime: "2020-02-08T11:37:35Z"
message: Approved by my custom approver controller
reason: ApprovedByMyPolicy # You can set this to any string
type: Approved
```
For `Denied` CSRs:
```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
...
status:
conditions:
- lastUpdateTime: "2020-02-08T11:37:35Z"
lastTransitionTime: "2020-02-08T11:37:35Z"
message: Denied by my custom approver controller
reason: DeniedByMyPolicy # You can set this to any string
type: Denied
```
It's usual to set `status.conditions.reason` to a machine-friendly reason
code using TitleCase; this is a convention but you can set it to anything
you like. If you want to add a note for human consumption, use the
`status.conditions.message` field.
## Cluster trust bundles {#cluster-trust-bundles}
To use this API, you must enable the `ClusterTrustBundle`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
_and_ the `certificates.k8s.io/v1alpha1` API group.
A ClusterTrustBundle is a cluster-scoped object for distributing X.509 trust
anchors (root certificates) to workloads within the cluster. ClusterTrustBundles are designed
to work well with the [signer](#signers) concept from CertificateSigningRequests.
ClusterTrustBundles can be used in two modes:
[signer-linked](#ctb-signer-linked) and [signer-unlinked](#ctb-signer-unlinked).
### Common properties and validation {#ctb-common}
All ClusterTrustBundle objects have strong validation on the contents of their
`trustBundle` field. That field must contain one or more X.509 certificates,
DER-serialized, each wrapped in a PEM `CERTIFICATE` block. The certificates
must parse as valid X.509 certificates.
Esoteric PEM features like inter-block data and intra-block headers are either
rejected during object validation, or can be ignored by consumers of the object.
Additionally, consumers are allowed to reorder the certificates in
the bundle with their own arbitrary but stable ordering.
ClusterTrustBundle objects should be considered world-readable within the
cluster. If your cluster uses [RBAC](/docs/reference/access-authn-authz/rbac/)
authorization, all ServiceAccounts have a default grant that allows them to
**get**, **list**, and **watch** all ClusterTrustBundle objects.
If you use your own authorization mechanism and you have enabled
ClusterTrustBundles in your cluster, you should set up an equivalent rule to
make these objects public within the cluster, so that they work as intended.
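As an illustration of what an equivalent rule looks like when expressed with RBAC (the object names below are placeholders):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: read-clustertrustbundles   # placeholder name
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["clustertrustbundles"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-clustertrustbundles   # placeholder name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: read-clustertrustbundles
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
```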
If you do not have permission to list cluster trust bundles by default in your
cluster, you can impersonate a service account you have access to in order to
see available ClusterTrustBundles:
```bash
kubectl get clustertrustbundles --as='system:serviceaccount:mynamespace:default'
```
### Signer-linked ClusterTrustBundles {#ctb-signer-linked}
Signer-linked ClusterTrustBundles are associated with a _signer name_, like this:
```yaml
apiVersion: certificates.k8s.io/v1alpha1
kind: ClusterTrustBundle
metadata:
name: example.com:mysigner:foo
spec:
signerName: example.com/mysigner
trustBundle: "<... PEM data ...>"
```
These ClusterTrustBundles are intended to be maintained by a signer-specific
controller in the cluster, so they have several security features:
* To create or update a signer-linked ClusterTrustBundle, you must be permitted
to **attest** on the signer (custom authorization verb `attest`,
API group `certificates.k8s.io`; resource path `signers`). You can configure
authorization for the specific resource name
`<signerNameDomain>/<signerNamePath>` or match a pattern such as
`<signerNameDomain>/*`.
* Signer-linked ClusterTrustBundles **must** be named with a prefix derived from
their `spec.signerName` field. Slashes (`/`) are replaced with colons (`:`),
and a final colon is appended. This is followed by an arbitrary name. For
example, the signer `example.com/mysigner` can be linked to a
ClusterTrustBundle `example.com:mysigner:<arbitrary-name>`.
Signer-linked ClusterTrustBundles will typically be consumed in workloads
by a combination of a
[field selector](/docs/concepts/overview/working-with-objects/field-selectors/) on the signer name, and a separate
[label selector](/docs/concepts/overview/working-with-objects/labels/#label-selectors).
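For example, a consumer interested in bundles published for the `example.com/mysigner` signer might list them like this (a sketch; this relies on the alpha API's support for selecting on `spec.signerName`):

```shell
kubectl get clustertrustbundles --field-selector='spec.signerName=example.com/mysigner'
```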
### Signer-unlinked ClusterTrustBundles {#ctb-signer-unlinked}
Signer-unlinked ClusterTrustBundles have an empty `spec.signerName` field, like this:
```yaml
apiVersion: certificates.k8s.io/v1alpha1
kind: ClusterTrustBundle
metadata:
name: foo
spec:
# no signerName specified, so the field is blank
trustBundle: "<... PEM data ...>"
```
They are primarily intended for cluster configuration use cases.
Each signer-unlinked ClusterTrustBundle is an independent object, in contrast to the
customary grouping behavior of signer-linked ClusterTrustBundles.
Signer-unlinked ClusterTrustBundles have no `attest` verb requirement.
Instead, you control access to them directly using the usual mechanisms,
such as role-based access control.
To distinguish them from signer-linked ClusterTrustBundles, the names of
signer-unlinked ClusterTrustBundles **must not** contain a colon (`:`).
### Accessing ClusterTrustBundles from pods {#ctb-projection}
The contents of ClusterTrustBundles can be injected into the container filesystem, similar to ConfigMaps and Secrets.
See the [clusterTrustBundle projected volume source](/docs/concepts/storage/projected-volumes#clustertrustbundle) for more details.
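As a sketch of what that looks like, the Pod below mounts all bundles for a hypothetical signer into the container filesystem; the signer name, label selector, and paths are illustrative, and this projected volume type is only available when the corresponding feature gate is enabled:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ctb-example   # placeholder name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    volumeMounts:
    - name: trust-anchors
      mountPath: /etc/ssl/custom
      readOnly: true
  volumes:
  - name: trust-anchors
    projected:
      sources:
      - clusterTrustBundle:
          signerName: example.com/mysigner
          labelSelector:
            matchLabels:
              example.com/inject: "true"
          path: bundle.pem
```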
<!-- TODO this should become a task page -->
## How to issue a certificate for a user {#normal-user}
A few steps are required for a normal user to be able to
authenticate and invoke an API. First, this user must have a certificate issued
by the Kubernetes cluster, and then present that certificate to the Kubernetes API.
### Create private key
The following commands show how to generate a private key and a CSR. It is
important to set the CN and O attributes of the CSR. CN is the name of the user and
O is the group that this user will belong to. You can refer to
[RBAC](/docs/reference/access-authn-authz/rbac/) for standard groups.
```shell
openssl genrsa -out myuser.key 2048
openssl req -new -key myuser.key -out myuser.csr -subj "/CN=myuser"
```
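The command above sets only the CN. To also place the user in a group through the O attribute, you could generate the CSR like this (the group name `developers` is only an example):

```shell
openssl req -new -key myuser.key -out myuser.csr -subj "/CN=myuser/O=developers"
```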
### Create a CertificateSigningRequest {#create-certificatessigningrequest}
Create a [CertificateSigningRequest](/docs/reference/kubernetes-api/authentication-resources/certificate-signing-request-v1/)
and submit it to a Kubernetes cluster via kubectl. Below is a script to generate the
CertificateSigningRequest.
```shell
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
name: myuser
spec:
request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZqQ0NBVDRDQVFBd0VURVBNQTBHQTFVRUF3d0dZVzVuWld4aE1JSUJJakFOQmdrcWhraUc5dzBCQVFFRgpBQU9DQVE4QU1JSUJDZ0tDQVFFQTByczhJTHRHdTYxakx2dHhWTTJSVlRWMDNHWlJTWWw0dWluVWo4RElaWjBOCnR2MUZtRVFSd3VoaUZsOFEzcWl0Qm0wMUFSMkNJVXBGd2ZzSjZ4MXF3ckJzVkhZbGlBNVhwRVpZM3ExcGswSDQKM3Z3aGJlK1o2MVNrVHF5SVBYUUwrTWM5T1Nsbm0xb0R2N0NtSkZNMUlMRVI3QTVGZnZKOEdFRjJ6dHBoaUlFMwpub1dtdHNZb3JuT2wzc2lHQ2ZGZzR4Zmd4eW8ybmlneFNVekl1bXNnVm9PM2ttT0x1RVF6cXpkakJ3TFJXbWlECklmMXBMWnoyalVnald4UkhCM1gyWnVVV1d1T09PZnpXM01LaE8ybHEvZi9DdS8wYk83c0x0MCt3U2ZMSU91TFcKcW90blZtRmxMMytqTy82WDNDKzBERHk5aUtwbXJjVDBnWGZLemE1dHJRSURBUUFCb0FBd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBR05WdmVIOGR4ZzNvK21VeVRkbmFjVmQ1N24zSkExdnZEU1JWREkyQTZ1eXN3ZFp1L1BVCkkwZXpZWFV0RVNnSk1IRmQycVVNMjNuNVJsSXJ3R0xuUXFISUh5VStWWHhsdnZsRnpNOVpEWllSTmU3QlJvYXgKQVlEdUI5STZXT3FYbkFvczFqRmxNUG5NbFpqdU5kSGxpT1BjTU1oNndLaTZzZFhpVStHYTJ2RUVLY01jSVUyRgpvU2djUWdMYTk0aEpacGk3ZnNMdm1OQUxoT045UHdNMGM1dVJVejV4T0dGMUtCbWRSeEgvbUNOS2JKYjFRQm1HCkkwYitEUEdaTktXTU0xMzhIQXdoV0tkNjVoVHdYOWl4V3ZHMkh4TG1WQzg0L1BHT0tWQW9FNkpsYWFHdTlQVmkKdjlOSjVaZlZrcXdCd0hKbzZXdk9xVlA3SVFjZmg3d0drWm89Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=
signerName: kubernetes.io/kube-apiserver-client
expirationSeconds: 86400 # one day
usages:
- client auth
EOF
```
Some points to note:
- `usages` has to be '`client auth`'
- `expirationSeconds` could be made longer (e.g. `864000` for ten days) or shorter (e.g. `3600` for one hour)
- `request` is the base64 encoded value of the CSR file content.
You can get the content using this command:
```shell
cat myuser.csr | base64 | tr -d "\n"
```
### Approve the CertificateSigningRequest {#approve-certificate-signing-request}
Use kubectl to find and approve the CSR you created earlier.
Get the list of CSRs:
```shell
kubectl get csr
```
Approve the CSR:
```shell
kubectl certificate approve myuser
```
### Get the certificate
Retrieve the certificate from the CSR:
```shell
kubectl get csr/myuser -o yaml
```
The certificate value is in Base64-encoded format under `status.certificate`.
Export the issued certificate from the CertificateSigningRequest.
```shell
kubectl get csr myuser -o jsonpath='{.status.certificate}'| base64 -d > myuser.crt
```
### Create Role and RoleBinding
With the certificate created, it is time to define the Role and RoleBinding for
this user to access Kubernetes cluster resources.
This is a sample command to create a Role for this new user:
```shell
kubectl create role developer --verb=create --verb=get --verb=list --verb=update --verb=delete --resource=pods
```
This is a sample command to create a RoleBinding for this new user:
```shell
kubectl create rolebinding developer-binding-myuser --role=developer --user=myuser
```
### Add to kubeconfig
The last step is to add this user into the kubeconfig file.
First, you need to add new credentials:
```shell
kubectl config set-credentials myuser --client-key=myuser.key --client-certificate=myuser.crt --embed-certs=true
```
Then, you need to add the context:
```shell
kubectl config set-context myuser --cluster=kubernetes --user=myuser
```
To test it, change the context to `myuser`:
```shell
kubectl config use-context myuser
```
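If the Role and RoleBinding created earlier are in place (in the namespace where the RoleBinding was created), a simple check such as listing Pods should now succeed:

```shell
kubectl get pods
```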
## What's next
* Read [Manage TLS Certificates in a Cluster](/docs/tasks/tls/managing-tls-in-a-cluster/)
* View the source code for the kube-controller-manager built-in
[signer](https://github.com/kubernetes/kubernetes/blob/32ec6c212ec9415f604ffc1f4c1f29b782968ff1/pkg/controller/certificates/signer/cfssl_signer.go)
* View the source code for the kube-controller-manager built-in
[approver](https://github.com/kubernetes/kubernetes/blob/32ec6c212ec9415f604ffc1f4c1f29b782968ff1/pkg/controller/certificates/approver/sarapprove.go)
* For details of X.509 itself, refer to [RFC 5280](https://tools.ietf.org/html/rfc5280#section-3.1) section 3.1
* For information on the syntax of PKCS#10 certificate signing requests, refer to [RFC 2986](https://tools.ietf.org/html/rfc2986)
* Read about the ClusterTrustBundle API
CertificateSigningRequest and retrieving any CertificateSigningRequest Verbs create get list watch group certificates k8s io resource certificatesigningrequests For example To allow approving a CertificateSigningRequest Verbs get list watch group certificates k8s io resource certificatesigningrequests Verbs update group certificates k8s io resource certificatesigningrequests approval Verbs approve group certificates k8s io resource signers resourceName signerNameDomain signerNamePath or signerNameDomain For example To allow signing a CertificateSigningRequest Verbs get list watch group certificates k8s io resource certificatesigningrequests Verbs update group certificates k8s io resource certificatesigningrequests status Verbs sign group certificates k8s io resource signers resourceName signerNameDomain signerNamePath or signerNameDomain Signers Signers abstractly represent the entity or entities that might sign or have signed a security certificate Any signer that is made available for outside a particular cluster should provide information about how the signer works so that consumers can understand what that means for CertifcateSigningRequests and if enabled ClusterTrustBundles cluster trust bundles This includes 1 Trust distribution how trust anchors CA certificates or certificate bundles are distributed 1 Permitted subjects any restrictions on and behavior when a disallowed subject is requested 1 Permitted x509 extensions including IP subjectAltNames DNS subjectAltNames Email subjectAltNames URI subjectAltNames etc and behavior when a disallowed extension is requested 1 Permitted key usages extended key usages any restrictions on and behavior when usages different than the signer determined usages are specified in the CSR 1 Expiration certificate lifetime whether it is fixed by the signer configurable by the admin determined by the CSR spec expirationSeconds field etc and the behavior when the signer determined expiration is different from the CSR spec expirationSeconds field 1 CA bit allowed disallowed and behavior if a CSR contains a request a for a CA certificate when the signer does not permit it Commonly the status certificate field of a CertificateSigningRequest contains a single PEM encoded X 509 certificate once the CSR is approved and the certificate is issued Some signers store multiple certificates into the status certificate field In that case the documentation for the signer should specify the meaning of additional certificates for example this might be the certificate plus intermediates to be presented during TLS handshakes If you want to make the trust anchor root certificate available this should be done separately from a CertificateSigningRequest and its status certificate field For example you could use a ClusterTrustBundle The PKCS 10 signing request format does not have a standard mechanism to specify a certificate expiration or lifetime The expiration or lifetime therefore has to be set through the spec expirationSeconds field of the CSR object The built in signers use the ClusterSigningDuration configuration option which defaults to 1 year the cluster signing duration command line flag of the kube controller manager as the default when no spec expirationSeconds is specified When spec expirationSeconds is specified the minimum of spec expirationSeconds and ClusterSigningDuration is used The spec expirationSeconds field was added in Kubernetes v1 22 Earlier versions of Kubernetes do not honor this field Kubernetes API servers prior to v1 22 will silently drop this 
field when the object is created Kubernetes signers Kubernetes provides built in signers that each have a well known signerName 1 kubernetes io kube apiserver client signs certificates that will be honored as client certificates by the API server Never auto approved by 1 Trust distribution signed certificates must be honored as client certificates by the API server The CA bundle is not distributed by any other means 1 Permitted subjects no subject restrictions but approvers and signers may choose not to approve or sign Certain subjects like cluster admin level users or groups vary between distributions and installations but deserve additional scrutiny before approval and signing The CertificateSubjectRestriction admission plugin is enabled by default to restrict system masters but it is often not the only cluster admin subject in a cluster 1 Permitted x509 extensions honors subjectAltName and key usage extensions and discards other extensions 1 Permitted key usages must include client auth Must not include key usages beyond digital signature key encipherment client auth 1 Expiration certificate lifetime for the kube controller manager implementation of this signer set to the minimum of the cluster signing duration option or if specified the spec expirationSeconds field of the CSR object 1 CA bit allowed disallowed not allowed 1 kubernetes io kube apiserver client kubelet signs client certificates that will be honored as client certificates by the API server May be auto approved by 1 Trust distribution signed certificates must be honored as client certificates by the API server The CA bundle is not distributed by any other means 1 Permitted subjects organizations are exactly system nodes common name is system node NODE NAME 1 Permitted x509 extensions honors key usage extensions forbids subjectAltName extensions and drops other extensions 1 Permitted key usages key encipherment digital signature client auth or digital signature client auth 1 Expiration certificate lifetime for the kube controller manager implementation of this signer set to the minimum of the cluster signing duration option or if specified the spec expirationSeconds field of the CSR object 1 CA bit allowed disallowed not allowed 1 kubernetes io kubelet serving signs serving certificates that are honored as a valid kubelet serving certificate by the API server but has no other guarantees Never auto approved by 1 Trust distribution signed certificates must be honored by the API server as valid to terminate connections to a kubelet The CA bundle is not distributed by any other means 1 Permitted subjects organizations are exactly system nodes common name is system node NODE NAME 1 Permitted x509 extensions honors key usage and DNSName IPAddress subjectAltName extensions forbids EmailAddress and URI subjectAltName extensions drops other extensions At least one DNS or IP subjectAltName must be present 1 Permitted key usages key encipherment digital signature server auth or digital signature server auth 1 Expiration certificate lifetime for the kube controller manager implementation of this signer set to the minimum of the cluster signing duration option or if specified the spec expirationSeconds field of the CSR object 1 CA bit allowed disallowed not allowed 1 kubernetes io legacy unknown has no guarantees for trust at all Some third party distributions of Kubernetes may honor client certificates signed by it The stable CertificateSigningRequest API version certificates k8s io v1 and later does not allow to set the signerName as 
kubernetes io legacy unknown Never auto approved by 1 Trust distribution None There is no standard trust or distribution for this signer in a Kubernetes cluster 1 Permitted subjects any 1 Permitted x509 extensions honors subjectAltName and key usage extensions and discards other extensions 1 Permitted key usages any 1 Expiration certificate lifetime for the kube controller manager implementation of this signer set to the minimum of the cluster signing duration option or if specified the spec expirationSeconds field of the CSR object 1 CA bit allowed disallowed not allowed The kube controller manager implements control plane signing signer control plane for each of the built in signers Failures for all of these are only reported in kube controller manager logs The spec expirationSeconds field was added in Kubernetes v1 22 Earlier versions of Kubernetes do not honor this field Kubernetes API servers prior to v1 22 will silently drop this field when the object is created Distribution of trust happens out of band for these signers Any trust outside of those described above are strictly coincidental For instance some distributions may honor kubernetes io legacy unknown as client certificates for the kube apiserver but this is not a standard None of these usages are related to ServiceAccount token secrets data ca crt in any way That CA bundle is only guaranteed to verify a connection to the API server using the default service kubernetes default svc Custom signers You can also introduce your own custom signer which should have a similar prefixed name but using your own domain name For example if you represent an open source project that uses the domain open fictional example then you might use issuer open fictional example service mesh as a signer name A custom signer uses the Kubernetes API to issue a certificate See API based signers signer api Signing Control plane signer signer control plane The Kubernetes control plane implements each of the Kubernetes signers docs reference access authn authz certificate signing requests kubernetes signers as part of the kube controller manager Prior to Kubernetes v1 18 the kube controller manager would sign any CSRs that were marked as approved The spec expirationSeconds field was added in Kubernetes v1 22 Earlier versions of Kubernetes do not honor this field Kubernetes API servers prior to v1 22 will silently drop this field when the object is created API based signers signer api Users of the REST API can sign CSRs by submitting an UPDATE request to the status subresource of the CSR to be signed As part of this request the status certificate field should be set to contain the signed certificate This field contains one or more PEM encoded certificates All PEM blocks must have the CERTIFICATE label contain no headers and the encoded data must be a BER encoded ASN 1 Certificate structure as described in section 4 of RFC5280 https tools ietf org html rfc5280 section 4 1 Example certificate content BEGIN CERTIFICATE MIIDgjCCAmqgAwIBAgIUC1N1EJ4Qnsd322BhDPRwmg3b oAwDQYJKoZIhvcNAQEL BQAwXDELMAkGA1UEBhMCeHgxCjAIBgNVBAgMAXgxCjAIBgNVBAcMAXgxCjAIBgNV BAoMAXgxCjAIBgNVBAsMAXgxCzAJBgNVBAMMAmNhMRAwDgYJKoZIhvcNAQkBFgF4 MB4XDTIwMDcwNjIyMDcwMFoXDTI1MDcwNTIyMDcwMFowNzEVMBMGA1UEChMMc3lz dGVtOm5vZGVzMR4wHAYDVQQDExVzeXN0ZW06bm9kZToxMjcuMC4wLjEwggEiMA0G CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDne5X2eQ1JcLZkKvhzCR4Hxl9 ZmU3 e1zfOywLdoQxrPi o4hVsUH3q0y52BMa7u1yehHDRSaq9u62cmi5ekgXhXHzGmm kmW5n0itRECv3SFsSm2DSghRKf0mm6iTYHWDHzUXKdm9lPPWoSOxoR5oqOsm3JEh Q7Et13wrvTJqBMJo1GTwQuF HYOku0NF 
DLqbZIcpI08yQKyrBgYz2uO51 oNp8a sTCsV4OUfyHhx2BBLUo4g4SptHFySTBwlpRWBnSjZPOhmN74JcpTLB4J5f4iEeA7 2QytZfADckG4wVkhH3C2EJUmRtFIBVirwDn39GXkSGlnvnMgF3uLZ6zNAgMBAAGj YTBfMA4GA1UdDwEB wQEAwIFoDATBgNVHSUEDDAKBggrBgEFBQcDAjAMBgNVHRMB Af8EAjAAMB0GA1UdDgQWBBTREl2hW54lkQBDeVCcd2f2VSlB1DALBgNVHREEBDAC ggAwDQYJKoZIhvcNAQELBQADggEBABpZjuIKTq8pCaX8dMEGPWtAykgLsTcD2jYr L0 TCrqmuaaliUa42jQTt2OVsVP L8ofFunj KjpQU0bvKJPLMRKtmxbhXuQCQi1 qCRkp8o93mHvEz3mTUN D1cfQ2fpsBENLnpS0F4G JyY2Vrh19 X8 mImMEK5eOy o0BMby7byUj98WmcUvNCiXbC6F45QTmkwEhMqWns0JZQY XeDhEcg lJvz9Eyo2 aGgPsye1o3DpyXnyfJWAWMhOz7cikS5X2adesbgI86PhEHBXPIJ1v13ZdfCExmdd M1fLPhLyR54fGaY 7 X8P9AZzPefAkwizeXwe9ii6 a08vWoiE4 END CERTIFICATE Non PEM content may appear before or after the CERTIFICATE PEM blocks and is unvalidated to allow for explanatory text as described in section 5 2 of RFC7468 https www rfc editor org rfc rfc7468 section 5 2 When encoded in JSON or YAML this field is base 64 encoded A CertificateSigningRequest containing the example certificate above would look like this yaml apiVersion certificates k8s io v1 kind CertificateSigningRequest status certificate LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JS Approval or rejection approval rejection Before a signer signers issues a certificate based on a CertificateSigningRequest the signer typically checks that the issuance for that CSR has been approved Control plane automated approval approval rejection control plane The kube controller manager ships with a built in approver for certificates with a signerName of kubernetes io kube apiserver client kubelet that delegates various permissions on CSRs for node credentials to authorization The kube controller manager POSTs SubjectAccessReview resources to the API server in order to check authorization for certificate approval Approval or rejection using kubectl approval rejection kubectl A Kubernetes administrator with appropriate permissions can manually approve or deny CertificateSigningRequests by using the kubectl certificate approve and kubectl certificate deny commands To approve a CSR with kubectl shell kubectl certificate approve certificate signing request name Likewise to deny a CSR shell kubectl certificate deny certificate signing request name Approval or rejection using the Kubernetes API approval rejection api client Users of the REST API can approve CSRs by submitting an UPDATE request to the approval subresource of the CSR to be approved For example you could write an that watches for a particular kind of CSR and then sends an UPDATE to approve them When you make an approval or rejection request set either the Approved or Denied status condition based on the state you determine For Approved CSRs yaml apiVersion certificates k8s io v1 kind CertificateSigningRequest status conditions lastUpdateTime 2020 02 08T11 37 35Z lastTransitionTime 2020 02 08T11 37 35Z message Approved by my custom approver controller reason ApprovedByMyPolicy You can set this to any string type Approved For Denied CSRs yaml apiVersion certificates k8s io v1 kind CertificateSigningRequest status conditions lastUpdateTime 2020 02 08T11 37 35Z lastTransitionTime 2020 02 08T11 37 35Z message Denied by my custom approver controller reason DeniedByMyPolicy You can set this to any string type Denied It s usual to set status conditions reason to a machine friendly reason code using TitleCase this is a convention but you can set it to anything you like If you want to add a note for human consumption use the status conditions message field Cluster trust bundles cluster 
trust bundles In Kubernetes you must enable the ClusterTrustBundle feature gate docs reference command line tools reference feature gates and the certificates k8s io v1alpha1 in order to use this API A ClusterTrustBundles is a cluster scoped object for distributing X 509 trust anchors root certificates to workloads within the cluster They re designed to work well with the signer signers concept from CertificateSigningRequests ClusterTrustBundles can be used in two modes signer linked ctb signer linked and signer unlinked ctb signer unlinked Common properties and validation ctb common All ClusterTrustBundle objects have strong validation on the contents of their trustBundle field That field must contain one or more X 509 certificates DER serialized each wrapped in a PEM CERTIFICATE block The certificates must parse as valid X 509 certificates Esoteric PEM features like inter block data and intra block headers are either rejected during object validation or can be ignored by consumers of the object Additionally consumers are allowed to reorder the certificates in the bundle with their own arbitrary but stable ordering ClusterTrustBundle objects should be considered world readable within the cluster If your cluster uses RBAC docs reference access authn authz rbac authorization all ServiceAccounts have a default grant that allows them to get list and watch all ClusterTrustBundle objects If you use your own authorization mechanism and you have enabled ClusterTrustBundles in your cluster you should set up an equivalent rule to make these objects public within the cluster so that they work as intended If you do not have permission to list cluster trust bundles by default in your cluster you can impersonate a service account you have access to in order to see available ClusterTrustBundles bash kubectl get clustertrustbundles as system serviceaccount mynamespace default Signer linked ClusterTrustBundles ctb signer linked Signer linked ClusterTrustBundles are associated with a signer name like this yaml apiVersion certificates k8s io v1alpha1 kind ClusterTrustBundle metadata name example com mysigner foo spec signerName example com mysigner trustBundle PEM data These ClusterTrustBundles are intended to be maintained by a signer specific controller in the cluster so they have several security features To create or update a signer linked ClusterTrustBundle you must be permitted to attest on the signer custom authorization verb attest API group certificates k8s io resource path signers You can configure authorization for the specific resource name signerNameDomain signerNamePath or match a pattern such as signerNameDomain Signer linked ClusterTrustBundles must be named with a prefix derived from their spec signerName field Slashes are replaced with colons and a final colon is appended This is followed by an arbitrary name For example the signer example com mysigner can be linked to a ClusterTrustBundle example com mysigner arbitrary name Signer linked ClusterTrustBundles will typically be consumed in workloads by a combination of a field selector docs concepts overview working with objects field selectors on the signer name and a separate label selector docs concepts overview working with objects labels label selectors Signer unlinked ClusterTrustBundles ctb signer unlinked Signer unlinked ClusterTrustBundles have an empty spec signerName field like this yaml apiVersion certificates k8s io v1alpha1 kind ClusterTrustBundle metadata name foo spec no signerName specified so the field is blank trustBundle 
PEM data They are primarily intended for cluster configuration use cases Each signer unlinked ClusterTrustBundle is an independent object in contrast to the customary grouping behavior of signer linked ClusterTrustBundles Signer unlinked ClusterTrustBundles have no attest verb requirement Instead you control access to them directly using the usual mechanisms such as role based access control To distinguish them from signer linked ClusterTrustBundles the names of signer unlinked ClusterTrustBundles must not contain a colon Accessing ClusterTrustBundles from pods ctb projection The contents of ClusterTrustBundles can be injected into the container filesystem similar to ConfigMaps and Secrets See the clusterTrustBundle projected volume source docs concepts storage projected volumes clustertrustbundle for more details TODO this should become a task page How to issue a certificate for a user normal user A few steps are required in order to get a normal user to be able to authenticate and invoke an API First this user must have a certificate issued by the Kubernetes cluster and then present that certificate to the Kubernetes API Create private key The following scripts show how to generate PKI private key and CSR It is important to set CN and O attribute of the CSR CN is the name of the user and O is the group that this user will belong to You can refer to RBAC docs reference access authn authz rbac for standard groups shell openssl genrsa out myuser key 2048 openssl req new key myuser key out myuser csr subj CN myuser Create a CertificateSigningRequest create certificatessigningrequest Create a CertificateSigningRequest docs reference kubernetes api authentication resources certificate signing request v1 and submit it to a Kubernetes Cluster via kubectl Below is a script to generate the CertificateSigningRequest shell cat EOF kubectl apply f apiVersion certificates k8s io v1 kind CertificateSigningRequest metadata name myuser spec request LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZqQ0NBVDRDQVFBd0VURVBNQTBHQTFVRUF3d0dZVzVuWld4aE1JSUJJakFOQmdrcWhraUc5dzBCQVFFRgpBQU9DQVE4QU1JSUJDZ0tDQVFFQTByczhJTHRHdTYxakx2dHhWTTJSVlRWMDNHWlJTWWw0dWluVWo4RElaWjBOCnR2MUZtRVFSd3VoaUZsOFEzcWl0Qm0wMUFSMkNJVXBGd2ZzSjZ4MXF3ckJzVkhZbGlBNVhwRVpZM3ExcGswSDQKM3Z3aGJlK1o2MVNrVHF5SVBYUUwrTWM5T1Nsbm0xb0R2N0NtSkZNMUlMRVI3QTVGZnZKOEdFRjJ6dHBoaUlFMwpub1dtdHNZb3JuT2wzc2lHQ2ZGZzR4Zmd4eW8ybmlneFNVekl1bXNnVm9PM2ttT0x1RVF6cXpkakJ3TFJXbWlECklmMXBMWnoyalVnald4UkhCM1gyWnVVV1d1T09PZnpXM01LaE8ybHEvZi9DdS8wYk83c0x0MCt3U2ZMSU91TFcKcW90blZtRmxMMytqTy82WDNDKzBERHk5aUtwbXJjVDBnWGZLemE1dHJRSURBUUFCb0FBd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBR05WdmVIOGR4ZzNvK21VeVRkbmFjVmQ1N24zSkExdnZEU1JWREkyQTZ1eXN3ZFp1L1BVCkkwZXpZWFV0RVNnSk1IRmQycVVNMjNuNVJsSXJ3R0xuUXFISUh5VStWWHhsdnZsRnpNOVpEWllSTmU3QlJvYXgKQVlEdUI5STZXT3FYbkFvczFqRmxNUG5NbFpqdU5kSGxpT1BjTU1oNndLaTZzZFhpVStHYTJ2RUVLY01jSVUyRgpvU2djUWdMYTk0aEpacGk3ZnNMdm1OQUxoT045UHdNMGM1dVJVejV4T0dGMUtCbWRSeEgvbUNOS2JKYjFRQm1HCkkwYitEUEdaTktXTU0xMzhIQXdoV0tkNjVoVHdYOWl4V3ZHMkh4TG1WQzg0L1BHT0tWQW9FNkpsYWFHdTlQVmkKdjlOSjVaZlZrcXdCd0hKbzZXdk9xVlA3SVFjZmg3d0drWm89Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo signerName kubernetes io kube apiserver client expirationSeconds 86400 one day usages client auth EOF Some points to note usages has to be client auth expirationSeconds could be made longer i e 864000 for ten days or shorter i e 3600 for one hour request is the base64 encoded value of the CSR file content You can get the content using this command shell cat myuser csr base64 tr d n Approve the 
CertificateSigningRequest approve certificate signing request Use kubectl to create a CSR and approve it Get the list of CSRs shell kubectl get csr Approve the CSR shell kubectl certificate approve myuser Get the certificate Retrieve the certificate from the CSR shell kubectl get csr myuser o yaml The certificate value is in Base64 encoded format under status certificate Export the issued certificate from the CertificateSigningRequest shell kubectl get csr myuser o jsonpath status certificate base64 d myuser crt Create Role and RoleBinding With the certificate created it is time to define the Role and RoleBinding for this user to access Kubernetes cluster resources This is a sample command to create a Role for this new user shell kubectl create role developer verb create verb get verb list verb update verb delete resource pods This is a sample command to create a RoleBinding for this new user shell kubectl create rolebinding developer binding myuser role developer user myuser Add to kubeconfig The last step is to add this user into the kubeconfig file First you need to add new credentials shell kubectl config set credentials myuser client key myuser key client certificate myuser crt embed certs true Then you need to add the context shell kubectl config set context myuser cluster kubernetes user myuser To test it change the context to myuser shell kubectl config use context myuser Read Manage TLS Certificates in a Cluster docs tasks tls managing tls in a cluster View the source code for the kube controller manager built in signer https github com kubernetes kubernetes blob 32ec6c212ec9415f604ffc1f4c1f29b782968ff1 pkg controller certificates signer cfssl signer go View the source code for the kube controller manager built in approver https github com kubernetes kubernetes blob 32ec6c212ec9415f604ffc1f4c1f29b782968ff1 pkg controller certificates approver sarapprove go For details of X 509 itself refer to RFC 5280 https tools ietf org html rfc5280 section 3 1 section 3 1 For information on the syntax of PKCS 10 certificate signing requests refer to RFC 2986 https tools ietf org html rfc2986 Read about the ClusterTrustBundle API |
kubernetes reference title Dynamic Admission Control liggitt deads2k contenttype concept reviewers caesarxuchao jpbetz lavalamp smarterclayton | ---
reviewers:
- smarterclayton
- lavalamp
- caesarxuchao
- deads2k
- liggitt
- jpbetz
title: Dynamic Admission Control
content_type: concept
weight: 45
---
<!-- overview -->
In addition to [compiled-in admission plugins](/docs/reference/access-authn-authz/admission-controllers/),
admission plugins can be developed as extensions and run as webhooks configured at runtime.
This page describes how to build, configure, use, and monitor admission webhooks.
<!-- body -->
## What are admission webhooks?
Admission webhooks are HTTP callbacks that receive admission requests and do
something with them. You can define two types of admission webhooks,
[validating admission webhook](/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook)
and
[mutating admission webhook](/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook).
Mutating admission webhooks are invoked first, and can modify objects sent to the API server to enforce custom defaults.
After all object modifications are complete, and after the incoming object is validated by the API server,
validating admission webhooks are invoked and can reject requests to enforce custom policies.
Admission webhooks that need to guarantee they see the final state of the object in order to enforce policy
should use a validating admission webhook, since objects can be modified after being seen by mutating webhooks.
## Experimenting with admission webhooks
Admission webhooks are essentially part of the cluster control-plane. You should
write and deploy them with great caution. Please read the
[user guides](/docs/reference/access-authn-authz/extensible-admission-controllers/#write-an-admission-webhook-server)
for instructions if you intend to write/deploy production-grade admission webhooks.
In the following, we describe how to quickly experiment with admission webhooks.
### Prerequisites
* Ensure that MutatingAdmissionWebhook and ValidatingAdmissionWebhook
admission controllers are enabled.
[Here](/docs/reference/access-authn-authz/admission-controllers/#is-there-a-recommended-set-of-admission-controllers-to-use)
is a recommended set of admission controllers to enable in general.
* Ensure that the `admissionregistration.k8s.io/v1` API is enabled.
### Write an admission webhook server
Please refer to the implementation of the [admission webhook server](https://github.com/kubernetes/kubernetes/blob/release-1.21/test/images/agnhost/webhook/main.go)
that is validated in a Kubernetes e2e test. The webhook handles the
`AdmissionReview` request sent by the API servers, and sends back its decision
as an `AdmissionReview` object in the same version it received.
See the [webhook request](#request) section for details on the data sent to webhooks.
See the [webhook response](#response) section for the data expected from webhooks.
The example admission webhook server leaves the `ClientAuth` field
[empty](https://github.com/kubernetes/kubernetes/blob/v1.22.0/test/images/agnhost/webhook/config.go#L38-L39),
which defaults to `NoClientCert`. This means that the webhook server does not
authenticate the identity of the clients, supposedly API servers. If you need
mutual TLS or other ways to authenticate the clients, see
how to [authenticate API servers](#authenticate-apiservers).
### Deploy the admission webhook service
The webhook server in the e2e test is deployed in the Kubernetes cluster, via
the [deployment API](/docs/reference/generated/kubernetes-api//#deployment-v1-apps).
The test also creates a [service](/docs/reference/generated/kubernetes-api//#service-v1-core)
as the front-end of the webhook server. See
[code](https://github.com/kubernetes/kubernetes/blob/v1.22.0/test/e2e/apimachinery/webhook.go#L748).
You may also deploy your webhooks outside of the cluster. You will need to update
your webhook configurations accordingly.
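For a webhook running outside the cluster, the `clientConfig` refers to the webhook by URL rather than by Service. A minimal sketch of that part of a configuration (the hostname and path are placeholders):

```yaml
webhooks:
- name: pod-policy.example.com
  clientConfig:
    url: "https://my-webhook.example.com:8443/validate"
    caBundle: <CA_BUNDLE>
```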
### Configure admission webhooks on the fly
You can dynamically configure what resources are subject to what admission
webhooks via
[ValidatingWebhookConfiguration](/docs/reference/generated/kubernetes-api//#validatingwebhookconfiguration-v1-admissionregistration-k8s-io)
or
[MutatingWebhookConfiguration](/docs/reference/generated/kubernetes-api//#mutatingwebhookconfiguration-v1-admissionregistration-k8s-io).
The following is an example `ValidatingWebhookConfiguration`; a mutating webhook configuration is similar.
See the [webhook configuration](#webhook-configuration) section for details about each config field.
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
name: "pod-policy.example.com"
webhooks:
- name: "pod-policy.example.com"
rules:
- apiGroups: [""]
apiVersions: ["v1"]
operations: ["CREATE"]
resources: ["pods"]
scope: "Namespaced"
clientConfig:
service:
namespace: "example-namespace"
name: "example-service"
caBundle: <CA_BUNDLE>
admissionReviewVersions: ["v1"]
sideEffects: None
timeoutSeconds: 5
```
You must replace the `<CA_BUNDLE>` in the above example with a valid CA bundle:
a PEM-encoded CA certificate bundle (the field value itself is Base64 encoded) used to validate the webhook's server certificate.
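For example, if the CA certificate is stored in a local `ca.crt` file (an assumption for this sketch), you could produce the value to substitute like this:

```shell
CA_BUNDLE=$(base64 < ca.crt | tr -d '\n')
```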
The `scope` field specifies if only cluster-scoped resources ("Cluster") or namespace-scoped
resources ("Namespaced") will match this rule. "*" means that there are no scope restrictions.
When using `clientConfig.service`, the server cert must be valid for
`<svc_name>.<svc_namespace>.svc`.
The default timeout for a webhook call is 10 seconds.
You can set the `timeoutSeconds` field to change this, and it is encouraged to use a short timeout for webhooks.
If the webhook call times out, the request is handled according to the webhook's
failure policy.
When an API server receives a request that matches one of the `rules`, the
API server sends an `admissionReview` request to the webhook, as specified in the
`clientConfig`.
After you create the webhook configuration, the system will take a few seconds
to honor the new configuration.
### Authenticate API servers {#authenticate-apiservers}
If your admission webhooks require authentication, you can configure the
API servers to use basic auth, bearer token, or a cert to authenticate themselves to
the webhooks. There are three steps to complete the configuration.
* When starting the API server, specify the location of the admission control
configuration file via the `--admission-control-config-file` flag.
* In the admission control configuration file, specify where the
MutatingAdmissionWebhook controller and ValidatingAdmissionWebhook controller
should read the credentials. The credentials are stored in kubeConfig files
(yes, the same schema that's used by kubectl), so the field name is
`kubeConfigFile`. Here is an example admission control configuration file:
```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ValidatingAdmissionWebhook
configuration:
apiVersion: apiserver.config.k8s.io/v1
kind: WebhookAdmissionConfiguration
kubeConfigFile: "<path-to-kubeconfig-file>"
- name: MutatingAdmissionWebhook
configuration:
apiVersion: apiserver.config.k8s.io/v1
kind: WebhookAdmissionConfiguration
kubeConfigFile: "<path-to-kubeconfig-file>"
```
```yaml
# Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1
apiVersion: apiserver.k8s.io/v1alpha1
kind: AdmissionConfiguration
plugins:
- name: ValidatingAdmissionWebhook
configuration:
# Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1, kind=WebhookAdmissionConfiguration
apiVersion: apiserver.config.k8s.io/v1alpha1
kind: WebhookAdmission
kubeConfigFile: "<path-to-kubeconfig-file>"
- name: MutatingAdmissionWebhook
configuration:
# Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1, kind=WebhookAdmissionConfiguration
apiVersion: apiserver.config.k8s.io/v1alpha1
kind: WebhookAdmission
kubeConfigFile: "<path-to-kubeconfig-file>"
```
For more information about `AdmissionConfiguration`, see the
[AdmissionConfiguration (v1) reference](/docs/reference/config-api/apiserver-webhookadmission.v1/).
See the [webhook configuration](#webhook-configuration) section for details about each config field.
In the kubeConfig file, provide the credentials:
```yaml
apiVersion: v1
kind: Config
users:
# name should be set to the DNS name of the service or the host (including port) of the URL the webhook is configured to speak to.
# If a non-443 port is used for services, it must be included in the name when configuring 1.16+ API servers.
#
# For a webhook configured to speak to a service on the default port (443), specify the DNS name of the service:
# - name: webhook1.ns1.svc
# user: ...
#
# For a webhook configured to speak to a service on non-default port (e.g. 8443), specify the DNS name and port of the service in 1.16+:
# - name: webhook1.ns1.svc:8443
# user: ...
# and optionally create a second stanza using only the DNS name of the service for compatibility with 1.15 API servers:
# - name: webhook1.ns1.svc
# user: ...
#
# For webhooks configured to speak to a URL, match the host (and port) specified in the webhook's URL. Examples:
# A webhook with `url: https://www.example.com`:
# - name: www.example.com
# user: ...
#
# A webhook with `url: https://www.example.com:443`:
# - name: www.example.com:443
# user: ...
#
# A webhook with `url: https://www.example.com:8443`:
# - name: www.example.com:8443
# user: ...
#
- name: 'webhook1.ns1.svc'
user:
client-certificate-data: "<pem encoded certificate>"
client-key-data: "<pem encoded key>"
# The `name` supports using * to wildcard-match prefixing segments.
- name: '*.webhook-company.org'
user:
password: "<password>"
username: "<name>"
# '*' is the default match.
- name: '*'
user:
token: "<token>"
```
Of course you need to set up the webhook server to handle these authentication requests.
## Webhook request and response
### Request
Webhooks are sent as POST requests, with `Content-Type: application/json`,
with an `AdmissionReview` API object in the `admission.k8s.io` API group
serialized to JSON as the body.
Webhooks can specify what versions of `AdmissionReview` objects they accept
with the `admissionReviewVersions` field in their configuration:
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
webhooks:
- name: my-webhook.example.com
admissionReviewVersions: ["v1", "v1beta1"]
```
`admissionReviewVersions` is a required field when creating webhook configurations.
Webhooks are required to support at least one `AdmissionReview`
version understood by the current and previous API server.
API servers send the first `AdmissionReview` version in the `admissionReviewVersions` list they support.
If none of the versions in the list are supported by the API server, the configuration will not be allowed to be created.
If an API server encounters a webhook configuration that was previously created and does not support any of the `AdmissionReview`
versions the API server knows how to send, attempts to call to the webhook will fail and be subject to the [failure policy](#failure-policy).
This example shows the data contained in an `AdmissionReview` object
for a request to update the `scale` subresource of an `apps/v1` `Deployment`:
```yaml
apiVersion: admission.k8s.io/v1
kind: AdmissionReview
request:
# Random uid uniquely identifying this admission call
uid: 705ab4f5-6393-11e8-b7cc-42010a800002
# Fully-qualified group/version/kind of the incoming object
kind:
group: autoscaling
version: v1
kind: Scale
# Fully-qualified group/version/kind of the resource being modified
resource:
group: apps
version: v1
resource: deployments
# subresource, if the request is to a subresource
subResource: scale
# Fully-qualified group/version/kind of the incoming object in the original request to the API server.
# This only differs from `kind` if the webhook specified `matchPolicy: Equivalent` and the
# original request to the API server was converted to a version the webhook registered for.
requestKind:
group: autoscaling
version: v1
kind: Scale
# Fully-qualified group/version/kind of the resource being modified in the original request to the API server.
# This only differs from `resource` if the webhook specified `matchPolicy: Equivalent` and the
# original request to the API server was converted to a version the webhook registered for.
requestResource:
group: apps
version: v1
resource: deployments
# subresource, if the request is to a subresource
# This only differs from `subResource` if the webhook specified `matchPolicy: Equivalent` and the
# original request to the API server was converted to a version the webhook registered for.
requestSubResource: scale
# Name of the resource being modified
name: my-deployment
# Namespace of the resource being modified, if the resource is namespaced (or is a Namespace object)
namespace: my-namespace
# operation can be CREATE, UPDATE, DELETE, or CONNECT
operation: UPDATE
userInfo:
# Username of the authenticated user making the request to the API server
username: admin
# UID of the authenticated user making the request to the API server
uid: 014fbff9a07c
# Group memberships of the authenticated user making the request to the API server
groups:
- system:authenticated
- my-admin-group
# Arbitrary extra info associated with the user making the request to the API server.
# This is populated by the API server authentication layer and should be included
# if any SubjectAccessReview checks are performed by the webhook.
extra:
some-key:
- some-value1
- some-value2
# object is the new object being admitted.
# It is null for DELETE operations.
object:
apiVersion: autoscaling/v1
kind: Scale
# oldObject is the existing object.
# It is null for CREATE and CONNECT operations.
oldObject:
apiVersion: autoscaling/v1
kind: Scale
# options contains the options for the operation being admitted, like meta.k8s.io/v1 CreateOptions, UpdateOptions, or DeleteOptions.
# It is null for CONNECT operations.
options:
apiVersion: meta.k8s.io/v1
kind: UpdateOptions
# dryRun indicates the API request is running in dry run mode and will not be persisted.
# Webhooks with side effects should avoid actuating those side effects when dryRun is true.
# See http://k8s.io/docs/reference/using-api/api-concepts/#make-a-dry-run-request for more details.
dryRun: False
```
### Response
Webhooks respond with a 200 HTTP status code, `Content-Type: application/json`,
and a body containing an `AdmissionReview` object (in the same version they were sent),
with the `response` stanza populated, serialized to JSON.
At a minimum, the `response` stanza must contain the following fields:
* `uid`, copied from the `request.uid` sent to the webhook
* `allowed`, either set to `true` or `false`
Example of a minimal response from a webhook to allow a request:
```json
{
"apiVersion": "admission.k8s.io/v1",
"kind": "AdmissionReview",
"response": {
"uid": "<value from request.uid>",
"allowed": true
}
}
```
Example of a minimal response from a webhook to forbid a request:
```json
{
"apiVersion": "admission.k8s.io/v1",
"kind": "AdmissionReview",
"response": {
"uid": "<value from request.uid>",
"allowed": false
}
}
```
When rejecting a request, the webhook can customize the HTTP status code and message returned to the user
using the `status` field. The specified status object is returned to the user.
See the [API documentation](/docs/reference/generated/kubernetes-api//#status-v1-meta)
for details about the `status` type.
Example of a response to forbid a request, customizing the HTTP status code and message presented to the user:
```json
{
"apiVersion": "admission.k8s.io/v1",
"kind": "AdmissionReview",
"response": {
"uid": "<value from request.uid>",
"allowed": false,
"status": {
"code": 403,
"message": "You cannot do this because it is Tuesday and your name starts with A"
}
}
}
```
When allowing a request, a mutating admission webhook may optionally modify the incoming object as well.
This is done using the `patch` and `patchType` fields in the response.
The only currently supported `patchType` is `JSONPatch`.
See [JSON patch](https://jsonpatch.com/) documentation for more details.
For `patchType: JSONPatch`, the `patch` field contains a base64-encoded array of JSON patch operations.
As an example, a single patch operation that would set `spec.replicas` would be
`[{"op": "add", "path": "/spec/replicas", "value": 3}]`
Base64-encoded, this would be `W3sib3AiOiAiYWRkIiwgInBhdGgiOiAiL3NwZWMvcmVwbGljYXMiLCAidmFsdWUiOiAzfV0=`
So a webhook response that applies that patch would be:
```json
{
"apiVersion": "admission.k8s.io/v1",
"kind": "AdmissionReview",
"response": {
"uid": "<value from request.uid>",
"allowed": true,
"patchType": "JSONPatch",
"patch": "W3sib3AiOiAiYWRkIiwgInBhdGgiOiAiL3NwZWMvcmVwbGljYXMiLCAidmFsdWUiOiAzfV0="
}
}
```
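If the webhook constructs the patch programmatically, the `patch` field is simply the base64 encoding of the marshalled JSON patch array. Here is a minimal Go sketch; the `patchOperation` type is a local convenience, not a Kubernetes API type:
```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// patchOperation models a single JSON patch operation.
type patchOperation struct {
	Op    string      `json:"op"`
	Path  string      `json:"path"`
	Value interface{} `json:"value,omitempty"`
}

func main() {
	patch := []patchOperation{
		{Op: "add", Path: "/spec/replicas", Value: 3},
	}
	raw, err := json.Marshal(patch)
	if err != nil {
		panic(err)
	}
	// json.Marshal emits no spaces, so this base64 text differs slightly from
	// the hand-written example above, but it decodes to an equivalent patch.
	fmt.Println(base64.StdEncoding.EncodeToString(raw))
}
```
When using the `k8s.io/api/admission/v1` types, the response's `patch` field is a `[]byte`, so the base64 encoding happens automatically when the response is serialized to JSON.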
Admission webhooks can optionally return warning messages that are returned to the requesting client
in HTTP `Warning` headers with a warning code of 299. Warnings can be sent with allowed or rejected admission responses.
If you're implementing a webhook that returns a warning:
* Don't include a "Warning:" prefix in the message
* Use warning messages to describe problems the client making the API request should correct or be aware of
* Limit warnings to 120 characters if possible
Individual warning messages over 256 characters may be truncated by the API server before being returned to clients.
If more than 4096 characters of warning messages are added (from all sources), additional warning messages are ignored.
```json
{
"apiVersion": "admission.k8s.io/v1",
"kind": "AdmissionReview",
"response": {
"uid": "<value from request.uid>",
"allowed": true,
"warnings": [
"duplicate envvar entries specified with name MY_ENV",
"memory request less than 4MB specified for container mycontainer, which will not start successfully"
]
}
}
```
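If your webhook emits warnings, it can help to guard against overly long messages before adding them to the response. Here is a minimal Go sketch; the `addWarning` helper and the byte-based trim are purely illustrative:
```go
package main

import "fmt"

// addWarning appends a warning message to a response's warning list,
// trimming it to the suggested 120-character limit (byte-based for simplicity).
// The slice mirrors the response.warnings field shown above.
func addWarning(warnings []string, msg string) []string {
	const limit = 120
	if len(msg) > limit {
		msg = msg[:limit]
	}
	return append(warnings, msg)
}

func main() {
	var warnings []string
	warnings = addWarning(warnings, "duplicate envvar entries specified with name MY_ENV")
	fmt.Println(warnings)
}
```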
## Webhook configuration
To register admission webhooks, create `MutatingWebhookConfiguration` or `ValidatingWebhookConfiguration` API objects.
The name of a `MutatingWebhookConfiguration` or a `ValidatingWebhookConfiguration` object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
Each configuration can contain one or more webhooks.
If multiple webhooks are specified in a single configuration, each must be given a unique name.
This is required in order to make resulting audit logs and metrics easier to match up to active
configurations.
Each webhook defines the following things.
### Matching requests: rules
Each webhook must specify a list of rules used to determine if a request to the API server should be sent to the webhook.
Each rule specifies one or more operations, apiGroups, apiVersions, and resources, and a resource scope:
* `operations` lists one or more operations to match. Can be `"CREATE"`, `"UPDATE"`, `"DELETE"`, `"CONNECT"`,
or `"*"` to match all.
* `apiGroups` lists one or more API groups to match. `""` is the core API group. `"*"` matches all API groups.
* `apiVersions` lists one or more API versions to match. `"*"` matches all API versions.
* `resources` lists one or more resources to match.
* `"*"` matches all resources, but not subresources.
* `"*/*"` matches all resources and subresources.
* `"pods/*"` matches all subresources of pods.
* `"*/status"` matches all status subresources.
* `scope` specifies a scope to match. Valid values are `"Cluster"`, `"Namespaced"`, and `"*"`.
Subresources match the scope of their parent resource. Default is `"*"`.
* `"Cluster"` means that only cluster-scoped resources will match this rule (Namespace API objects are cluster-scoped).
* `"Namespaced"` means that only namespaced resources will match this rule.
* `"*"` means that there are no scope restrictions.
If an incoming request matches one of the specified `operations`, `groups`, `versions`,
`resources`, and `scope` for any of a webhook's `rules`, the request is sent to the webhook.
Here are other examples of rules that could be used to specify which resources should be intercepted.
Match `CREATE` or `UPDATE` requests to `apps/v1` and `apps/v1beta1` `deployments` and `replicasets`:
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
...
webhooks:
- name: my-webhook.example.com
rules:
- operations: ["CREATE", "UPDATE"]
apiGroups: ["apps"]
apiVersions: ["v1", "v1beta1"]
resources: ["deployments", "replicasets"]
scope: "Namespaced"
...
```
Match create requests for all resources (but not subresources) in all API groups and versions:
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
webhooks:
- name: my-webhook.example.com
rules:
- operations: ["CREATE"]
apiGroups: ["*"]
apiVersions: ["*"]
resources: ["*"]
scope: "*"
```
Match update requests for all `status` subresources in all API groups and versions:
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
webhooks:
- name: my-webhook.example.com
rules:
- operations: ["UPDATE"]
apiGroups: ["*"]
apiVersions: ["*"]
resources: ["*/status"]
scope: "*"
```
### Matching requests: objectSelector
Webhooks may optionally limit which requests are intercepted based on the labels of the
objects they would be sent, by specifying an `objectSelector`. If specified, the objectSelector
is evaluated against both the object and oldObject that would be sent to the webhook,
and is considered to match if either object matches the selector.
A null object (`oldObject` in the case of create, or `object` in the case of delete),
or an object that cannot have labels (like a `DeploymentRollback` or a `PodProxyOptions` object)
is not considered to match.
Use the object selector only if the webhook is opt-in, because end users may skip
the admission webhook by setting the labels.
This example shows a mutating webhook that would match a `CREATE` of any resource (but not subresources) with the label `foo: bar`:
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
webhooks:
- name: my-webhook.example.com
objectSelector:
matchLabels:
foo: bar
rules:
- operations: ["CREATE"]
apiGroups: ["*"]
apiVersions: ["*"]
resources: ["*"]
scope: "*"
```
See [labels concept](/docs/concepts/overview/working-with-objects/labels)
for more examples of label selectors.
### Matching requests: namespaceSelector
Webhooks may optionally limit which requests for namespaced resources are intercepted,
based on the labels of the containing namespace, by specifying a `namespaceSelector`.
The `namespaceSelector` decides whether to run the webhook on a request for a namespaced resource
(or a Namespace object), based on whether the namespace's labels match the selector.
If the object itself is a namespace, the matching is performed on `object.metadata.labels`.
If the object is a cluster-scoped resource other than a Namespace, `namespaceSelector` has no effect.
This example shows a mutating webhook that matches a `CREATE` of any namespaced resource inside a namespace
that does not have a "runlevel" label of "0" or "1":
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
webhooks:
- name: my-webhook.example.com
namespaceSelector:
matchExpressions:
- key: runlevel
operator: NotIn
values: ["0","1"]
rules:
- operations: ["CREATE"]
apiGroups: ["*"]
apiVersions: ["*"]
resources: ["*"]
scope: "Namespaced"
```
This example shows a validating webhook that matches a `CREATE` of any namespaced resource inside
a namespace that is associated with the "environment" of "prod" or "staging":
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
webhooks:
- name: my-webhook.example.com
namespaceSelector:
matchExpressions:
- key: environment
operator: In
values: ["prod","staging"]
rules:
- operations: ["CREATE"]
apiGroups: ["*"]
apiVersions: ["*"]
resources: ["*"]
scope: "Namespaced"
```
See [labels concept](/docs/concepts/overview/working-with-objects/labels)
for more examples of label selectors.
### Matching requests: matchPolicy
API servers can make objects available via multiple API groups or versions.
For example, if a webhook only specified a rule for some API groups/versions
(like `apiGroups:["apps"], apiVersions:["v1","v1beta1"]`),
and a request was made to modify the resource via another API group/version (like `extensions/v1beta1`),
the request would not be sent to the webhook.
The `matchPolicy` lets a webhook define how its `rules` are used to match incoming requests.
Allowed values are `Exact` or `Equivalent`.
* `Exact` means a request should be intercepted only if it exactly matches a specified rule.
* `Equivalent` means a request should be intercepted if it modifies a resource listed in `rules`,
even via another API group or version.
In the example given above, the webhook that only registered for `apps/v1` could use `matchPolicy`:
* `matchPolicy: Exact` would mean the `extensions/v1beta1` request would not be sent to the webhook
* `matchPolicy: Equivalent` means the `extensions/v1beta1` request would be sent to the webhook
(with the objects converted to a version the webhook had specified: `apps/v1`)
Specifying `Equivalent` is recommended, and ensures that webhooks continue to intercept the
resources they expect when upgrades enable new versions of the resource in the API server.
When a resource stops being served by the API server, it is no longer considered equivalent to
other versions of that resource that are still served.
For example, `extensions/v1beta1` deployments were first deprecated and then removed (in Kubernetes v1.16).
Since that removal, a webhook with a `apiGroups:["extensions"], apiVersions:["v1beta1"], resources:["deployments"]` rule
does not intercept deployments created via `apps/v1` APIs. For that reason, webhooks should prefer registering
for stable versions of resources.
This example shows a validating webhook that intercepts modifications to deployments (no matter the API group or version),
and is always sent an `apps/v1` `Deployment` object:
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
webhooks:
- name: my-webhook.example.com
matchPolicy: Equivalent
rules:
- operations: ["CREATE","UPDATE","DELETE"]
apiGroups: ["apps"]
apiVersions: ["v1"]
resources: ["deployments"]
scope: "Namespaced"
```
The `matchPolicy` for an admission webhook defaults to `Equivalent`.
### Matching requests: `matchConditions`
You can define _match conditions_ for webhooks if you need fine-grained request filtering. These
conditions are useful if you find that match rules, `objectSelectors`, and `namespaceSelectors` still
don't provide the filtering you need to decide when to call out over HTTP. Match conditions are
[CEL expressions](/docs/reference/using-api/cel/). All match conditions must evaluate to true for the
webhook to be called.
Here is an example illustrating a few different uses for match conditions:
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
webhooks:
- name: my-webhook.example.com
matchPolicy: Equivalent
rules:
- operations: ['CREATE','UPDATE']
apiGroups: ['*']
apiVersions: ['*']
resources: ['*']
failurePolicy: 'Ignore' # Fail-open (optional)
sideEffects: None
clientConfig:
service:
namespace: my-namespace
name: my-webhook
caBundle: '<omitted>'
# You can have up to 64 matchConditions per webhook
matchConditions:
- name: 'exclude-leases' # Each match condition must have a unique name
expression: '!(request.resource.group == "coordination.k8s.io" && request.resource.resource == "leases")' # Match non-lease resources.
- name: 'exclude-kubelet-requests'
expression: '!("system:nodes" in request.userInfo.groups)' # Match requests made by non-node users.
- name: 'rbac' # Skip RBAC requests, which are handled by the second webhook.
expression: 'request.resource.group != "rbac.authorization.k8s.io"'
# This example illustrates the use of the 'authorizer'. The authorization check is more expensive
# than a simple expression, so in this example it is scoped to only RBAC requests by using a second
# webhook. Both webhooks can be served by the same endpoint.
- name: rbac.my-webhook.example.com
matchPolicy: Equivalent
rules:
- operations: ['CREATE','UPDATE']
apiGroups: ['rbac.authorization.k8s.io']
apiVersions: ['*']
resources: ['*']
failurePolicy: 'Fail' # Fail-closed (the default)
sideEffects: None
clientConfig:
service:
namespace: my-namespace
name: my-webhook
caBundle: '<omitted>'
# You can have up to 64 matchConditions per webhook
matchConditions:
- name: 'breakglass'
# Skip requests made by users authorized to 'breakglass' on this webhook.
# The 'breakglass' API verb does not need to exist outside this check.
expression: '!authorizer.group("admissionregistration.k8s.io").resource("validatingwebhookconfigurations").name("my-webhook.example.com").check("breakglass").allowed()'
```
You can define up to 64 elements in the `matchConditions` field per webhook.
Match conditions have access to the following CEL variables:
- `object` - The object from the incoming request. The value is null for DELETE requests. The object
version may be converted based on the [matchPolicy](#matching-requests-matchpolicy).
- `oldObject` - The existing object. The value is null for CREATE requests.
- `request` - The request portion of the [AdmissionReview](#request), excluding `object` and `oldObject`.
- `authorizer` - A CEL Authorizer. May be used to perform authorization checks for the principal
(authenticated user) of the request. See
[Authz](https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#Authz) in the Kubernetes CEL library
documentation for more details.
- `authorizer.requestResource` - A shortcut for an authorization check configured with the request
resource (group, resource, (subresource), namespace, name).
For more information on CEL expressions, refer to the
[Common Expression Language in Kubernetes reference](/docs/reference/using-api/cel/).
In the event of an error evaluating a match condition, the webhook is never called. Whether to reject
the request is determined as follows:
1. If **any** match condition evaluated to `false` (regardless of other errors), the API server skips the webhook.
2. Otherwise:
- for [`failurePolicy: Fail`](#failure-policy), reject the request (without calling the webhook).
- for [`failurePolicy: Ignore`](#failure-policy), proceed with the request but skip the webhook.
### Contacting the webhook
Once the API server has determined a request should be sent to a webhook,
it needs to know how to contact the webhook. This is specified in the `clientConfig`
stanza of the webhook configuration.
Webhooks can either be called via a URL or a service reference,
and can optionally include a custom CA bundle to use to verify the TLS connection.
#### URL
`url` gives the location of the webhook, in standard URL form
(`scheme://host:port/path`).
The `host` should not refer to a service running in the cluster; use
a service reference by specifying the `service` field instead.
The host might be resolved via external DNS in some API servers
(e.g., `kube-apiserver` cannot resolve in-cluster DNS as that would
be a layering violation). `host` may also be an IP address.
Please note that using `localhost` or `127.0.0.1` as a `host` is
risky unless you take great care to run this webhook on all hosts
which run an API server which might need to make calls to this
webhook. Such installations are likely to be non-portable or not readily
run in a new cluster.
The scheme must be "https"; the URL must begin with "https://".
Attempting to use user or basic authentication (for example, `user:password@`) is not allowed.
Fragments (`#...`) and query parameters (`?...`) are also not allowed.
Here is an example of a mutating webhook configured to call a URL
(and expects the TLS certificate to be verified using system trust roots, so does not specify a caBundle):
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
webhooks:
- name: my-webhook.example.com
clientConfig:
url: "https://my-webhook.example.com:9443/my-webhook-path"
```
#### Service reference
The `service` stanza inside `clientConfig` is a reference to the service for this webhook.
If the webhook is running within the cluster, then you should use `service` instead of `url`.
The service namespace and name are required. The port is optional and defaults to 443.
The path is optional and defaults to "/".
Here is an example of a mutating webhook configured to call a service on port "1234"
at the subpath "/my-path", and to verify the TLS connection against the ServerName
`my-service-name.my-service-namespace.svc` using a custom CA bundle:
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
webhooks:
- name: my-webhook.example.com
clientConfig:
caBundle: <CA_BUNDLE>
service:
namespace: my-service-namespace
name: my-service-name
path: /my-path
port: 1234
```
You must replace `<CA_BUNDLE>` in the above example with a valid CA bundle: a PEM-encoded
certificate bundle (base64-encoded in the field value) used to validate the webhook's server certificate.
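If you template the configuration yourself, the `caBundle` value is the base64 encoding of the PEM file contents. Here is a minimal Go sketch; the file path is an illustrative assumption:
```go
package main

import (
	"encoding/base64"
	"fmt"
	"os"
)

func main() {
	// Read the PEM-encoded CA certificate(s) and print the value to place
	// in clientConfig.caBundle. The path is illustrative.
	pemBytes, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	fmt.Println(base64.StdEncoding.EncodeToString(pemBytes))
}
```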
### Side effects
Webhooks typically operate only on the content of the `AdmissionReview` sent to them.
Some webhooks, however, make out-of-band changes as part of processing admission requests.
Webhooks that make out-of-band changes ("side effects") must also have a reconciliation mechanism
(like a controller) that periodically determines the actual state of the world, and adjusts
the out-of-band data modified by the admission webhook to reflect reality.
This is because a call to an admission webhook does not guarantee the admitted object will be persisted as is, or at all.
Later webhooks can modify the content of the object, a conflict could be encountered while writing to storage,
or the server could power off before persisting the object.
Additionally, webhooks with side effects must skip those side-effects when `dryRun: true` admission requests are handled.
A webhook must explicitly indicate that it will not have side-effects when run with `dryRun`,
or the dry-run request will not be sent to the webhook and the API request will fail instead.
Webhooks indicate whether they have side effects using the `sideEffects` field in the webhook configuration:
* `None`: calling the webhook will have no side effects.
* `NoneOnDryRun`: calling the webhook will possibly have side effects, but if a request with
`dryRun: true` is sent to the webhook, the webhook will suppress the side effects (the webhook
is `dryRun`-aware).
Here is an example of a validating webhook indicating it has no side effects on `dryRun: true` requests:
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
webhooks:
- name: my-webhook.example.com
sideEffects: NoneOnDryRun
```
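A webhook that declares `sideEffects: NoneOnDryRun` must check the request's `dryRun` field before actuating any out-of-band change. Here is a minimal Go sketch; the `recordAudit` helper is a hypothetical stand-in for whatever side effect the webhook performs:
```go
package main

import "fmt"

// Reduced view of the AdmissionReview request: only uid and dryRun matter here.
type admissionRequest struct {
	UID    string `json:"uid"`
	DryRun *bool  `json:"dryRun,omitempty"`
}

// recordAudit is a hypothetical helper standing in for an out-of-band change,
// such as writing to an external system.
func recordAudit(uid string) {
	fmt.Println("recorded out-of-band audit entry for request", uid)
}

// admitWithSideEffects suppresses side effects for dry-run requests,
// as required for a webhook declaring sideEffects: NoneOnDryRun.
func admitWithSideEffects(req *admissionRequest) {
	if req.DryRun != nil && *req.DryRun {
		return // dryRun: true — do not actuate side effects
	}
	recordAudit(req.UID)
}

func main() {
	dry := true
	admitWithSideEffects(&admissionRequest{UID: "705ab4f5", DryRun: &dry}) // side effect suppressed
	admitWithSideEffects(&admissionRequest{UID: "705ab4f5"})               // side effect runs
}
```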
### Timeouts
Because webhooks add to API request latency, they should evaluate as quickly as possible.
`timeoutSeconds` allows configuring how long the API server should wait for a webhook to respond
before treating the call as a failure.
If the timeout expires before the webhook responds, the webhook call will be ignored or
the API call will be rejected based on the [failure policy](#failure-policy).
The timeout value must be between 1 and 30 seconds.
Here is an example of a validating webhook with a custom timeout of 2 seconds:
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
webhooks:
- name: my-webhook.example.com
timeoutSeconds: 2
```
The timeout for an admission webhook defaults to 10 seconds.
### Reinvocation policy
A single ordering of mutating admissions plugins (including webhooks) does not work for all cases
(see https://issue.k8s.io/64333 as an example). A mutating webhook can add a new sub-structure
to the object (like adding a `container` to a `pod`), and other mutating plugins which have already
run may have opinions on those new structures (like setting an `imagePullPolicy` on all containers).
To allow mutating admission plugins to observe changes made by other plugins,
built-in mutating admission plugins are re-run if a mutating webhook modifies an object,
and mutating webhooks can specify a `reinvocationPolicy` to control whether they are reinvoked as well.
`reinvocationPolicy` may be set to `Never` or `IfNeeded`. It defaults to `Never`.
* `Never`: the webhook must not be called more than once in a single admission evaluation.
* `IfNeeded`: the webhook may be called again as part of the admission evaluation if the object
being admitted is modified by other admission plugins after the initial webhook call.
The important elements to note are:
* The number of additional invocations is not guaranteed to be exactly one.
* If additional invocations result in further modifications to the object, webhooks are not
guaranteed to be invoked again.
* Webhooks that use this option may be reordered to minimize the number of additional invocations.
* To validate an object after all mutations are guaranteed complete, use a validating admission
webhook instead (recommended for webhooks with side-effects).
Here is an example of a mutating webhook opting into being re-invoked if later admission plugins
modify the object:
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
webhooks:
- name: my-webhook.example.com
reinvocationPolicy: IfNeeded
```
Mutating webhooks must be [idempotent](#idempotence), able to successfully process an object they have already admitted
and potentially modified. This is true for all mutating admission webhooks, since any change they can make
in an object could already exist in the user-provided object, but it is essential for webhooks that opt into reinvocation.
### Failure policy
`failurePolicy` defines how unrecognized errors and timeout errors from the admission webhook
are handled. Allowed values are `Ignore` or `Fail`.
* `Ignore` means that an error calling the webhook is ignored and the API request is allowed to continue.
* `Fail` means that an error calling the webhook causes the admission to fail and the API request to be rejected.
Here is a mutating webhook configured to reject an API request if errors are encountered calling the admission webhook:
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
webhooks:
- name: my-webhook.example.com
failurePolicy: Fail
```
The default `failurePolicy` for an admission webhook is `Fail`.
## Monitoring admission webhooks
The API server provides ways to monitor admission webhook behaviors. These
monitoring mechanisms help cluster admins to answer questions like:
1. Which mutating webhook mutated the object in an API request?
2. What change did the mutating webhook apply to the object?
3. Which webhooks are frequently rejecting API requests? What's the reason for a rejection?
### Mutating webhook auditing annotations
Sometimes it's useful to know which mutating webhook mutated the object in an API request, and what change the
webhook applied.
The Kubernetes API server performs [auditing](/docs/tasks/debug/debug-cluster/audit/) on each
mutating webhook invocation. Each invocation generates an auditing annotation
capturing if a request object is mutated by the invocation, and optionally generates an annotation
capturing the applied patch from the webhook admission response. The annotations are set in the
audit event for the given request at the given stage of its execution, which is then pre-processed
according to a certain policy and written to a backend.
The audit level of an event determines which annotations get recorded:
- At `Metadata` audit level or higher, an annotation with key
`mutation.webhook.admission.k8s.io/round_{round idx}_index_{order idx}` gets logged with JSON
payload indicating that a webhook was invoked for the given request and whether it mutated the object.
For example, the following annotation gets recorded for a webhook being reinvoked. The webhook is
ordered third in the mutating webhook chain, and didn't mutate the request object during the
invocation.
```yaml
# the audit event recorded
{
"kind": "Event",
"apiVersion": "audit.k8s.io/v1",
"annotations": {
"mutation.webhook.admission.k8s.io/round_1_index_2": "{\"configuration\":\"my-mutating-webhook-configuration.example.com\",\"webhook\":\"my-webhook.example.com\",\"mutated\": false}"
# other annotations
...
}
# other fields
...
}
```
```yaml
# the annotation value deserialized
{
"configuration": "my-mutating-webhook-configuration.example.com",
"webhook": "my-webhook.example.com",
"mutated": false
}
```
The following annotation gets recorded for a webhook being invoked in the first round. The webhook
is ordered first in the mutating webhook chain, and mutated the request object during the
invocation.
```yaml
# the audit event recorded
{
"kind": "Event",
"apiVersion": "audit.k8s.io/v1",
"annotations": {
"mutation.webhook.admission.k8s.io/round_0_index_0": "{\"configuration\":\"my-mutating-webhook-configuration.example.com\",\"webhook\":\"my-webhook-always-mutate.example.com\",\"mutated\": true}"
# other annotations
...
}
# other fields
...
}
```
```yaml
# the annotation value deserialized
{
"configuration": "my-mutating-webhook-configuration.example.com",
"webhook": "my-webhook-always-mutate.example.com",
"mutated": true
}
```
- At `Request` audit level or higher, an annotation with key
`patch.webhook.admission.k8s.io/round_{round idx}_index_{order idx}` gets logged with JSON payload indicating
that a webhook was invoked for the given request and what patch was applied to the request object.
For example, the following annotation gets recorded for a webhook being reinvoked. The webhook is ordered fourth in the
mutating webhook chain, and responded with a JSON patch which got applied to the request object.
```yaml
# the audit event recorded
{
"kind": "Event",
"apiVersion": "audit.k8s.io/v1",
"annotations": {
"patch.webhook.admission.k8s.io/round_1_index_3": "{\"configuration\":\"my-other-mutating-webhook-configuration.example.com\",\"webhook\":\"my-webhook-always-mutate.example.com\",\"patch\":[{\"op\":\"add\",\"path\":\"/data/mutation-stage\",\"value\":\"yes\"}],\"patchType\":\"JSONPatch\"}"
# other annotations
...
}
# other fields
...
}
```
```yaml
# the annotation value deserialized
{
"configuration": "my-other-mutating-webhook-configuration.example.com",
"webhook": "my-webhook-always-mutate.example.com",
"patchType": "JSONPatch",
"patch": [
{
"op": "add",
"path": "/data/mutation-stage",
"value": "yes"
}
]
}
```
### Admission webhook metrics
The API server exposes Prometheus metrics from the `/metrics` endpoint, which can be used for monitoring and
diagnosing API server status. The following metrics record status related to admission webhooks.
#### API server admission webhook rejection count
Sometimes it's useful to know which admission webhooks are frequently rejecting API requests, and the
reason for a rejection.
The API server exposes a Prometheus counter metric recording admission webhook rejections. The
metrics are labelled to identify the causes of webhook rejection(s):
- `name`: the name of the webhook that rejected a request.
- `operation`: the operation type of the request, can be one of `CREATE`,
`UPDATE`, `DELETE` and `CONNECT`.
- `type`: the admission webhook type, can be one of `admit` and `validating`.
- `error_type`: identifies if an error occurred during the webhook invocation
that caused the rejection. Its value can be one of:
- `calling_webhook_error`: unrecognized errors or timeout errors from the admission webhook happened and the
webhook's [Failure policy](#failure-policy) is set to `Fail`.
- `no_error`: no error occurred. The webhook rejected the request with `allowed: false` in the admission
response. The metrics label `rejection_code` records the `.status.code` set in the admission response.
- `apiserver_internal_error`: an API server internal error happened.
- `rejection_code`: the HTTP status code set in the admission response when a
webhook rejected a request.
Example of the rejection count metrics:
```
# HELP apiserver_admission_webhook_rejection_count [ALPHA] Admission webhook rejection count, identified by name and broken out for each admission type (validating or admit) and operation. Additional labels specify an error type (calling_webhook_error or apiserver_internal_error if an error occurred; no_error otherwise) and optionally a non-zero rejection code if the webhook rejects the request with an HTTP status code (honored by the apiserver when the code is greater or equal to 400). Codes greater than 600 are truncated to 600, to keep the metrics cardinality bounded.
# TYPE apiserver_admission_webhook_rejection_count counter
apiserver_admission_webhook_rejection_count{error_type="calling_webhook_error",name="always-timeout-webhook.example.com",operation="CREATE",rejection_code="0",type="validating"} 1
apiserver_admission_webhook_rejection_count{error_type="calling_webhook_error",name="invalid-admission-response-webhook.example.com",operation="CREATE",rejection_code="0",type="validating"} 1
apiserver_admission_webhook_rejection_count{error_type="no_error",name="deny-unwanted-configmap-data.example.com",operation="CREATE",rejection_code="400",type="validating"} 13
```
## Best practices and warnings
### Idempotence
An idempotent mutating admission webhook is able to successfully process an object it has already admitted
and potentially modified. The admission can be applied multiple times without changing the result beyond
the initial application.
#### Example of idempotent mutating admission webhooks:
1. For a `CREATE` pod request, set the field `.spec.securityContext.runAsNonRoot` of the
pod to true, to enforce security best practices.
2. For a `CREATE` pod request, if the field `.spec.containers[].resources.limits`
of a container is not set, set default resource limits.
3. For a `CREATE` pod request, inject a sidecar container with name `foo-sidecar` if no container
with the name `foo-sidecar` already exists.
In the cases above, the webhook can be safely reinvoked, or admit an object that already has the fields set.
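As an illustration, the third idempotent example above (injecting `foo-sidecar` only when it is absent) could be implemented along these lines. This is a minimal Go sketch: the types are reduced stand-ins for the `k8s.io/api/core/v1` Pod types, and the image name is an illustrative assumption:
```go
package main

import "fmt"

// Reduced container and pod spec shapes; real webhooks would decode
// request.object into the k8s.io/api/core/v1 types instead.
type container struct {
	Name  string `json:"name"`
	Image string `json:"image,omitempty"`
}

type podSpec struct {
	Containers []container `json:"containers"`
}

// sidecarPatch returns a JSON patch that appends the foo-sidecar container,
// or nil if it is already present — making the mutation idempotent.
func sidecarPatch(spec podSpec) []map[string]interface{} {
	for _, c := range spec.Containers {
		if c.Name == "foo-sidecar" {
			return nil // already injected (or user-provided); nothing to do
		}
	}
	return []map[string]interface{}{{
		"op":    "add",
		"path":  "/spec/containers/-",
		"value": container{Name: "foo-sidecar", Image: "example.com/foo-sidecar:1.0"}, // illustrative image
	}}
}

func main() {
	spec := podSpec{Containers: []container{{Name: "app", Image: "example.com/app:1.0"}}}
	fmt.Println(sidecarPatch(spec) != nil) // true: a patch is generated
	spec.Containers = append(spec.Containers, container{Name: "foo-sidecar"})
	fmt.Println(sidecarPatch(spec) != nil) // false: reinvocation is a no-op
}
```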
#### Example of non-idempotent mutating admission webhooks:
1. For a `CREATE` pod request, inject a sidecar container with name `foo-sidecar`
suffixed with the current timestamp (e.g. `foo-sidecar-19700101-000000`).
2. For a `CREATE`/`UPDATE` pod request, reject if the pod has label `"env"` set,
otherwise add an `"env": "prod"` label to the pod.
3. For a `CREATE` pod request, blindly append a sidecar container named
`foo-sidecar` without looking to see if there is already a `foo-sidecar`
container in the pod.
In the first case above, reinvoking the webhook can result in the same sidecar being injected into a pod multiple times, each time
with a different container name. Similarly, the webhook can inject duplicated containers if the sidecar already exists in
a user-provided pod.
In the second case above, reinvoking the webhook will result in the webhook failing on its own output.
In the third case above, reinvoking the webhook will result in duplicated containers in the pod spec, which makes
the request invalid and rejected by the API server.
### Intercepting all versions of an object
It is recommended that admission webhooks should always intercept all versions of an object by setting `.webhooks[].matchPolicy`
to `Equivalent`. It is also recommended that admission webhooks should prefer registering for stable versions of resources.
Failure to intercept all versions of an object can result in admission policies not being enforced for requests in certain
versions. See [Matching requests: matchPolicy](#matching-requests-matchpolicy) for examples.
### Availability
It is recommended that admission webhooks should evaluate as quickly as possible (typically in
milliseconds), since they add to API request latency.
It is encouraged to use a small timeout for webhooks. See [Timeouts](#timeouts) for more detail.
It is recommended that admission webhooks should leverage some form of load balancing, to
provide high availability and performance benefits. If a webhook is running within the cluster,
you can run multiple webhook backends behind a service to leverage the load-balancing that service
supports.
### Guaranteeing the final state of the object is seen
Admission webhooks that need to guarantee they see the final state of the object in order to enforce policy
should use a validating admission webhook, since objects can be modified after being seen by mutating webhooks.
For example, a mutating admission webhook is configured to inject a sidecar container with name
"foo-sidecar" on every `CREATE` pod request. If the sidecar *must* be present, a validating
admission webhook should also be configured to intercept `CREATE` pod requests, and validate that a
container with name "foo-sidecar" with the expected configuration exists in the to-be-created
object.
### Avoiding deadlocks in self-hosted webhooks
A webhook running inside the cluster might cause deadlocks for its own deployment if it is configured
to intercept resources required to start its own pods.
For example, a mutating admission webhook is configured to admit `CREATE` pod requests only if a certain label is set in the
pod (e.g. `"env": "prod"`). The webhook server runs in a deployment which doesn't set the `"env"` label.
When a node that runs the webhook server pods
becomes unhealthy, the webhook deployment will try to reschedule the pods to another node. However, the requests will
get rejected by the existing webhook server since the `"env"` label is unset, and the migration cannot happen.
It is recommended to exclude the namespace where your webhook is running with a
[namespaceSelector](#matching-requests-namespaceselector).
### Side effects
It is recommended that admission webhooks should avoid side effects if possible, which means the webhooks operate only on the
content of the `AdmissionReview` sent to them, and do not make out-of-band changes. The `.webhooks[].sideEffects` field should
be set to `None` if a webhook doesn't have any side effect.
If side effects are required during the admission evaluation, they must be suppressed when processing an
`AdmissionReview` object with `dryRun` set to `true`, and the `.webhooks[].sideEffects` field should be
set to `NoneOnDryRun`. See [Side effects](#side-effects) for more detail.
### Avoiding operating on the kube-system namespace
The `kube-system` namespace contains objects created by the Kubernetes system,
e.g. service accounts for the control plane components, pods like `kube-dns`.
Accidentally mutating or rejecting requests in the `kube-system` namespace may
cause the control plane components to stop functioning or introduce unknown behavior.
If your admission webhooks don't intend to modify the behavior of the Kubernetes control
plane, exclude the `kube-system` namespace from being intercepted using a
[`namespaceSelector`](#matching-requests-namespaceselector). | kubernetes reference | reviewers smarterclayton lavalamp caesarxuchao deads2k liggitt jpbetz title Dynamic Admission Control content type concept weight 45 overview In addition to compiled in admission plugins docs reference access authn authz admission controllers admission plugins can be developed as extensions and run as webhooks configured at runtime This page describes how to build configure use and monitor admission webhooks body What are admission webhooks Admission webhooks are HTTP callbacks that receive admission requests and do something with them You can define two types of admission webhooks validating admission webhook docs reference access authn authz admission controllers validatingadmissionwebhook and mutating admission webhook docs reference access authn authz admission controllers mutatingadmissionwebhook Mutating admission webhooks are invoked first and can modify objects sent to the API server to enforce custom defaults After all object modifications are complete and after the incoming object is validated by the API server validating admission webhooks are invoked and can reject requests to enforce custom policies Admission webhooks that need to guarantee they see the final state of the object in order to enforce policy should use a validating admission webhook since objects can be modified after being seen by mutating webhooks Experimenting with admission webhooks Admission webhooks are essentially part of the cluster control plane You should write and deploy them with great caution Please read the user guides docs reference access authn authz extensible admission controllers write an admission webhook server for instructions if you intend to write deploy production grade admission webhooks In the following we describe how to quickly experiment with admission webhooks Prerequisites Ensure that MutatingAdmissionWebhook and ValidatingAdmissionWebhook admission controllers are enabled Here docs reference access authn authz admission controllers is there a recommended set of admission controllers to use is a recommended set of admission controllers to enable in general Ensure that the admissionregistration k8s io v1 API is enabled Write an admission webhook server Please refer to the implementation of the admission webhook server https github com kubernetes kubernetes blob release 1 21 test images agnhost webhook main go that is validated in a Kubernetes e2e test The webhook handles the AdmissionReview request sent by the API servers and sends back its decision as an AdmissionReview object in the same version it received See the webhook request request section for details on the data sent to webhooks See the webhook response response section for the data expected from webhooks The example admission webhook server leaves the ClientAuth field empty https github com kubernetes kubernetes blob v1 22 0 test images agnhost webhook config go L38 L39 which defaults to NoClientCert This means that the webhook server does not authenticate the identity of the clients supposedly API servers If you need mutual TLS or other ways to authenticate the clients see how to authenticate API servers authenticate apiservers Deploy the admission webhook service The webhook server in the e2e test is deployed in the Kubernetes cluster via the deployment API docs reference generated kubernetes api deployment v1 apps The test also creates a service docs reference generated kubernetes api service v1 core as the front end of the webhook 
server See code https github com kubernetes kubernetes blob v1 22 0 test e2e apimachinery webhook go L748 You may also deploy your webhooks outside of the cluster You will need to update your webhook configurations accordingly Configure admission webhooks on the fly You can dynamically configure what resources are subject to what admission webhooks via ValidatingWebhookConfiguration docs reference generated kubernetes api validatingwebhookconfiguration v1 admissionregistration k8s io or MutatingWebhookConfiguration docs reference generated kubernetes api mutatingwebhookconfiguration v1 admissionregistration k8s io The following is an example ValidatingWebhookConfiguration a mutating webhook configuration is similar See the webhook configuration webhook configuration section for details about each config field yaml apiVersion admissionregistration k8s io v1 kind ValidatingWebhookConfiguration metadata name pod policy example com webhooks name pod policy example com rules apiGroups apiVersions v1 operations CREATE resources pods scope Namespaced clientConfig service namespace example namespace name example service caBundle CA BUNDLE admissionReviewVersions v1 sideEffects None timeoutSeconds 5 You must replace the CA BUNDLE in the above example by a valid CA bundle which is a PEM encoded field value is Base64 encoded CA bundle for validating the webhook s server certificate The scope field specifies if only cluster scoped resources Cluster or namespace scoped resources Namespaced will match this rule lowast means that there are no scope restrictions When using clientConfig service the server cert must be valid for svc name svc namespace svc Default timeout for a webhook call is 10 seconds You can set the timeout and it is encouraged to use a short timeout for webhooks If the webhook call times out the request is handled according to the webhook s failure policy When an API server receives a request that matches one of the rules the API server sends an admissionReview request to webhook as specified in the clientConfig After you create the webhook configuration the system will take a few seconds to honor the new configuration Authenticate API servers authenticate apiservers If your admission webhooks require authentication you can configure the API servers to use basic auth bearer token or a cert to authenticate itself to the webhooks There are three steps to complete the configuration When starting the API server specify the location of the admission control configuration file via the admission control config file flag In the admission control configuration file specify where the MutatingAdmissionWebhook controller and ValidatingAdmissionWebhook controller should read the credentials The credentials are stored in kubeConfig files yes the same schema that s used by kubectl so the field name is kubeConfigFile Here is an example admission control configuration file yaml apiVersion apiserver config k8s io v1 kind AdmissionConfiguration plugins name ValidatingAdmissionWebhook configuration apiVersion apiserver config k8s io v1 kind WebhookAdmissionConfiguration kubeConfigFile path to kubeconfig file name MutatingAdmissionWebhook configuration apiVersion apiserver config k8s io v1 kind WebhookAdmissionConfiguration kubeConfigFile path to kubeconfig file yaml Deprecated in v1 17 in favor of apiserver config k8s io v1 apiVersion apiserver k8s io v1alpha1 kind AdmissionConfiguration plugins name ValidatingAdmissionWebhook configuration Deprecated in v1 17 in favor of apiserver config k8s io v1 kind 
WebhookAdmissionConfiguration apiVersion apiserver config k8s io v1alpha1 kind WebhookAdmission kubeConfigFile path to kubeconfig file name MutatingAdmissionWebhook configuration Deprecated in v1 17 in favor of apiserver config k8s io v1 kind WebhookAdmissionConfiguration apiVersion apiserver config k8s io v1alpha1 kind WebhookAdmission kubeConfigFile path to kubeconfig file For more information about AdmissionConfiguration see the AdmissionConfiguration v1 reference docs reference config api apiserver webhookadmission v1 See the webhook configuration webhook configuration section for details about each config field In the kubeConfig file provide the credentials yaml apiVersion v1 kind Config users name should be set to the DNS name of the service or the host including port of the URL the webhook is configured to speak to If a non 443 port is used for services it must be included in the name when configuring 1 16 API servers For a webhook configured to speak to a service on the default port 443 specify the DNS name of the service name webhook1 ns1 svc user For a webhook configured to speak to a service on non default port e g 8443 specify the DNS name and port of the service in 1 16 name webhook1 ns1 svc 8443 user and optionally create a second stanza using only the DNS name of the service for compatibility with 1 15 API servers name webhook1 ns1 svc user For webhooks configured to speak to a URL match the host and port specified in the webhook s URL Examples A webhook with url https www example com name www example com user A webhook with url https www example com 443 name www example com 443 user A webhook with url https www example com 8443 name www example com 8443 user name webhook1 ns1 svc user client certificate data pem encoded certificate client key data pem encoded key The name supports using to wildcard match prefixing segments name webhook company org user password password username name is the default match name user token token Of course you need to set up the webhook server to handle these authentication requests Webhook request and response Request Webhooks are sent as POST requests with Content Type application json with an AdmissionReview API object in the admission k8s io API group serialized to JSON as the body Webhooks can specify what versions of AdmissionReview objects they accept with the admissionReviewVersions field in their configuration yaml apiVersion admissionregistration k8s io v1 kind ValidatingWebhookConfiguration webhooks name my webhook example com admissionReviewVersions v1 v1beta1 admissionReviewVersions is a required field when creating webhook configurations Webhooks are required to support at least one AdmissionReview version understood by the current and previous API server API servers send the first AdmissionReview version in the admissionReviewVersions list they support If none of the versions in the list are supported by the API server the configuration will not be allowed to be created If an API server encounters a webhook configuration that was previously created and does not support any of the AdmissionReview versions the API server knows how to send attempts to call to the webhook will fail and be subject to the failure policy failure policy This example shows the data contained in an AdmissionReview object for a request to update the scale subresource of an apps v1 Deployment yaml apiVersion admission k8s io v1 kind AdmissionReview request Random uid uniquely identifying this admission call uid 705ab4f5 6393 11e8 b7cc 42010a800002 Fully 
---
reviewers:
- tallclair
- liggitt
title: Mapping PodSecurityPolicies to Pod Security Standards
content_type: concept
weight: 95
---
<!-- overview -->
The tables below enumerate the configuration parameters on
`PodSecurityPolicy` objects, whether the field mutates
and/or validates pods, and how the configuration values map to the
[Pod Security Standards](/docs/concepts/security/pod-security-standards/).
For each applicable parameter, the allowed values for the
[Baseline](/docs/concepts/security/pod-security-standards/#baseline) and
[Restricted](/docs/concepts/security/pod-security-standards/#restricted) profiles are listed.
Anything outside the allowed values for those profiles would fall under the
[Privileged](/docs/concepts/security/pod-security-standards/#privileged) profile. "No opinion"
means all values are allowed under all Pod Security Standards.
For a step-by-step migration guide, see
[Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller](/docs/tasks/configure-pod-container/migrate-from-psp/).
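The profiles named above are what the built-in Pod Security admission controller enforces on a
per-namespace basis. The following sketch, in which the namespace name `my-namespace` is a
placeholder, shows how a namespace can be labeled to apply those profiles:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace   # placeholder namespace name
  labels:
    # Enforce the Baseline profile, and warn/audit against the Restricted profile.
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```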
<!-- body -->
## PodSecurityPolicy Spec
The fields enumerated in this table are part of the `PodSecurityPolicySpec`, which is specified
under the `.spec` field path.
<table class="no-word-break">
<caption style="display:none">Mapping PodSecurityPolicySpec fields to Pod Security Standards</caption>
<tbody>
<tr>
<th><code>PodSecurityPolicySpec</code></th>
<th>Type</th>
<th>Pod Security Standards Equivalent</th>
</tr>
<tr>
<td><code>privileged</code></td>
<td>Validating</td>
<td><b>Baseline & Restricted</b>: <code>false</code> / undefined / nil</td>
</tr>
<tr>
<td><code>defaultAddCapabilities</code></td>
<td>Mutating & Validating</td>
<td>Requirements match <code>allowedCapabilities</code> below.</td>
</tr>
<tr>
<td><code>allowedCapabilities</code></td>
<td>Validating</td>
<td>
<p><b>Baseline</b>: subset of</p>
<ul>
<li><code>AUDIT_WRITE</code></li>
<li><code>CHOWN</code></li>
<li><code>DAC_OVERRIDE</code></li>
<li><code>FOWNER</code></li>
<li><code>FSETID</code></li>
<li><code>KILL</code></li>
<li><code>MKNOD</code></li>
<li><code>NET_BIND_SERVICE</code></li>
<li><code>SETFCAP</code></li>
<li><code>SETGID</code></li>
<li><code>SETPCAP</code></li>
<li><code>SETUID</code></li>
<li><code>SYS_CHROOT</code></li>
</ul>
<p><b>Restricted</b>: empty / undefined / nil OR a list containing <i>only</i> <code>NET_BIND_SERVICE</code></p>
</td>
</tr>
<tr>
<td><code>requiredDropCapabilities</code></td>
<td>Mutating & Validating</td>
<td>
<p><b>Baseline</b>: no opinion</p>
<p><b>Restricted</b>: must include <code>ALL</code></p>
</td>
</tr>
<tr>
<td><code>volumes</code></td>
<td>Validating</td>
<td>
<p><b>Baseline</b>: anything except</p>
<ul>
<li><code>hostPath</code></li>
<li><code>*</code></li>
</ul>
<p><b>Restricted</b>: subset of</p>
<ul>
<li><code>configMap</code></li>
<li><code>csi</code></li>
<li><code>downwardAPI</code></li>
<li><code>emptyDir</code></li>
<li><code>ephemeral</code></li>
<li><code>persistentVolumeClaim</code></li>
<li><code>projected</code></li>
<li><code>secret</code></li>
</ul>
</td>
</tr>
<tr>
<td><code>hostNetwork</code></td>
<td>Validating</td>
<td><b>Baseline & Restricted</b>: <code>false</code> / undefined / nil</td>
</tr>
<tr>
<td><code>hostPorts</code></td>
<td>Validating</td>
<td><b>Baseline & Restricted</b>: undefined / nil / empty</td>
</tr>
<tr>
<td><code>hostPID</code></td>
<td>Validating</td>
<td><b>Baseline & Restricted</b>: <code>false</code> / undefined / nil</td>
</tr>
<tr>
<td><code>hostIPC</code></td>
<td>Validating</td>
<td><b>Baseline & Restricted</b>: <code>false</code> / undefined / nil</td>
</tr>
<tr>
<td><code>seLinux</code></td>
<td>Mutating & Validating</td>
<td>
<p><b>Baseline & Restricted</b>:
<code>seLinux.rule</code> is <code>MustRunAs</code>, with the following <code>options</code></p>
<ul>
<li><code>user</code> is unset (<code>""</code> / undefined / nil)</li>
<li><code>role</code> is unset (<code>""</code> / undefined / nil)</li>
<li><code>type</code> is unset or one of: <code>container_t, container_init_t, container_kvm_t, container_engine_t</code></li>
<li><code>level</code> is anything</li>
</ul>
</td>
</tr>
<tr>
<td><code>runAsUser</code></td>
<td>Mutating & Validating</td>
<td>
<p><b>Baseline</b>: Anything</p>
<p><b>Restricted</b>: <code>rule</code> is <code>MustRunAsNonRoot</code></p>
</td>
</tr>
<tr>
<td><code>runAsGroup</code></td>
<td>Mutating (MustRunAs) & Validating</td>
<td>
<i>No opinion</i>
</td>
</tr>
<tr>
<td><code>supplementalGroups</code></td>
<td>Mutating & Validating</td>
<td>
<i>No opinion</i>
</td>
</tr>
<tr>
<td><code>fsGroup</code></td>
<td>Mutating & Validating</td>
<td>
<i>No opinion</i>
</td>
</tr>
<tr>
<td><code>readOnlyRootFilesystem</code></td>
<td>Mutating & Validating</td>
<td>
<i>No opinion</i>
</td>
</tr>
<tr>
<td><code>defaultAllowPrivilegeEscalation</code></td>
<td>Mutating</td>
<td>
<i>No opinion (non-validating)</i>
</td>
</tr>
<tr>
<td><code>allowPrivilegeEscalation</code></td>
<td>Mutating & Validating</td>
<td>
<p><i>Only mutating if set to <code>false</code></i></p>
<p><b>Baseline</b>: No opinion</p>
<p><b>Restricted</b>: <code>false</code></p>
</td>
</tr>
<tr>
<td><code>allowedHostPaths</code></td>
<td>Validating</td>
<td><i>No opinion (volumes takes precedence)</i></td>
</tr>
<tr>
<td><code>allowedFlexVolumes</code></td>
<td>Validating</td>
<td><i>No opinion (volumes takes precedence)</i></td>
</tr>
<tr>
<td><code>allowedCSIDrivers</code></td>
<td>Validating</td>
<td><i>No opinion (volumes takes precedence)</i></td>
</tr>
<tr>
<td><code>allowedUnsafeSysctls</code></td>
<td>Validating</td>
<td><b>Baseline & Restricted</b>: undefined / nil / empty</td>
</tr>
<tr>
<td><code>forbiddenSysctls</code></td>
<td>Validating</td>
<td><i>No opinion</i></td>
</tr>
<tr>
<td><code>allowedProcMountTypes</code><br><i>(alpha feature)</i></td>
<td>Validating</td>
<td><b>Baseline & Restricted</b>: <code>["Default"]</code> OR undefined / nil / empty</td>
</tr>
<tr>
<td><code>runtimeClass</code><br><code> .defaultRuntimeClassName</code></td>
<td>Mutating</td>
<td><i>No opinion</i></td>
</tr>
<tr>
<td><code>runtimeClass</code><br><code> .allowedRuntimeClassNames</code></td>
<td>Validating</td>
<td><i>No opinion</i></td>
</tr>
</tbody>
</table>
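As a rough illustration of how the spec fields above combine, here is a sketch of a `PodSecurityPolicy`
whose fields stay within the Restricted column. The policy name and the SELinux `level` value are
placeholders, and this is not an officially maintained policy:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-equivalent-example   # placeholder name
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities: ["ALL"]
  allowedCapabilities: ["NET_BIND_SERVICE"]
  volumes:
    - configMap
    - csi
    - downwardAPI
    - emptyDir
    - ephemeral
    - persistentVolumeClaim
    - projected
    - secret
  hostNetwork: false
  hostPID: false
  hostIPC: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    # MustRunAs with user/role/type left unset; the level shown is a placeholder.
    rule: MustRunAs
    seLinuxOptions:
      level: "s0:c123,c456"
  supplementalGroups:
    rule: RunAsAny   # "no opinion" in the table above
  fsGroup:
    rule: RunAsAny   # "no opinion" in the table above
```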
## PodSecurityPolicy annotations
The [annotations](/docs/concepts/overview/working-with-objects/annotations/) enumerated in this
table can be specified under `.metadata.annotations` on the PodSecurityPolicy object.
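For illustration, the sketch below shows where these annotations sit on a hypothetical
PodSecurityPolicy. The object name is a placeholder, and the `spec` is reduced to the minimum
strategy fields needed for the object to validate; see the table above for how spec fields map
to the profiles:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: annotated-psp-example   # placeholder name
  annotations:
    seccomp.security.alpha.kubernetes.io/defaultProfileName: "runtime/default"
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: "runtime/default"
spec:
  # Minimal strategy fields so the object validates; not a complete policy.
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
```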
<table class="no-word-break">
<caption style="display:none">Mapping PodSecurityPolicy annotations to Pod Security Standards</caption>
<tbody>
<tr>
<th><code>PSP Annotation</code></th>
<th>Type</th>
<th>Pod Security Standards Equivalent</th>
</tr>
<tr>
<td><code>seccomp.security.alpha.kubernetes.io</code><br><code>/defaultProfileName</code></td>
<td>Mutating</td>
<td><i>No opinion</i></td>
</tr>
<tr>
<td><code>seccomp.security.alpha.kubernetes.io</code><br><code>/allowedProfileNames</code></td>
<td>Validating</td>
<td>
<p><b>Baseline</b>: <code>"runtime/default,"</code> <i>(Trailing comma to allow unset)</i></p>
<p><b>Restricted</b>: <code>"runtime/default"</code> <i>(No trailing comma)</i></p>
<p><i><code>localhost/*</code> values are also permitted for both Baseline & Restricted.</i></p>
</td>
</tr>
<tr>
<td><code>apparmor.security.beta.kubernetes.io</code><br><code>/defaultProfileName</code></td>
<td>Mutating</td>
<td><i>No opinion</i></td>
</tr>
<tr>
<td><code>apparmor.security.beta.kubernetes.io</code><br><code>/allowedProfileNames</code></td>
<td>Validating</td>
<td>
<p><b>Baseline</b>: <code>"runtime/default,"</code> <i>(Trailing comma to allow unset)</i></p>
<p><b>Restricted</b>: <code>"runtime/default"</code> <i>(No trailing comma)</i></p>
<p><i><code>localhost/*</code> values are also permitted for both Baseline & Restricted.</i></p>
</td>
</tr>
</tbody>
</table>
---
reviewers:
- mikedanese
- liggitt
- smarterclayton
- awly
title: TLS bootstrapping
content_type: concept
weight: 120
---
<!-- overview -->
In a Kubernetes cluster, the components on the worker nodes - kubelet and kube-proxy - need
to communicate with Kubernetes control plane components, specifically kube-apiserver.
In order to ensure that communication is kept private, not interfered with, and ensure that
each component of the cluster is talking to another trusted component, we strongly
recommend using client TLS certificates on nodes.
The normal process of bootstrapping these components, especially worker nodes that need certificates
so they can communicate safely with kube-apiserver, can be a challenging process as it is often outside
of the scope of Kubernetes and requires significant additional work.
This, in turn, can make it difficult to initialize or scale a cluster.
In order to simplify the process, beginning in version 1.4, Kubernetes introduced a certificate request
and signing API. The proposal can be found [here](https://github.com/kubernetes/kubernetes/pull/20439).
This document describes the process of node initialization, how to set up TLS client certificate bootstrapping for
kubelets, and how it works.
<!-- body -->
## Initialization process
When a worker node starts up, the kubelet does the following:
1. Look for its `kubeconfig` file
1. Retrieve the URL of the API server and credentials, normally a TLS key and signed certificate from the `kubeconfig` file
1. Attempt to communicate with the API server using the credentials.
Assuming that the kube-apiserver successfully validates the kubelet's credentials,
it will treat the kubelet as a valid node, and begin to assign pods to it.
Note that the above process depends upon:
* Existence of a key and certificate on the local host in the `kubeconfig`
* The certificate having been signed by a Certificate Authority (CA) trusted by the kube-apiserver
All of the following are responsibilities of whoever sets up and manages the cluster:
1. Creating the CA key and certificate
1. Distributing the CA certificate to the control plane nodes, where kube-apiserver is running
1. Creating a key and certificate for each kubelet; strongly recommended to have a unique one, with a unique CN, for each kubelet
1. Signing the kubelet certificate using the CA key
1. Distributing the kubelet key and signed certificate to the specific node on which the kubelet is running
The TLS Bootstrapping described in this document is intended to simplify, and partially or even
completely automate, steps 3 onwards, as these are the most common when initializing or scaling
a cluster.
### Bootstrap initialization
In the bootstrap initialization process, the following occurs:
1. kubelet begins
1. kubelet sees that it does _not_ have a `kubeconfig` file
1. kubelet searches for and finds a `bootstrap-kubeconfig` file
1. kubelet reads its bootstrap file, retrieving the URL of the API server and a limited usage "token"
1. kubelet connects to the API server, authenticates using the token
1. kubelet now has limited credentials to create and retrieve a certificate signing request (CSR)
1. kubelet creates a CSR for itself with the signerName set to `kubernetes.io/kube-apiserver-client-kubelet` (an example CSR object is sketched just after this list)
1. CSR is approved in one of two ways:
* If configured, kube-controller-manager automatically approves the CSR
* If configured, an outside process, possibly a person, approves the CSR using the Kubernetes API or via `kubectl`
1. Certificate is created for the kubelet
1. Certificate is issued to the kubelet
1. kubelet retrieves the certificate
1. kubelet creates a proper `kubeconfig` with the key and signed certificate
1. kubelet begins normal operation
1. Optional: if configured, kubelet automatically requests renewal of the certificate when it is close to expiry
1. The renewed certificate is approved and issued, either automatically or manually, depending on configuration.
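The CSR created by the kubelet in the steps above is an ordinary CertificateSigningRequest object.
A rough sketch of what it might look like follows; the object name and the base64-encoded request
are placeholders, and the exact `usages` requested can vary between kubelet versions:

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: node-csr-0d9f6r   # placeholder; the kubelet generates this name
spec:
  request: <base64-encoded PKCS#10 certificate request>
  signerName: kubernetes.io/kube-apiserver-client-kubelet
  usages:
    - digital signature
    - key encipherment
    - client auth
```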
The rest of this document describes the necessary steps to configure TLS Bootstrapping, and its limitations.
## Configuration
To configure for TLS bootstrapping and optional automatic approval, you must configure options on the following components:
* kube-apiserver
* kube-controller-manager
* kubelet
* in-cluster resources: `ClusterRoleBinding` and potentially `ClusterRole`
In addition, you need your Kubernetes Certificate Authority (CA).
## Certificate Authority
As with a cluster that does not use TLS bootstrapping, you will need a Certificate Authority (CA) key and certificate.
As before, these will be used to sign the kubelet certificate, and it is your responsibility to
distribute them to control plane nodes.
For the purposes of this document, we will assume these have been distributed to control
plane nodes at `/var/lib/kubernetes/ca.pem` (certificate) and `/var/lib/kubernetes/ca-key.pem` (key).
We will refer to these as "Kubernetes CA certificate and key".
All Kubernetes components that use these certificates - kubelet, kube-apiserver,
kube-controller-manager - assume the key and certificate to be PEM-encoded.
## kube-apiserver configuration
The kube-apiserver has several requirements to enable TLS bootstrapping:
* Recognizing a CA that signs the client certificate
* Authenticating the bootstrapping kubelet as a member of the `system:bootstrappers` group
* Authorizing the bootstrapping kubelet to create a certificate signing request (CSR)
### Recognizing client certificates
This is normal for all client certificate authentication.
If not already set, add the `--client-ca-file=FILENAME` flag to the kube-apiserver command to enable
client certificate authentication, referencing a certificate authority bundle
containing the signing certificate, for example
`--client-ca-file=/var/lib/kubernetes/ca.pem`.
### Initial bootstrap authentication
In order for the bootstrapping kubelet to connect to kube-apiserver and request a certificate,
it must first authenticate to the server. You can use any
[authenticator](/docs/reference/access-authn-authz/authentication/) that can authenticate the kubelet.
While any authentication strategy can be used for the kubelet's initial
bootstrap credentials, the following two authenticators are recommended for ease
of provisioning.
1. [Bootstrap Tokens](#bootstrap-tokens)
1. [Token authentication file](#token-authentication-file)
Using bootstrap tokens is a simpler and more easily managed method to authenticate kubelets,
and does not require maintaining an additional token file on the kube-apiserver host.
Whichever method you choose, the requirement is that the kubelet be able to authenticate as a user with the rights to:
1. create and retrieve CSRs
1. be automatically approved to request node client certificates, if automatic approval is enabled.
A kubelet authenticating using bootstrap tokens is authenticated as a user in the group
`system:bootstrappers`, which is the standard method to use.
As this feature matures, you
should ensure tokens are bound to a Role Based Access Control (RBAC) policy
which limits requests (using the [bootstrap token](/docs/reference/access-authn-authz/bootstrap-tokens/)) strictly to client
requests related to certificate provisioning. With RBAC in place, scoping the
tokens to a group allows for great flexibility. For example, you could disable a
particular bootstrap group's access when you are done provisioning the nodes.
#### Bootstrap tokens
Bootstrap tokens are described in detail [here](/docs/reference/access-authn-authz/bootstrap-tokens/).
These are tokens that are stored as secrets in the Kubernetes cluster, and then issued to the individual kubelet.
You can use a single token for an entire cluster, or issue one per worker node.
The process is two-fold:
1. Create a Kubernetes secret with the token ID, secret and scope(s).
1. Issue the token to the kubelet
From the kubelet's perspective, one token is like another and has no special meaning.
From the kube-apiserver's perspective, however, the bootstrap token is special.
Due to its `type`, `namespace` and `name`, kube-apiserver recognizes it as a special token,
and grants anyone authenticating with that token special bootstrap rights, notably treating
them as a member of the `system:bootstrappers` group. This fulfills a basic requirement
for TLS bootstrapping.
The details for creating the secret are available [here](/docs/reference/access-authn-authz/bootstrap-tokens/).
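As an illustration, a bootstrap token Secret for the token used in the sample bootstrap `kubeconfig`
later on this page might look like the following; the expiration date and extra group are placeholder
values:

```yaml
apiVersion: v1
kind: Secret
metadata:
  # The name must take the form "bootstrap-token-<token-id>"
  name: bootstrap-token-07401b
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: "07401b"
  token-secret: "f395accd246ae52d"
  expiration: "2025-01-01T00:00:00Z"                # placeholder expiry
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: "system:bootstrappers:worker"  # placeholder extra group
```

A kubelet presenting this token authenticates as the user `system:bootstrap:07401b` and as a member
of the `system:bootstrappers` group, plus any groups listed in `auth-extra-groups`.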
If you want to use bootstrap tokens, you must enable it on kube-apiserver with the flag:
```console
--enable-bootstrap-token-auth=true
```
#### Token authentication file
kube-apiserver has the ability to accept tokens as authentication.
These tokens are arbitrary but should represent at least 128 bits of entropy derived
from a secure random number generator (such as `/dev/urandom` on most modern Linux
systems). There are multiple ways you can generate a token. For example:
```shell
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
```
This will generate tokens that look like `02b50b05283e98dd0fd71db496ef01e8`.
The token file should look like the following example, where the first three
values can be anything and the quoted group name should be as depicted:
```console
02b50b05283e98dd0fd71db496ef01e8,kubelet-bootstrap,10001,"system:bootstrappers"
```
Add the `--token-auth-file=FILENAME` flag to the kube-apiserver command (in your
systemd unit file perhaps) to enable the token file. See docs
[here](/docs/reference/access-authn-authz/authentication/#static-token-file) for
further details.
### Authorize kubelet to create CSR
Now that the bootstrapping node is _authenticated_ as part of the
`system:bootstrappers` group, it needs to be _authorized_ to create a
certificate signing request (CSR) as well as retrieve it when done.
Fortunately, Kubernetes ships with a `ClusterRole` with precisely these (and
only these) permissions, `system:node-bootstrapper`.
To do this, you only need to create a `ClusterRoleBinding` that binds the `system:bootstrappers`
group to the cluster role `system:node-bootstrapper`.
```yaml
# enable bootstrapping nodes to create CSR
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: create-csrs-for-bootstrapping
subjects:
- kind: Group
name: system:bootstrappers
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: system:node-bootstrapper
apiGroup: rbac.authorization.k8s.io
```
## kube-controller-manager configuration
While the apiserver receives the requests for certificates from the kubelet and authenticates those requests,
the controller-manager is responsible for issuing actual signed certificates.
The controller-manager performs this function via a certificate-issuing control loop.
This takes the form of a
[cfssl](https://blog.cloudflare.com/introducing-cfssl/) local signer using
assets on disk. Currently, all certificates issued have one year validity and a
default set of key usages.
In order for the controller-manager to sign certificates, it needs the following:
* access to the "Kubernetes CA key and certificate" that you created and distributed
* enabling CSR signing
### Access to key and certificate
As described earlier, you need to create a Kubernetes CA key and certificate, and distribute it to the control plane nodes.
These will be used by the controller-manager to sign the kubelet certificates.
Since these signed certificates will, in turn, be used by the kubelet to authenticate as a regular kubelet
to kube-apiserver, it is important that the CA provided to the controller-manager at this stage also be
trusted by kube-apiserver for authentication. This is provided to kube-apiserver with the flag `--client-ca-file=FILENAME`
(for example, `--client-ca-file=/var/lib/kubernetes/ca.pem`), as described in the kube-apiserver configuration section.
To provide the Kubernetes CA key and certificate to kube-controller-manager, use the following flags:
```shell
--cluster-signing-cert-file="/etc/path/to/kubernetes/ca/ca.crt" --cluster-signing-key-file="/etc/path/to/kubernetes/ca/ca.key"
```
For example:
```shell
--cluster-signing-cert-file="/var/lib/kubernetes/ca.pem" --cluster-signing-key-file="/var/lib/kubernetes/ca-key.pem"
```
The validity duration of signed certificates can be configured with the flag:
```shell
--cluster-signing-duration
```
### Approval
In order to approve CSRs, you need to tell the controller-manager that it is acceptable to approve them. This is done by granting
RBAC permissions to the correct group.
There are two distinct sets of permissions:
* `nodeclient`: If a node is creating a new certificate for a node, then it does not have a certificate yet.
It is authenticating using one of the tokens listed above, and thus is part of the group `system:bootstrappers`.
* `selfnodeclient`: If a node is renewing its certificate, then it already has a certificate (by definition),
which it uses continuously to authenticate as part of the group `system:nodes`.
To enable the kubelet to request and receive a new certificate, create a `ClusterRoleBinding` that binds
the group in which the bootstrapping node is a member `system:bootstrappers` to the `ClusterRole` that
grants it permission, `system:certificates.k8s.io:certificatesigningrequests:nodeclient`:
```yaml
# Approve all CSRs for the group "system:bootstrappers"
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: auto-approve-csrs-for-group
subjects:
- kind: Group
name: system:bootstrappers
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
apiGroup: rbac.authorization.k8s.io
```
To enable the kubelet to renew its own client certificate, create a `ClusterRoleBinding` that binds
the group in which the fully functioning node is a member `system:nodes` to the `ClusterRole` that
grants it permission, `system:certificates.k8s.io:certificatesigningrequests:selfnodeclient`:
```yaml
# Approve renewal CSRs for the group "system:nodes"
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: auto-approve-renewals-for-nodes
subjects:
- kind: Group
name: system:nodes
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
apiGroup: rbac.authorization.k8s.io
```
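As with the bootstrap binding earlier, both approval bindings can also be sketched as `kubectl` one-liners instead of manifests:

```shell
kubectl create clusterrolebinding auto-approve-csrs-for-group \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient \
  --group=system:bootstrappers
kubectl create clusterrolebinding auto-approve-renewals-for-nodes \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient \
  --group=system:nodes
```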
The `csrapproving` controller ships as part of
[kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) and is enabled
by default. The controller uses the
[`SubjectAccessReview` API](/docs/reference/access-authn-authz/authorization/#checking-api-access) to
determine if a given user is authorized to request a CSR, then approves based on
the authorization outcome. To prevent conflicts with other approvers, the
built-in approver doesn't explicitly deny CSRs. It only ignores unauthorized
requests. The controller also prunes expired certificates as part of garbage
collection.
## kubelet configuration
Finally, with the control plane nodes properly set up and all of the necessary
authentication and authorization in place, we can configure the kubelet.
The kubelet requires the following configuration to bootstrap:
* A path to store the key and certificate it generates (optional, can use default)
* A path to a `kubeconfig` file that does not yet exist; it will place the bootstrapped config file here
* A path to a bootstrap `kubeconfig` file to provide the URL for the server and bootstrap credentials, e.g. a bootstrap token
* Optional: instructions to rotate certificates
The bootstrap `kubeconfig` should be in a path available to the kubelet, for example `/var/lib/kubelet/bootstrap-kubeconfig`.
Its format is identical to a normal `kubeconfig` file. A sample file might look as follows:
```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
certificate-authority: /var/lib/kubernetes/ca.pem
server: https://my.server.example.com:6443
name: bootstrap
contexts:
- context:
cluster: bootstrap
user: kubelet-bootstrap
name: bootstrap
current-context: bootstrap
preferences: {}
users:
- name: kubelet-bootstrap
user:
token: 07401b.f395accd246ae52d
```
The important elements to note are:
* `certificate-authority`: path to a CA file, used to validate the server certificate presented by kube-apiserver
* `server`: URL to kube-apiserver
* `token`: the token to use
The format of the token does not matter, as long as it matches what kube-apiserver expects. In the above example, we used a bootstrap token.
As stated earlier, _any_ valid authentication method can be used, not only tokens.
Because the bootstrap `kubeconfig` _is_ a standard `kubeconfig`, you can use `kubectl` to generate it. To create the above example file:
```
kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-cluster bootstrap --server='https://my.server.example.com:6443' --certificate-authority=/var/lib/kubernetes/ca.pem
kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-credentials kubelet-bootstrap --token=07401b.f395accd246ae52d
kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-context bootstrap --user=kubelet-bootstrap --cluster=bootstrap
kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig use-context bootstrap
```
To indicate to the kubelet to use the bootstrap `kubeconfig`, use the following kubelet flag:
```
--bootstrap-kubeconfig="/var/lib/kubelet/bootstrap-kubeconfig" --kubeconfig="/var/lib/kubelet/kubeconfig"
```
When starting the kubelet, if the file specified via `--kubeconfig` does not
exist, the bootstrap kubeconfig specified via `--bootstrap-kubeconfig` is used
to request a client certificate from the API server. On approval of the
certificate request and receipt back by the kubelet, a kubeconfig file
referencing the generated key and obtained certificate is written to the path
specified by `--kubeconfig`. The certificate and key file will be placed in the
directory specified by `--cert-dir`.
### Client and serving certificates
All of the above relate to kubelet _client_ certificates, specifically, the certificates a kubelet
uses to authenticate to kube-apiserver.
A kubelet also can use _serving_ certificates. The kubelet itself exposes an https endpoint for certain features.
To secure these, the kubelet can do one of:
* use provided key and certificate, via the `--tls-private-key-file` and `--tls-cert-file` flags
* create self-signed key and certificate, if a key and certificate are not provided
* request serving certificates from the cluster server, via the CSR API
The client certificate provided by TLS bootstrapping is signed, by default, for `client auth` only, and thus cannot
be used as a serving certificate (`server auth`).
However, you _can_ enable a serving certificate, at least partially, via certificate rotation.
### Certificate rotation
The kubelet in Kubernetes v1.8 and later implements features for enabling
rotation of its client and/or serving certificates. Note that rotation of the serving
certificate is a __beta__ feature and requires the `RotateKubeletServerCertificate`
feature flag on the kubelet (enabled by default).
You can configure the kubelet to rotate its client certificates by creating new CSRs
as its existing credentials expire. To enable this feature, use the `rotateCertificates`
field of [kubelet configuration file](/docs/tasks/administer-cluster/kubelet-config-file/)
or pass the following command line argument to the kubelet (deprecated):
```
--rotate-certificates
```
Enabling `RotateKubeletServerCertificate` causes the kubelet **both** to request a serving
certificate after bootstrapping its client credentials **and** to rotate that
certificate. To enable this behavior, use the field `serverTLSBootstrap` of
the [kubelet configuration file](/docs/tasks/administer-cluster/kubelet-config-file/)
or pass the following command line argument to the kubelet (deprecated):
```
--rotate-server-certificates
```
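Putting both options together, a minimal sketch of the relevant kubelet configuration file fields (assuming the `kubelet.config.k8s.io/v1beta1` configuration API) looks like:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# create new CSRs as the client certificate approaches expiry
rotateCertificates: true
# request (and rotate) a serving certificate via the CSR API
serverTLSBootstrap: true
```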
The CSR approving controllers implemented in core Kubernetes do not
approve node _serving_ certificates for
[security reasons](https://github.com/kubernetes/community/pull/1982). To use
`RotateKubeletServerCertificate` operators need to run a custom approving
controller, or manually approve the serving certificate requests.
A deployment-specific approval process for kubelet serving certificates should typically only approve CSRs which:
1. are requested by nodes (ensure the `spec.username` field is of the form
`system:node:<nodeName>` and `spec.groups` contains `system:nodes`)
1. request usages for a serving certificate (ensure `spec.usages` contains `server auth`,
optionally contains `digital signature` and `key encipherment`, and contains no other usages)
1. only have IP and DNS subjectAltNames that belong to the requesting node,
and have no URI and Email subjectAltNames (parse the x509 Certificate Signing Request
in `spec.request` to verify `subjectAltNames`)
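As a sketch of how an approver might check these fields by hand (`<csr-name>` is a placeholder):

```shell
# Show who requested the CSR, the groups, and the requested usages
kubectl get csr <csr-name> -o jsonpath='{.spec.username}{"\n"}{.spec.groups}{"\n"}{.spec.usages}{"\n"}'
# Decode the embedded x509 CSR and inspect its subjectAltNames
kubectl get csr <csr-name> -o jsonpath='{.spec.request}' | base64 --decode | openssl req -noout -text
```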
## Other authenticating components
All of TLS bootstrapping described in this document relates to the kubelet. However,
other components may need to communicate directly with kube-apiserver. Notable is kube-proxy, which
is part of the Kubernetes node components and runs on every node; such components may also include monitoring or networking agents.
Like the kubelet, these other components also require a method of authenticating to kube-apiserver.
You have several options for generating these credentials:
* The old way: Create and distribute certificates the same way you did for kubelet before TLS bootstrapping
* DaemonSet: Since the kubelet itself is loaded on each node, and is sufficient to start base services,
you can run kube-proxy and other node-specific services not as a standalone process, but rather as a
daemonset in the `kube-system` namespace. Since it will be in-cluster, you can give it a proper service
account with appropriate permissions to perform its activities. This may be the simplest way to configure
such services.
## kubectl approval
CSRs can be approved outside of the approval flows built into the controller
manager.
The signing controller does not immediately sign all certificate requests.
Instead, it waits until they have been flagged with an "Approved" status by an
appropriately-privileged user. This flow is intended to allow for automated
approval handled by an external approval controller or the approval controller
implemented in the core controller-manager. However, cluster administrators can
also manually approve certificate requests using kubectl. An administrator can
list CSRs with `kubectl get csr` and describe one in detail with
`kubectl describe csr <name>`. An administrator can approve or deny a CSR with
`kubectl certificate approve <name>` and `kubectl certificate deny <name>`.
---
title: Kubernetes API Concepts
reviewers:
- smarterclayton
- lavalamp
- liggitt
content_type: concept
weight: 20
---
<!-- overview -->
The Kubernetes API is a resource-based (RESTful) programmatic interface
provided via HTTP. It supports retrieving, creating, updating, and deleting
primary resources via the standard HTTP verbs (POST, PUT, PATCH, DELETE,
GET).
For some resources, the API includes additional subresources that allow
fine-grained authorization (such as separate views for Pod details and
log retrievals), and can accept and serve those resources in different
representations for convenience or efficiency.
Kubernetes supports efficient change notifications on resources via
_watches_.
Kubernetes also provides consistent list operations so that API clients can
effectively cache, track, and synchronize the state of resources.
You can view the [API reference](/docs/reference/kubernetes-api/) online,
or read on to learn about the API in general.
<!-- body -->
## Kubernetes API terminology {#standard-api-terminology}
Kubernetes generally leverages common RESTful terminology to describe the
API concepts:
* A *resource type* is the name used in the URL (`pods`, `namespaces`, `services`)
* All resource types have a concrete representation (their object schema) which is called a *kind*
* A list of instances of a resource type is known as a *collection*
* A single instance of a resource type is called a *resource*, and also usually represents an *object*
* For some resource types, the API includes one or more *sub-resources*, which are represented as URI paths below the resource
Most Kubernetes API resource types are
objects – they represent a concrete instance of a concept on the cluster, like a
pod or namespace. A smaller number of API resource types are *virtual* in
that they often represent operations on objects, rather than objects, such
as a permission check
(use a POST with a JSON-encoded body of `SubjectAccessReview` to the
`subjectaccessreviews` resource), or the `eviction` sub-resource of a Pod
(used to trigger
[API-initiated eviction](/docs/concepts/scheduling-eviction/api-eviction/)).
### Object names
All objects you can create via the API have a unique object
name to allow idempotent creation and
retrieval, except that virtual resource types may not have unique names if they are
not retrievable, or do not rely on idempotency.
Within a namespace, only one object
of a given kind can have a given name at a time. However, if you delete the object,
you can make a new object with the same name. Some objects are not namespaced (for
example: Nodes), and so their names must be unique across the whole cluster.
### API verbs
Almost all object resource types support the standard HTTP verbs - GET, POST, PUT, PATCH,
and DELETE. Kubernetes also uses its own verbs, which are often written in lowercase to distinguish
them from HTTP verbs.
Kubernetes uses the term **list** to describe returning a [collection](#collections) of
resources to distinguish from retrieving a single resource which is usually called
a **get**. If you sent an HTTP GET request with the `?watch` query parameter,
Kubernetes calls this a **watch** and not a **get** (see
[Efficient detection of changes](#efficient-detection-of-changes) for more details).
For PUT requests, Kubernetes internally classifies these as either **create** or **update**
based on the state of the existing object. An **update** is different from a **patch**; the
HTTP verb for a **patch** is PATCH.
## Resource URIs
All resource types are either scoped by the cluster (`/apis/GROUP/VERSION/*`) or to a
namespace (`/apis/GROUP/VERSION/namespaces/NAMESPACE/*`). A namespace-scoped resource
type will be deleted when its namespace is deleted and access to that resource type
is controlled by authorization checks on the namespace scope.
Note: core resources use `/api` instead of `/apis` and omit the GROUP path segment.
Examples:
* `/api/v1/namespaces`
* `/api/v1/pods`
* `/api/v1/namespaces/my-namespace/pods`
* `/apis/apps/v1/deployments`
* `/apis/apps/v1/namespaces/my-namespace/deployments`
* `/apis/apps/v1/namespaces/my-namespace/deployments/my-deployment`
You can also access collections of resources (for example: listing all Nodes).
The following paths are used to retrieve collections and resources:
* Cluster-scoped resources:
* `GET /apis/GROUP/VERSION/RESOURCETYPE` - return the collection of resources of the resource type
* `GET /apis/GROUP/VERSION/RESOURCETYPE/NAME` - return the resource with NAME under the resource type
* Namespace-scoped resources:
* `GET /apis/GROUP/VERSION/RESOURCETYPE` - return the collection of all
instances of the resource type across all namespaces
* `GET /apis/GROUP/VERSION/namespaces/NAMESPACE/RESOURCETYPE` - return
collection of all instances of the resource type in NAMESPACE
* `GET /apis/GROUP/VERSION/namespaces/NAMESPACE/RESOURCETYPE/NAME` -
return the instance of the resource type with NAME in NAMESPACE
Since a namespace is a cluster-scoped resource type, you can retrieve the list
(“collection”) of all namespaces with `GET /api/v1/namespaces` and details about
a particular namespace with `GET /api/v1/namespaces/NAME`.
* Cluster-scoped subresource: `GET /apis/GROUP/VERSION/RESOURCETYPE/NAME/SUBRESOURCE`
* Namespace-scoped subresource: `GET /apis/GROUP/VERSION/namespaces/NAMESPACE/RESOURCETYPE/NAME/SUBRESOURCE`
The verbs supported for each subresource will differ depending on the object -
see the [API reference](/docs/reference/kubernetes-api/) for more information. It
is not possible to access sub-resources across multiple resources - generally a new
virtual resource type would be used if that becomes necessary.
## HTTP media types {#alternate-representations-of-resources}
Over HTTP, Kubernetes supports JSON and Protobuf wire encodings.
By default, Kubernetes returns objects in [JSON serialization](#json-encoding), using the
`application/json` media type. Although JSON is the default, clients may request a response in
YAML, or use the more efficient binary [Protobuf representation](#protobuf-encoding) for better performance at scale.
The Kubernetes API implements standard HTTP content type negotiation: passing an
`Accept` header with a `GET` call will request that the server tries to return
a response in your preferred media type. If you want to send an object in Protobuf to
the server for a `PUT` or `POST` request, you must set the `Content-Type` request header
appropriately.
If you request an available media type, the API server returns a response with a suitable
`Content-Type`; if none of the media types you request are supported, the API server returns
a `406 Not acceptable` error message.
All built-in resource types support the `application/json` media type.
### JSON resource encoding {#json-encoding}
The Kubernetes API defaults to using [JSON](https://www.json.org/json-en.html) for encoding
HTTP message bodies.
For example:
1. List all of the pods on a cluster, without specifying a preferred format
```
GET /api/v1/pods
```
```
200 OK
Content-Type: application/json
… JSON encoded collection of Pods (PodList object)
```
1. Create a pod by sending JSON to the server, requesting a JSON response.
```
POST /api/v1/namespaces/test/pods
Content-Type: application/json
Accept: application/json
… JSON encoded Pod object
```
```
200 OK
Content-Type: application/json
{
"kind": "Pod",
"apiVersion": "v1",
…
}
```
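If you want to try requests like these yourself, one simple approach (a sketch; 8001 is the default `kubectl proxy` port) is to run a local authenticated proxy and use `curl`:

```shell
# Start a local proxy that handles authentication to the API server
kubectl proxy --port=8001 &
# List pods in the "test" namespace as JSON
curl -sS -H 'Accept: application/json' 'http://localhost:8001/api/v1/namespaces/test/pods'
```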
### YAML resource encoding {#yaml-encoding}
Kubernetes also supports the [`application/yaml`](https://www.rfc-editor.org/rfc/rfc9512.html)
media type for both requests and responses. [`YAML`](https://yaml.org/)
can be used for defining Kubernetes manifests and API interactions.
For example:
1. List all of the pods on a cluster in YAML format
```
GET /api/v1/pods
Accept: application/yaml
```
```
200 OK
Content-Type: application/yaml
… YAML encoded collection of Pods (PodList object)
```
1. Create a pod by sending YAML-encoded data to the server, requesting a YAML response:
```
POST /api/v1/namespaces/test/pods
Content-Type: application/yaml
Accept: application/yaml
… YAML encoded Pod object
```
```
200 OK
Content-Type: application/yaml
apiVersion: v1
kind: Pod
metadata:
name: my-pod
…
```
### Kubernetes Protobuf encoding {#protobuf-encoding}
Kubernetes uses an envelope wrapper to encode [Protobuf](https://protobuf.dev/) responses.
That wrapper starts with a 4 byte magic number to help identify content on disk or in etcd as Protobuf
(as opposed to JSON). The 4 byte magic number data is followed by a Protobuf encoded wrapper message, which
describes the encoding and type of the underlying object. Within the Protobuf wrapper message,
the inner object data is recorded using the `raw` field of Unknown (see the [IDL](#protobuf-encoding-idl)
for more detail).
For example:
1. List all of the pods on a cluster in Protobuf format.
```
GET /api/v1/pods
Accept: application/vnd.kubernetes.protobuf
```
```
200 OK
Content-Type: application/vnd.kubernetes.protobuf
   … Protobuf encoded collection of Pods (PodList object)
```
1. Create a pod by sending Protobuf encoded data to the server, but request a response
in JSON.
```
POST /api/v1/namespaces/test/pods
Content-Type: application/vnd.kubernetes.protobuf
Accept: application/json
… binary encoded Pod object
```
```
200 OK
Content-Type: application/json
{
"kind": "Pod",
"apiVersion": "v1",
...
}
```
You can use both techniques together and use Kubernetes' Protobuf encoding to interact with any API that
supports it, for both reads and writes. Only some API resource types are [compatible](#protobuf-encoding-compatibility)
with Protobuf.
<a id="protobuf-encoding-idl" />
The wrapper format is:
```
A four byte magic number prefix:
Bytes 0-3: "k8s\x00" [0x6b, 0x38, 0x73, 0x00]
An encoded Protobuf message with the following IDL:
message Unknown {
// typeMeta should have the string values for "kind" and "apiVersion" as set on the JSON object
optional TypeMeta typeMeta = 1;
// raw will hold the complete serialized object in protobuf. See the protobuf definitions in the client libraries for a given kind.
optional bytes raw = 2;
// contentEncoding is encoding used for the raw data. Unspecified means no encoding.
optional string contentEncoding = 3;
// contentType is the serialization method used to serialize 'raw'. Unspecified means application/vnd.kubernetes.protobuf and is usually
// omitted.
optional string contentType = 4;
}
message TypeMeta {
// apiVersion is the group/version for this type
optional string apiVersion = 1;
// kind is the name of the object schema. A protobuf definition should exist for this object.
optional string kind = 2;
}
```
Clients that receive a response in `application/vnd.kubernetes.protobuf` that does
not match the expected prefix should reject the response, as future versions may need
to alter the serialization format in an incompatible way and will do so by changing
the prefix.
#### Compatibility with Kubernetes Protobuf {#protobuf-encoding-compatibility}
Not all API resource types support Kubernetes' Protobuf encoding; specifically, Protobuf isn't
available for resources that are defined as CustomResourceDefinitions
or are served via the aggregation layer.
As a client, if you might need to work with extension types you should specify multiple
content types in the request `Accept` header to support fallback to JSON.
For example:
```
Accept: application/vnd.kubernetes.protobuf, application/json
```
## Efficient detection of changes
The Kubernetes API allows clients to make an initial request for an object or a
collection, and then to track changes since that initial request: a **watch**. Clients
can send a **list** or a **get** and then make a follow-up **watch** request.
To make this change tracking possible, every Kubernetes object has a `resourceVersion`
field representing the version of that resource as stored in the underlying persistence
layer. When retrieving a collection of resources (either namespace or cluster scoped),
the response from the API server contains a `resourceVersion` value. The client can
use that `resourceVersion` to initiate a **watch** against the API server.
When you send a **watch** request, the API server responds with a stream of
changes. These changes itemize the outcome of operations (such as **create**, **delete**,
and **update**) that occurred after the `resourceVersion` you specified as a parameter
to the **watch** request. The overall **watch** mechanism allows a client to fetch
the current state and then subscribe to subsequent changes, without missing any events.
If a client **watch** is disconnected then that client can start a new **watch** from
the last returned `resourceVersion`; the client could also perform a fresh **get** /
**list** request and begin again. See [Resource Version Semantics](#resource-versions)
for more detail.
For example:
1. List all of the pods in a given namespace.
```
GET /api/v1/namespaces/test/pods
---
200 OK
Content-Type: application/json
{
"kind": "PodList",
"apiVersion": "v1",
"metadata": {"resourceVersion":"10245"},
"items": [...]
}
```
2. Starting from resource version 10245, receive notifications of any API operations
(such as **create**, **delete**, **patch** or **update**) that affect Pods in the
_test_ namespace. Each change notification is a JSON document. The HTTP response body
(served as `application/json`) consists of a series of JSON documents.
```
GET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245
---
200 OK
Transfer-Encoding: chunked
Content-Type: application/json
{
"type": "ADDED",
"object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "10596", ...}, ...}
}
{
"type": "MODIFIED",
"object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "11020", ...}, ...}
}
...
```
A given Kubernetes server will only preserve a historical record of changes for a
limited time. Clusters using etcd 3 preserve changes in the last 5 minutes by default.
When the requested **watch** operations fail because the historical version of that
resource is not available, clients must handle the case by recognizing the status code
`410 Gone`, clearing their local cache, performing a new **get** or **list** operation,
and starting the **watch** from the `resourceVersion` that was returned.
For subscribing to collections, Kubernetes client libraries typically offer some form
of standard tool for this **list**-then-**watch** logic. (In the Go client library,
this is called a `Reflector` and is located in the `k8s.io/client-go/tools/cache` package.)
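For quick experimentation outside a client library, `kubectl` exposes the same list-then-watch behavior (a sketch):

```shell
# Print the current Pods in the "test" namespace, then keep streaming changes as they happen
kubectl get pods --namespace=test --watch
```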
### Watch bookmarks {#watch-bookmarks}
To mitigate the impact of the short history window, the Kubernetes API provides a watch
event named `BOOKMARK`. It is a special kind of event to mark that all changes up
to a given `resourceVersion` the client is requesting have already been sent. The
document representing the `BOOKMARK` event is of the type requested by the request,
but only includes a `.metadata.resourceVersion` field. For example:
```
GET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245&allowWatchBookmarks=true
---
200 OK
Transfer-Encoding: chunked
Content-Type: application/json
{
"type": "ADDED",
"object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "10596", ...}, ...}
}
...
{
"type": "BOOKMARK",
"object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "12746"} }
}
```
As a client, you can request `BOOKMARK` events by setting the
`allowWatchBookmarks=true` query parameter to a **watch** request, but you shouldn't
assume bookmarks are returned at any specific interval, nor can clients assume that
the API server will send any `BOOKMARK` event even when requested.
## Streaming lists
On large clusters, retrieving the collection of some resource types may result in
a significant increase of resource usage (primarily RAM) on the control plane.
In order to alleviate its impact and simplify the user experience of the **list** + **watch**
pattern, Kubernetes v1.27 introduces as an alpha feature the support
for requesting the initial state (previously requested via the **list** request) as part of
the **watch** request.
Provided that the `WatchList` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
is enabled, this can be achieved by specifying `sendInitialEvents=true` as query string parameter
in a **watch** request. If set, the API server starts the watch stream with synthetic init
events (of type `ADDED`) to build the whole state of all existing objects followed by a
[`BOOKMARK` event](/docs/reference/using-api/api-concepts/#watch-bookmarks)
(if requested via `allowWatchBookmarks=true` option). The bookmark event includes the resource version
to which it is synced. After sending the bookmark event, the API server continues as for any other **watch**
request.
When you set `sendInitialEvents=true` in the query string, Kubernetes also requires that you set
`resourceVersionMatch` to `NotOlderThan` value.
If you provide `resourceVersion` in the query string without providing a value or don't provide
it at all, this is interpreted as a request for _consistent read_;
the bookmark event is sent when the state is synced at least to the moment of a consistent read
from when the request started to be processed. If you specify `resourceVersion` (in the query string),
the bookmark event is sent when the state is synced at least to the provided resource version.
### Example {#example-streaming-lists}
An example: you want to watch a collection of Pods. For that collection, the current resource version
is 10245 and there are two pods: `foo` and `bar`. Then sending the following request (explicitly requesting
_consistent read_ by setting empty resource version using `resourceVersion=`) could result
in the following sequence of events:
```
GET /api/v1/namespaces/test/pods?watch=1&sendInitialEvents=true&allowWatchBookmarks=true&resourceVersion=&resourceVersionMatch=NotOlderThan
---
200 OK
Transfer-Encoding: chunked
Content-Type: application/json
{
"type": "ADDED",
"object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "8467", "name": "foo"}, ...}
}
{
"type": "ADDED",
"object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "5726", "name": "bar"}, ...}
}
{
"type": "BOOKMARK",
"object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "10245"} }
}
...
<followed by regular watch stream starting from resourceVersion="10245">
```
## Response compression
`APIResponseCompression` is an option that allows the API server to compress the responses for **get**
and **list** requests, reducing the network bandwidth and improving the performance of large-scale clusters.
It is enabled by default since Kubernetes 1.16 and it can be disabled by including
`APIResponseCompression=false` in the `--feature-gates` flag on the API server.
API response compression can significantly reduce the size of the response, especially for large resources or
[collections](/docs/reference/using-api/api-concepts/#collections).
For example, a **list** request for pods can return hundreds of kilobytes or even megabytes of data,
depending on the number of pods and their attributes. By compressing the response, the network bandwidth
can be saved and the latency can be reduced.
To verify if `APIResponseCompression` is working, you can send a **get** or **list** request to the
API server with an `Accept-Encoding` header, and check the response size and headers. For example:
```
GET /api/v1/pods
Accept-Encoding: gzip
---
200 OK
Content-Type: application/json
content-encoding: gzip
...
```
The `content-encoding` header indicates that the response is compressed with `gzip`.
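A sketch of such a check using `curl` (the server address, CA file, and `TOKEN` are placeholders for your environment; whether compression actually occurs also depends on the response being large enough):

```shell
# Request a potentially large collection and print only the response headers
curl -sS --cacert /path/to/ca.pem \
  -H "Authorization: Bearer ${TOKEN}" \
  -H 'Accept-Encoding: gzip' \
  -D - -o /dev/null \
  'https://my.server.example.com:6443/api/v1/pods'
```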
## Retrieving large results sets in chunks
On large clusters, retrieving the collection of some resource types may result in
very large responses that can impact the server and client. For instance, a cluster
may have tens of thousands of Pods, each of which is equivalent to roughly 2 KiB of
encoded JSON. Retrieving all pods across all namespaces may result in a very large
response (10-20MB) and consume a large amount of server resources.
The Kubernetes API server supports the ability to break a single large collection request
into many smaller chunks while preserving the consistency of the total request. Each
chunk can be returned sequentially which reduces both the total size of the request and
allows user-oriented clients to display results incrementally to improve responsiveness.
You can request that the API server handles a **list** by serving a single collection
in pages (which Kubernetes calls _chunks_). To retrieve a single collection in
chunks, two query parameters `limit` and `continue` are supported on requests against
collections, and a response field `continue` is returned from all **list** operations
in the collection's `metadata` field. A client should specify the maximum results they
wish to receive in each chunk with `limit` and the server will return up to `limit`
resources in the result and include a `continue` value if there are more resources
in the collection.
As an API client, you can then pass this `continue` value to the API server on the
next request, to instruct the server to return the next page (_chunk_) of results. By
continuing until the server returns an empty `continue` value, you can retrieve the
entire collection.
Like a **watch** operation, a `continue` token will expire after a short amount
of time (by default 5 minutes) and return a `410 Gone` if more results cannot be
returned. In this case, the client will need to start from the beginning or omit the
`limit` parameter.
For example, if there are 1,253 pods on the cluster and you want to receive chunks
of 500 pods at a time, request those chunks as follows:
1. List all of the pods on a cluster, retrieving up to 500 pods each time.
```
GET /api/v1/pods?limit=500
---
200 OK
Content-Type: application/json
{
"kind": "PodList",
"apiVersion": "v1",
"metadata": {
"resourceVersion":"10245",
"continue": "ENCODED_CONTINUE_TOKEN",
"remainingItemCount": 753,
...
},
"items": [...] // returns pods 1-500
}
```
1. Continue the previous call, retrieving the next set of 500 pods.
```
GET /api/v1/pods?limit=500&continue=ENCODED_CONTINUE_TOKEN
---
200 OK
Content-Type: application/json
{
"kind": "PodList",
"apiVersion": "v1",
"metadata": {
"resourceVersion":"10245",
"continue": "ENCODED_CONTINUE_TOKEN_2",
"remainingItemCount": 253,
...
},
"items": [...] // returns pods 501-1000
}
```
1. Continue the previous call, retrieving the last 253 pods.
```
GET /api/v1/pods?limit=500&continue=ENCODED_CONTINUE_TOKEN_2
---
200 OK
Content-Type: application/json
{
"kind": "PodList",
"apiVersion": "v1",
"metadata": {
"resourceVersion":"10245",
"continue": "", // continue token is empty because we have reached the end of the list
...
},
"items": [...] // returns pods 1001-1253
}
```
Notice that the `resourceVersion` of the collection remains constant across each request,
indicating the server is showing you a consistent snapshot of the pods. Pods that
are created, updated, or deleted after version `10245` would not be shown unless
you make a separate **list** request without the `continue` token. This allows you
to break large requests into smaller chunks and then perform a **watch** operation
on the full set without missing any updates.
`remainingItemCount` is the number of subsequent items in the collection that are not
included in this response. If the **list** request contained label or field selectors,
then the number of
remaining items is unknown and the API server does not include a `remainingItemCount`
field in its response.
If the **list** is complete (either because it is not chunking, or because this is the
last chunk), then there are no more remaining items and the API server does not include a
`remainingItemCount` field in its response. The intended use of the `remainingItemCount`
is estimating the size of a collection.
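Clients can also rely on this mechanism indirectly; for example, `kubectl get` pages through large collections with its `--chunk-size` flag (500 is already the default and is shown explicitly here only as a sketch):

```shell
kubectl get pods --all-namespaces --chunk-size=500
```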
## Collections
In Kubernetes terminology, the response you get from a **list** is
a _collection_. However, Kubernetes defines concrete kinds for
collections of different types of resource. Collections have a kind
named for the resource kind, with `List` appended.
When you query the API for a particular type, all items returned by that query are
of that type. For example, when you **list** Services, the collection response
has `kind` set to
[`ServiceList`](/docs/reference/kubernetes-api/service-resources/service-v1/#ServiceList);
each item in that collection represents a single Service. For example:
```
GET /api/v1/services
```
```yaml
{
"kind": "ServiceList",
"apiVersion": "v1",
"metadata": {
"resourceVersion": "2947301"
},
"items": [
{
"metadata": {
"name": "kubernetes",
"namespace": "default",
...
"metadata": {
"name": "kube-dns",
"namespace": "kube-system",
...
```
There are dozens of collection types (such as `PodList`, `ServiceList`,
and `NodeList`) defined in the Kubernetes API.
You can get more information about each collection type from the
[Kubernetes API](/docs/reference/kubernetes-api/) documentation.
Some tools, such as `kubectl`, represent the Kubernetes collection
mechanism slightly differently from the Kubernetes API itself.
Because the output of `kubectl` might include the response from
multiple **list** operations at the API level, `kubectl` represents
a list of items using `kind: List`. For example:
```shell
kubectl get services -A -o yaml
```
```yaml
apiVersion: v1
kind: List
metadata:
resourceVersion: ""
selfLink: ""
items:
- apiVersion: v1
kind: Service
metadata:
creationTimestamp: "2021-06-03T14:54:12Z"
labels:
component: apiserver
provider: kubernetes
name: kubernetes
namespace: default
...
- apiVersion: v1
kind: Service
metadata:
annotations:
prometheus.io/port: "9153"
prometheus.io/scrape: "true"
creationTimestamp: "2021-06-03T14:54:14Z"
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: CoreDNS
name: kube-dns
namespace: kube-system
```
Keep in mind that the Kubernetes API does not have a `kind` named `List`.
`kind: List` is a client-side, internal implementation detail for processing
collections that might be of different kinds of object. Avoid depending on
`kind: List` in automation or other code.
## Receiving resources as Tables
When you run `kubectl get`, the default output format is a simple tabular
representation of one or more instances of a particular resource type. In the past,
clients were required to reproduce the tabular and describe output implemented in
`kubectl` to perform simple lists of objects.
A few limitations of that approach include non-trivial logic when dealing with
certain objects. Additionally, types provided by API aggregation or third party
resources are not known at compile time. This means that generic implementations
had to be in place for types unrecognized by a client.
In order to avoid potential limitations as described above, clients may request
the Table representation of objects, delegating specific details of printing to the
server. The Kubernetes API implements standard HTTP content type negotiation: passing
an `Accept` header containing a value of `application/json;as=Table;g=meta.k8s.io;v=v1`
with a `GET` call will request that the server return objects in the Table content
type.
For example, list all of the pods on a cluster in the Table format.
```
GET /api/v1/pods
Accept: application/json;as=Table;g=meta.k8s.io;v=v1
---
200 OK
Content-Type: application/json
{
"kind": "Table",
"apiVersion": "meta.k8s.io/v1",
...
"columnDefinitions": [
...
]
}
```
For API resource types that do not have a custom Table definition known to the control
plane, the API server returns a default Table response that consists of the resource's
`name` and `creationTimestamp` fields.
```
GET /apis/crd.example.com/v1alpha1/namespaces/default/resources
---
200 OK
Content-Type: application/json
...
{
"kind": "Table",
"apiVersion": "meta.k8s.io/v1",
...
"columnDefinitions": [
{
"name": "Name",
"type": "string",
...
},
{
"name": "Created At",
"type": "date",
...
}
]
}
```
Not all API resource types support a Table response; for example, a
might not define field-to-table mappings, and an APIService that
[extends the core Kubernetes API](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)
might not serve Table responses at all. If you are implementing a client that
uses the Table information and must work against all resource types, including
extensions, you should make requests that specify multiple content types in the
`Accept` header. For example:
```
Accept: application/json;as=Table;g=meta.k8s.io;v=v1, application/json
```
## Resource deletion
When you **delete** a resource this takes place in two phases.
1. _finalization_
2. removal
```yaml
{
"kind": "ConfigMap",
"apiVersion": "v1",
"metadata": {
"finalizers": ["url.io/neat-finalization", "other-url.io/my-finalizer"],
"deletionTimestamp": nil,
}
}
```
When a client first sends a **delete** to request the removal of a resource, the `.metadata.deletionTimestamp` is set to the current time.
Once the `.metadata.deletionTimestamp` is set, external controllers that act on finalizers
may start performing their cleanup work at any time, in any order.
Order is **not** enforced between finalizers because it would introduce significant
risk of stuck `.metadata.finalizers`.
The `.metadata.finalizers` field is shared: any actor with permission can reorder it.
If the finalizer list were processed in order, then this might lead to a situation
in which the component responsible for the first finalizer in the list is
waiting for some signal (field value, external system, or other) produced by a
component responsible for a finalizer later in the list, resulting in a deadlock.
Without enforced ordering, finalizers are free to order amongst themselves and are
not vulnerable to ordering changes in the list.
Once the last finalizer is removed, the resource is actually removed from etcd.
## Single resource API
The Kubernetes API verbs **get**, **create**, **update**, **patch**,
**delete** and **proxy** support single resources only.
These verbs with single resource support have no support for submitting multiple
resources together in an ordered or unordered list or transaction.
When clients (including kubectl) act on a set of resources, the client makes a series
of single-resource API requests, then aggregates the responses if needed.
By contrast, the Kubernetes API verbs **list** and **watch** allow getting multiple
resources, and **deletecollection** allows deleting multiple resources.
## Field validation
Kubernetes always validates the type of fields. For example, if a field in the
API is defined as a number, you cannot set the field to a text value. If a field
is defined as an array of strings, you can only provide an array. Some fields
allow you to omit them, other fields are required. Omitting a required field
from an API request is an error.
If you make a request with an extra field, one that the cluster's control plane
does not recognize, then the behavior of the API server is more complicated.
By default, the API server drops fields that it does not recognize
from an input that it receives (for example, the JSON body of a `PUT` request).
There are two situations where the API server drops fields that you supplied in
an HTTP request.
These situations are:
1. The field is unrecognized because it is not in the resource's OpenAPI schema. (One
exception to this is for that explicitly choose not to prune unknown
fields via `x-kubernetes-preserve-unknown-fields`).
1. The field is duplicated in the object.
### Validation for unrecognized or duplicate fields {#setting-the-field-validation-level}
From 1.25 onward, unrecognized or duplicate fields in an object are detected via
validation on the server when you use HTTP verbs that can submit data (`POST`, `PUT`, and `PATCH`). Possible levels of
validation are `Ignore`, `Warn` (default), and `Strict`.
`Ignore`
: The API server succeeds in handling the request as it would without the erroneous fields
being set, dropping all unknown and duplicate fields and giving no indication it
has done so.
`Warn`
: (Default) The API server succeeds in handling the request, and reports a
warning to the client. The warning is sent using the `Warning:` response header,
adding one warning item for each unknown or duplicate field. For more
information about warnings and the Kubernetes API, see the blog article
[Warning: Helpful Warnings Ahead](/blog/2020/09/03/warnings/).
`Strict`
: The API server rejects the request with a 400 Bad Request error when it
detects any unknown or duplicate fields. The response message from the API
server specifies all the unknown or duplicate fields that the API server has
detected.
The field validation level is set by the `fieldValidation` query parameter.
If you submit a request that specifies an unrecognized field, and that is also invalid for
a different reason (for example, the request provides a string value where the API expects
an integer for a known field), then the API server responds with a 400 Bad Request error, but will
not provide any information on unknown or duplicate fields (only which fatal
error it encountered first).
You always receive an error response in this case, no matter what field validation level you requested.
Tools that submit requests to the server (such as `kubectl`), might set their own
defaults that are different from the `Warn` validation level that the API server uses
by default.
The `kubectl` tool uses the `--validate` flag to set the level of field
validation. It accepts the values `ignore`, `warn`, and `strict` while
also accepting the values `true` (equivalent to `strict`) and `false`
(equivalent to `ignore`). The default validation setting for kubectl is
`--validate=true`, which means strict server-side field validation.
When kubectl cannot connect to an API server with field validation (API servers
prior to Kubernetes 1.27), it will fall back to using client-side validation.
Client-side validation will be removed entirely in a future version of kubectl.
Prior to Kubernetes 1.25 `kubectl --validate` was used to toggle client-side validation on or off as
a boolean flag.
## Dry-run
When you use HTTP verbs that can modify resources (`POST`, `PUT`, `PATCH`, and
`DELETE`), you can submit your request in a _dry run_ mode. Dry run mode helps to
evaluate a request through the typical request stages (admission chain, validation,
merge conflicts) up until persisting objects to storage. The response body for the
request is as close as possible to a non-dry-run response. Kubernetes guarantees that
dry-run requests will not be persisted in storage or have any other side effects.
### Make a dry-run request
Dry-run is triggered by setting the `dryRun` query parameter. This parameter is a
string, working as an enum, and the only accepted values are:
[no value set]
: Allow side effects. You request this with a query string such as `?dryRun`
or `?dryRun&pretty=true`. The response is the final object that would have been
persisted, or an error if the request could not be fulfilled.
`All`
: Every stage runs as normal, except for the final storage stage where side effects
are prevented.
When you set `?dryRun=All`, any relevant
are run, validating admission controllers check the request post-mutation, merge is
performed on `PATCH`, fields are defaulted, and schema validation occurs. The changes
are not persisted to the underlying storage, but the final object which would have
been persisted is still returned to the user, along with the normal status code.
If the non-dry-run version of a request would trigger an admission controller that has
side effects, the request will be failed rather than risk an unwanted side effect. All
built in admission control plugins support dry-run. Additionally, admission webhooks can
declare in their
[configuration object](/docs/reference/generated/kubernetes-api//#validatingwebhook-v1-admissionregistration-k8s-io)
that they do not have side effects, by setting their `sideEffects` field to `None`.
If a webhook actually does have side effects, then the `sideEffects` field should be
set to "NoneOnDryRun". That change is appropriate provided that the webhook is also
be modified to understand the `DryRun` field in AdmissionReview, and to prevent side
effects on any request marked as dry runs.
Here is an example dry-run request that uses `?dryRun=All`:
```
POST /api/v1/namespaces/test/pods?dryRun=All
Content-Type: application/json
Accept: application/json
```
The response would look the same as for non-dry-run request, but the values of some
generated fields may differ.
### Generated values
Some values of an object are typically generated before the object is persisted. It
is important not to rely upon the values of these fields set by a dry-run request,
since these values will likely be different in dry-run mode from when the real
request is made. Some of these fields are:
* `name`: if `generateName` is set, `name` will have a unique random name
* `creationTimestamp` / `deletionTimestamp`: records the time of creation/deletion
* `UID`: [uniquely identifies](/docs/concepts/overview/working-with-objects/names/#uids)
the object and is randomly generated (non-deterministic)
* `resourceVersion`: tracks the persisted version of the object
* Any field set by a mutating admission controller
* For the `Service` resource: Ports or IP addresses that the kube-apiserver assigns to Service objects
### Dry-run authorization
Authorization for dry-run and non-dry-run requests is identical. Thus, to make
a dry-run request, you must be authorized to make the non-dry-run request.
For example, to run a dry-run **patch** for a Deployment, you must be authorized
to perform that **patch**. Here is an example of a rule for Kubernetes
that allows patching
Deployments:
```yaml
rules:
- apiGroups: ["apps"]
resources: ["deployments"]
verbs: ["patch"]
```
See [Authorization Overview](/docs/reference/access-authn-authz/authorization/).
## Updates to existing resources {#patch-and-apply}
Kubernetes provides several ways to update existing objects.
You can read [choosing an update mechanism](#update-mechanism-choose) to
learn about which approach might be best for your use case.
You can overwrite (**update**) an existing resource - for example, a ConfigMap -
using an HTTP PUT. For a PUT request, it is the client's responsibility to specify
the `resourceVersion` (taking this from the object being updated). Kubernetes uses
that `resourceVersion` information so that the API server can detect lost updates
and reject requests made by a client that is out of date with the cluster.
In the event that the resource has changed (the `resourceVersion` the client
provided is stale), the API server returns a `409 Conflict` error response.
Instead of sending a PUT request, the client can send an instruction to the API
server to **patch** an existing resource. A **patch** is typically appropriate
if the change that the client wants to make isn't conditional on the existing data.
Clients that need effective detection of lost updates should consider
making their request conditional on the existing `resourceVersion` (either HTTP PUT or HTTP PATCH),
and then handle any retries that are needed in case there is a conflict.
The Kubernetes API supports four different PATCH operations, determined by their
corresponding HTTP `Content-Type` header:
`application/apply-patch+yaml`
: Server Side Apply YAML (a Kubernetes-specific extension, based on YAML).
All JSON documents are valid YAML, so you can also submit JSON using this
media type. See [Server Side Apply serialization](/docs/reference/using-api/server-side-apply/#serialization)
for more details.
To Kubernetes, this is a **create** operation if the object does not exist,
or a **patch** operation if the object already exists.
`application/json-patch+json`
: JSON Patch, as defined in [RFC6902](https://tools.ietf.org/html/rfc6902).
A JSON patch is a sequence of operations that are executed on the resource;
for example `{"op": "add", "path": "/a/b/c", "value": [ "foo", "bar" ]}`.
To Kubernetes, this is a **patch** operation.
A **patch** using `application/json-patch+json` can include conditions to
validate consistency, allowing the operation to fail if those conditions
are not met (for example, to avoid a lost update).
`application/merge-patch+json`
: JSON Merge Patch, as defined in [RFC7386](https://tools.ietf.org/html/rfc7386).
A JSON Merge Patch is essentially a partial representation of the resource.
The submitted JSON is combined with the current resource to create a new one,
then the new one is saved.
To Kubernetes, this is a **patch** operation.
`application/strategic-merge-patch+json`
: Strategic Merge Patch (a Kubernetes-specific extension based on JSON).
Strategic Merge Patch is a custom implementation of JSON Merge Patch.
You can only use Strategic Merge Patch with built-in APIs, or with aggregated
API servers that have special support for it. You cannot use
`application/strategic-merge-patch+json` with any API
defined using a .
The Kubernetes _server side apply_ mechanism has superseded Strategic Merge
Patch.
Kubernetes' [Server Side Apply](/docs/reference/using-api/server-side-apply/)
feature allows the control plane to track managed fields for newly created objects.
Server Side Apply provides a clear pattern for managing field conflicts,
offers server-side **apply** and **update** operations, and replaces the
client-side functionality of `kubectl apply`.
For Server-Side Apply, Kubernetes treats the request as a **create** if the object
does not yet exist, and a **patch** otherwise. For other requests that use PATCH
at the HTTP level, the logical Kubernetes operation is always **patch**.
See [Server Side Apply](/docs/reference/using-api/server-side-apply/) for more details.
### Choosing an update mechanism {#update-mechanism-choose}
#### HTTP PUT to replace existing resource {#update-mechanism-update}
The **update** (HTTP `PUT`) operation is simple to implement and flexible,
but has drawbacks:
* You need to handle conflicts where the `resourceVersion` of the object changes
between your client reading it and trying to write it back. Kubernetes always
detects the conflict, but you as the client author need to implement retries.
* You might accidentally drop fields if you decode an object locally (for example,
using client-go, you could receive fields that your client does not know how to
handle - and then drop them as part of your update.
* If there's a lot of contention on the object (even on a field, or set of fields,
that you're not trying to edit), you might have trouble sending the update.
The problem is worse for larger objects and for objects with many fields.
#### HTTP PATCH using JSON Patch {#update-mechanism-json-patch}
A **patch** update is helpful, because:
* As you're only sending differences, you have less data to send in the `PATCH`
request.
* You can make changes that rely on existing values, such as copying the
value of a particular field into an annotation.
* Unlike with an **update** (HTTP `PUT`), making your change can happen right away
even if there are frequent changes to unrelated fields): you usually would
not need to retry.
* You might still need to specify the `resourceVersion` (to match an existing object)
if you want to be extra careful to avoid lost updates
* It's still good practice to write in some retry logic in case of errors.
* You can use test conditions to careful craft specific update conditions.
For example, you can increment a counter without reading it if the existing
value matches what you expect. You can do this with no lost update risk,
even if the object has changed in other ways since you last wrote to it.
(If the test condition fails, you can fall back to reading the current value
and then write back the changed number).
However:
* You need more local (client) logic to build the patch; it helps a lot if you have
a library implementation of JSON Patch, or even for making a JSON Patch specifically against Kubernetes.
* As the author of client software, you need to be careful when building the patch
(the HTTP request body) not to drop fields (the order of operations matters).
#### HTTP PATCH using Server-Side Apply {#update-mechanism-server-side-apply}
Server-Side Apply has some clear benefits:
* A single round trip: it rarely requires making a `GET` request first.
* and you can still detect conflicts for unexpected changes
* you have the option to force override a conflict, if appropriate
* Client implementations are easy to make.
* You get an atomic create-or-update operation without extra effort
(similar to `UPSERT` in some SQL dialects).
However:
* Server-Side Apply does not work at all for field changes that depend on a current value of the object.
* You can only apply updates to objects. Some resources in the Kubernetes HTTP API are
not objects (they do not have a `.metadata` field), and Server-Side Apply
is only relevant for Kubernetes objects.
## Resource versions
Resource versions are strings that identify the server's internal version of an
object. Resource versions can be used by clients to determine when objects have
changed, or to express data consistency requirements when getting, listing and
watching resources. Resource versions must be treated as opaque by clients and passed
unmodified back to the server.
You must not assume resource versions are numeric or collatable. API clients may
only compare two resource versions for equality (this means that you must not compare
resource versions for greater-than or less-than relationships).
### `resourceVersion` fields in metadata {#resourceversion-in-metadata}
Clients find resource versions in resources, including the resources from the response
stream for a **watch**, or when using **list** to enumerate resources.
[v1.meta/ObjectMeta](/docs/reference/generated/kubernetes-api//#objectmeta-v1-meta) -
The `metadata.resourceVersion` of a resource instance identifies the resource version the instance was last modified at.
[v1.meta/ListMeta](/docs/reference/generated/kubernetes-api//#listmeta-v1-meta) -
The `metadata.resourceVersion` of a resource collection (the response to a **list**) identifies the
resource version at which the collection was constructed.
### `resourceVersion` parameters in query strings {#the-resourceversion-parameter}
The **get**, **list**, and **watch** operations support the `resourceVersion` parameter.
From version v1.19, Kubernetes API servers also support the `resourceVersionMatch`
parameter on _list_ requests.
The API server interprets the `resourceVersion` parameter differently depending
on the operation you request, and on the value of `resourceVersion`. If you set
`resourceVersionMatch` then this also affects the way matching happens.
### Semantics for **get** and **list**
For **get** and **list**, the semantics of `resourceVersion` are:
**get:**
| resourceVersion unset | resourceVersion="0" | resourceVersion="{value other than 0}" |
|-----------------------|---------------------|----------------------------------------|
| Most Recent | Any | Not older than |
**list:**
From version v1.19, Kubernetes API servers support the `resourceVersionMatch` parameter
on _list_ requests. If you set both `resourceVersion` and `resourceVersionMatch`, the
`resourceVersionMatch` parameter determines how the API server interprets
`resourceVersion`.
You should always set the `resourceVersionMatch` parameter when setting
`resourceVersion` on a **list** request. However, be prepared to handle the case
where the API server that responds is unaware of `resourceVersionMatch`
and ignores it.
Unless you have strong consistency requirements, using `resourceVersionMatch=NotOlderThan` and
a known `resourceVersion` is preferable since it can achieve better performance and scalability
of your cluster than leaving `resourceVersion` and `resourceVersionMatch` unset, which requires
quorum read to be served.
Setting the `resourceVersionMatch` parameter without setting `resourceVersion` is not valid.
This table explains the behavior of **list** requests with various combinations of
`resourceVersion` and `resourceVersionMatch`:
| resourceVersionMatch param | paging params | resourceVersion not set | resourceVersion="0" | resourceVersion="{value other than 0}" |
|----------------------------|---------------|-------------------------|---------------------|----------------------------------------|
| _unset_ | _limit unset_ | Most Recent | Any | Not older than |
| _unset_ | limit=\<n\>, _continue unset_ | Most Recent | Any | Exact |
| _unset_ | limit=\<n\>, continue=\<token\>| Continue Token, Exact | Invalid, treated as Continue Token, Exact | Invalid, HTTP `400 Bad Request` |
| `resourceVersionMatch=Exact` | _limit unset_ | Invalid | Invalid | Exact |
| `resourceVersionMatch=Exact` | limit=\<n\>, _continue unset_ | Invalid | Invalid | Exact |
| `resourceVersionMatch=NotOlderThan` | _limit unset_ | Invalid | Any | Not older than |
| `resourceVersionMatch=NotOlderThan` | limit=\<n\>, _continue unset_ | Invalid | Any | Not older than |
If your cluster's API server does not honor the `resourceVersionMatch` parameter,
the behavior is the same as if you did not set it.
The meaning of the **get** and **list** semantics are:
Any
: Return data at any resource version. The newest available resource version is preferred,
but strong consistency is not required; data at any resource version may be served. It is possible
for the request to return data at a much older resource version that the client has previously
observed, particularly in high availability configurations, due to partitions or stale
caches. Clients that cannot tolerate this should not use this semantic.
Most recent
: Return data at the most recent resource version. The returned data must be
consistent (in detail: served from etcd via a quorum read).
For etcd v3.4.31+ and v3.5.13+ Kubernetes serves “most recent” reads from the _watch cache_:
an internal, in-memory store within the API server that caches and mirrors the state of data
persisted into etcd. Kubernetes requests progress notification to maintain cache consistency against
the etcd persistence layer. Kubernetes versions v1.28 through to v1.30 also supported this
feature, although as Alpha it was not recommended for production nor enabled by default until the v1.31 release.
Not older than
: Return data at least as new as the provided `resourceVersion`. The newest
available data is preferred, but any data not older than the provided `resourceVersion` may be
served. For **list** requests to servers that honor the `resourceVersionMatch` parameter, this
guarantees that the collection's `.metadata.resourceVersion` is not older than the requested
`resourceVersion`, but does not make any guarantee about the `.metadata.resourceVersion` of any
of the items in that collection.
Exact
: Return data at the exact resource version provided. If the provided `resourceVersion` is
unavailable, the server responds with HTTP 410 "Gone". For **list** requests to servers that honor the
`resourceVersionMatch` parameter, this guarantees that the collection's `.metadata.resourceVersion`
is the same as the `resourceVersion` you requested in the query string. That guarantee does
not apply to the `.metadata.resourceVersion` of any items within that collection.
Continue Token, Exact
: Return data at the resource version of the initial paginated **list** call. The returned _continue
tokens_ are responsible for keeping track of the initially provided resource version for all paginated
**list** calls after the initial paginated **list**.
When you **list** resources and receive a collection response, the response includes the
[list metadata](/docs/reference/generated/kubernetes-api/v/#listmeta-v1-meta)
of the collection as well as
[object metadata](/docs/reference/generated/kubernetes-api/v/#objectmeta-v1-meta)
for each item in that collection. For individual objects found within a collection response,
`.metadata.resourceVersion` tracks when that object was last updated, and not how up-to-date
the object is when served.
When using `resourceVersionMatch=NotOlderThan` and limit is set, clients must
handle HTTP 410 "Gone" responses. For example, the client might retry with a
newer `resourceVersion` or fall back to `resourceVersion=""`.
When using `resourceVersionMatch=Exact` and `limit` is unset, clients must
verify that the collection's `.metadata.resourceVersion` matches
the requested `resourceVersion`, and handle the case where it does not. For
example, the client might fall back to a request with `limit` set.
### Semantics for **watch**
For **watch**, the semantics of resource version are:
**watch:**
| resourceVersion unset | resourceVersion="0" | resourceVersion="{value other than 0}" |
|-------------------------------------|----------------------------|----------------------------------------|
| Get State and Start at Most Recent | Get State and Start at Any | Start at Exact |
The meaning of those **watch** semantics are:
Get State and Start at Any
: Watches initialized this way may return arbitrarily stale
data. Please review this semantic before using it, and favor the other semantics
where possible.
Start a **watch** at any resource version; the most recent resource version
available is preferred, but not required. Any starting resource version is
allowed. It is possible for the **watch** to start at a much older resource
version that the client has previously observed, particularly in high availability
configurations, due to partitions or stale caches. Clients that cannot tolerate
this apparent rewinding should not start a **watch** with this semantic. To
establish initial state, the **watch** begins with synthetic "Added" events for
all resource instances that exist at the starting resource version. All following
watch events are for all changes that occurred after the resource version the
**watch** started at.
Get State and Start at Most Recent
: Start a **watch** at the most recent resource version, which must be consistent
(in detail: served from etcd via a quorum read). To establish initial state,
the **watch** begins with synthetic "Added" events of all resources instances
that exist at the starting resource version. All following watch events are for
all changes that occurred after the resource version the **watch** started at.
Start at Exact
: Start a **watch** at an exact resource version. The watch events are for all changes
after the provided resource version. Unlike "Get State and Start at Most Recent"
and "Get State and Start at Any", the **watch** is not started with synthetic
"Added" events for the provided resource version. The client is assumed to already
have the initial state at the starting resource version since the client provided
the resource version.
### "410 Gone" responses
Servers are not required to serve all older resource versions and may return a HTTP
`410 (Gone)` status code if a client requests a `resourceVersion` older than the
server has retained. Clients must be able to tolerate `410 (Gone)` responses. See
[Efficient detection of changes](#efficient-detection-of-changes) for details on
how to handle `410 (Gone)` responses when watching resources.
If you request a `resourceVersion` outside the applicable limit then, depending
on whether a request is served from cache or not, the API server may reply with a
`410 Gone` HTTP response.
### Unavailable resource versions
Servers are not required to serve unrecognized resource versions. If you request
**list** or **get** for a resource version that the API server does not recognize,
then the API server may either:
* wait briefly for the resource version to become available, then timeout with a
`504 (Gateway Timeout)` if the provided resource versions does not become available
in a reasonable amount of time;
* respond with a `Retry-After` response header indicating how many seconds a client
should wait before retrying the request.
If you request a resource version that an API server does not recognize, the
kube-apiserver additionally identifies its error responses with a "Too large resource
version" message.
If you make a **watch** request for an unrecognized resource version, the API server
may wait indefinitely (until the request timeout) for the resource version to become
available. | kubernetes reference | title Kubernetes API Concepts reviewers smarterclayton lavalamp liggitt content type concept weight 20 overview The Kubernetes API is a resource based RESTful programmatic interface provided via HTTP It supports retrieving creating updating and deleting primary resources via the standard HTTP verbs POST PUT PATCH DELETE GET For some resources the API includes additional subresources that allow fine grained authorization such as separate views for Pod details and log retrievals and can accept and serve those resources in different representations for convenience or efficiency Kubernetes supports efficient change notifications on resources via watches Kubernetes also provides consistent list operations so that API clients can effectively cache track and synchronize the state of resources You can view the API reference docs reference kubernetes api online or read on to learn about the API in general body Kubernetes API terminology standard api terminology Kubernetes generally leverages common RESTful terminology to describe the API concepts A resource type is the name used in the URL pods namespaces services All resource types have a concrete representation their object schema which is called a kind A list of instances of a resource type is known as a collection A single instance of a resource type is called a resource and also usually represents an object For some resource types the API includes one or more sub resources which are represented as URI paths below the resource Most Kubernetes API resource types are they represent a concrete instance of a concept on the cluster like a pod or namespace A smaller number of API resource types are virtual in that they often represent operations on objects rather than objects such as a permission check use a POST with a JSON encoded body of SubjectAccessReview to the subjectaccessreviews resource or the eviction sub resource of a Pod used to trigger API initiated eviction docs concepts scheduling eviction api eviction Object names All objects you can create via the API have a unique object to allow idempotent creation and retrieval except that virtual resource types may not have unique names if they are not retrievable or do not rely on idempotency Within a only one object of a given kind can have a given name at a time However if you delete the object you can make a new object with the same name Some objects are not namespaced for example Nodes and so their names must be unique across the whole cluster API verbs Almost all object resource types support the standard HTTP verbs GET POST PUT PATCH and DELETE Kubernetes also uses its own verbs which are often written in lowercase to distinguish them from HTTP verbs Kubernetes uses the term list to describe returning a collection collections of resources to distinguish from retrieving a single resource which is usually called a get If you sent an HTTP GET request with the watch query parameter Kubernetes calls this a watch and not a get see Efficient detection of changes efficient detection of changes for more details For PUT requests Kubernetes internally classifies these as either create or update based on the state of the existing object An update is different from a patch the HTTP verb for a patch is PATCH Resource URIs All resource types are either scoped by the cluster apis GROUP VERSION or to a namespace apis GROUP VERSION namespaces NAMESPACE A namespace scoped resource type will be deleted when its namespace is deleted and access to that resource type is 
controlled by authorization checks on the namespace scope Note core resources use api instead of apis and omit the GROUP path segment Examples api v1 namespaces api v1 pods api v1 namespaces my namespace pods apis apps v1 deployments apis apps v1 namespaces my namespace deployments apis apps v1 namespaces my namespace deployments my deployment You can also access collections of resources for example listing all Nodes The following paths are used to retrieve collections and resources Cluster scoped resources GET apis GROUP VERSION RESOURCETYPE return the collection of resources of the resource type GET apis GROUP VERSION RESOURCETYPE NAME return the resource with NAME under the resource type Namespace scoped resources GET apis GROUP VERSION RESOURCETYPE return the collection of all instances of the resource type across all namespaces GET apis GROUP VERSION namespaces NAMESPACE RESOURCETYPE return collection of all instances of the resource type in NAMESPACE GET apis GROUP VERSION namespaces NAMESPACE RESOURCETYPE NAME return the instance of the resource type with NAME in NAMESPACE Since a namespace is a cluster scoped resource type you can retrieve the list collection of all namespaces with GET api v1 namespaces and details about a particular namespace with GET api v1 namespaces NAME Cluster scoped subresource GET apis GROUP VERSION RESOURCETYPE NAME SUBRESOURCE Namespace scoped subresource GET apis GROUP VERSION namespaces NAMESPACE RESOURCETYPE NAME SUBRESOURCE The verbs supported for each subresource will differ depending on the object see the API reference docs reference kubernetes api for more information It is not possible to access sub resources across multiple resources generally a new virtual resource type would be used if that becomes necessary HTTP media types alternate representations of resources Over HTTP Kubernetes supports JSON and Protobuf wire encodings By default Kubernetes returns objects in JSON serialization json encoding using the application json media type Although JSON is the default clients may request a response in YAML or use the more efficient binary Protobuf representation protobuf encoding for better performance at scale The Kubernetes API implements standard HTTP content type negotiation passing an Accept header with a GET call will request that the server tries to return a response in your preferred media type If you want to send an object in Protobuf to the server for a PUT or POST request you must set the Content Type request header appropriately If you request an available media type the API server returns a response with a suitable Content Type if none of the media types you request are supported the API server returns a 406 Not acceptable error message All built in resource types support the application json media type JSON resource encoding json encoding The Kubernetes API defaults to using JSON https www json org json en html for encoding HTTP message bodies For example 1 List all of the pods on a cluster without specifying a preferred format GET api v1 pods 200 OK Content Type application json JSON encoded collection of Pods PodList object 1 Create a pod by sending JSON to the server requesting a JSON response POST api v1 namespaces test pods Content Type application json Accept application json JSON encoded Pod object 200 OK Content Type application json kind Pod apiVersion v1 YAML resource encoding yaml encoding Kubernetes also supports the application yaml https www rfc editor org rfc rfc9512 html media type for both requests and responses YAML 
https yaml org can be used for defining Kubernetes manifests and API interactions For example 1 List all of the pods on a cluster in YAML format GET api v1 pods Accept application yaml 200 OK Content Type application yaml YAML encoded collection of Pods PodList object 1 Create a pod by sending YAML encoded data to the server requesting a YAML response POST api v1 namespaces test pods Content Type application yaml Accept application yaml YAML encoded Pod object 200 OK Content Type application yaml apiVersion v1 kind Pod metadata name my pod Kubernetes Protobuf encoding protobuf encoding Kubernetes uses an envelope wrapper to encode Protobuf https protobuf dev responses That wrapper starts with a 4 byte magic number to help identify content in disk or in etcd as Protobuf as opposed to JSON The 4 byte magic number data is followed by a Protobuf encoded wrapper message which describes the encoding and type of the underlying object Within the Protobuf wrapper message the inner object data is recorded using the raw field of Unknown see the IDL protobuf encoding idl for more detail For example 1 List all of the pods on a cluster in Protobuf format GET api v1 pods Accept application vnd kubernetes protobuf 200 OK Content Type application vnd kubernetes protobuf JSON encoded collection of Pods PodList object 1 Create a pod by sending Protobuf encoded data to the server but request a response in JSON POST api v1 namespaces test pods Content Type application vnd kubernetes protobuf Accept application json binary encoded Pod object 200 OK Content Type application json kind Pod apiVersion v1 You can use both techniques together and use Kubernetes Protobuf encoding to interact with any API that supports it for both reads and writes Only some API resource types are compatible protobuf encoding compatibility with Protobuf a id protobuf encoding idl The wrapper format is A four byte magic number prefix Bytes 0 3 k8s x00 0x6b 0x38 0x73 0x00 An encoded Protobuf message with the following IDL message Unknown typeMeta should have the string values for kind and apiVersion as set on the JSON object optional TypeMeta typeMeta 1 raw will hold the complete serialized object in protobuf See the protobuf definitions in the client libraries for a given kind optional bytes raw 2 contentEncoding is encoding used for the raw data Unspecified means no encoding optional string contentEncoding 3 contentType is the serialization method used to serialize raw Unspecified means application vnd kubernetes protobuf and is usually omitted optional string contentType 4 message TypeMeta apiVersion is the group version for this type optional string apiVersion 1 kind is the name of the object schema A protobuf definition should exist for this object optional string kind 2 Clients that receive a response in application vnd kubernetes protobuf that does not match the expected prefix should reject the response as future versions may need to alter the serialization format in an incompatible way and will do so by changing the prefix Compatibility with Kubernetes Protobuf protobuf encoding compatibility Not all API resource types support Kubernetes Protobuf encoding specifically Protobuf isn t available for resources that are defined as or are served via the As a client if you might need to work with extension types you should specify multiple content types in the request Accept header to support fallback to JSON For example Accept application vnd kubernetes protobuf application json Efficient detection of changes The Kubernetes API allows 
clients to make an initial request for an object or a collection and then to track changes since that initial request a watch Clients can send a list or a get and then make a follow up watch request To make this change tracking possible every Kubernetes object has a resourceVersion field representing the version of that resource as stored in the underlying persistence layer When retrieving a collection of resources either namespace or cluster scoped the response from the API server contains a resourceVersion value The client can use that resourceVersion to initiate a watch against the API server When you send a watch request the API server responds with a stream of changes These changes itemize the outcome of operations such as create delete and update that occurred after the resourceVersion you specified as a parameter to the watch request The overall watch mechanism allows a client to fetch the current state and then subscribe to subsequent changes without missing any events If a client watch is disconnected then that client can start a new watch from the last returned resourceVersion the client could also perform a fresh get list request and begin again See Resource Version Semantics resource versions for more detail For example 1 List all of the pods in a given namespace GET api v1 namespaces test pods 200 OK Content Type application json kind PodList apiVersion v1 metadata resourceVersion 10245 items 2 Starting from resource version 10245 receive notifications of any API operations such as create delete patch or update that affect Pods in the test namespace Each change notification is a JSON document The HTTP response body served as application json consists a series of JSON documents GET api v1 namespaces test pods watch 1 resourceVersion 10245 200 OK Transfer Encoding chunked Content Type application json type ADDED object kind Pod apiVersion v1 metadata resourceVersion 10596 type MODIFIED object kind Pod apiVersion v1 metadata resourceVersion 11020 A given Kubernetes server will only preserve a historical record of changes for a limited time Clusters using etcd 3 preserve changes in the last 5 minutes by default When the requested watch operations fail because the historical version of that resource is not available clients must handle the case by recognizing the status code 410 Gone clearing their local cache performing a new get or list operation and starting the watch from the resourceVersion that was returned For subscribing to collections Kubernetes client libraries typically offer some form of standard tool for this list then watch logic In the Go client library this is called a Reflector and is located in the k8s io client go tools cache package Watch bookmarks watch bookmarks To mitigate the impact of short history window the Kubernetes API provides a watch event named BOOKMARK It is a special kind of event to mark that all changes up to a given resourceVersion the client is requesting have already been sent The document representing the BOOKMARK event is of the type requested by the request but only includes a metadata resourceVersion field For example GET api v1 namespaces test pods watch 1 resourceVersion 10245 allowWatchBookmarks true 200 OK Transfer Encoding chunked Content Type application json type ADDED object kind Pod apiVersion v1 metadata resourceVersion 10596 type BOOKMARK object kind Pod apiVersion v1 metadata resourceVersion 12746 As a client you can request BOOKMARK events by setting the allowWatchBookmarks true query parameter to a watch request but you 
shouldn t assume bookmarks are returned at any specific interval nor can clients assume that the API server will send any BOOKMARK event even when requested Streaming lists On large clusters retrieving the collection of some resource types may result in a significant increase of resource usage primarily RAM on the control plane In order to alleviate its impact and simplify the user experience of the list watch pattern Kubernetes v1 27 introduces as an alpha feature the support for requesting the initial state previously requested via the list request as part of the watch request Provided that the WatchList feature gate docs reference command line tools reference feature gates is enabled this can be achieved by specifying sendInitialEvents true as query string parameter in a watch request If set the API server starts the watch stream with synthetic init events of type ADDED to build the whole state of all existing objects followed by a BOOKMARK event docs reference using api api concepts watch bookmarks if requested via allowWatchBookmarks true option The bookmark event includes the resource version to which is synced After sending the bookmark event the API server continues as for any other watch request When you set sendInitialEvents true in the query string Kubernetes also requires that you set resourceVersionMatch to NotOlderThan value If you provided resourceVersion in the query string without providing a value or don t provide it at all this is interpreted as a request for consistent read the bookmark event is sent when the state is synced at least to the moment of a consistent read from when the request started to be processed If you specify resourceVersion in the query string the bookmark event is sent when the state is synced at least to the provided resource version Example example streaming lists An example you want to watch a collection of Pods For that collection the current resource version is 10245 and there are two pods foo and bar Then sending the following request explicitly requesting consistent read by setting empty resource version using resourceVersion could result in the following sequence of events GET api v1 namespaces test pods watch 1 sendInitialEvents true allowWatchBookmarks true resourceVersion resourceVersionMatch NotOlderThan 200 OK Transfer Encoding chunked Content Type application json type ADDED object kind Pod apiVersion v1 metadata resourceVersion 8467 name foo type ADDED object kind Pod apiVersion v1 metadata resourceVersion 5726 name bar type BOOKMARK object kind Pod apiVersion v1 metadata resourceVersion 10245 followed by regular watch stream starting from resourceVersion 10245 Response compression APIResponseCompression is an option that allows the API server to compress the responses for get and list requests reducing the network bandwidth and improving the performance of large scale clusters It is enabled by default since Kubernetes 1 16 and it can be disabled by including APIResponseCompression false in the feature gates flag on the API server API response compression can significantly reduce the size of the response especially for large resources or collections docs reference using api api concepts collections For example a list request for pods can return hundreds of kilobytes or even megabytes of data depending on the number of pods and their attributes By compressing the response the network bandwidth can be saved and the latency can be reduced To verify if APIResponseCompression is working you can send a get or list request to the API server 
with an Accept Encoding header and check the response size and headers For example GET api v1 pods Accept Encoding gzip 200 OK Content Type application json content encoding gzip The content encoding header indicates that the response is compressed with gzip Retrieving large results sets in chunks On large clusters retrieving the collection of some resource types may result in very large responses that can impact the server and client For instance a cluster may have tens of thousands of Pods each of which is equivalent to roughly 2 KiB of encoded JSON Retrieving all pods across all namespaces may result in a very large response 10 20MB and consume a large amount of server resources The Kubernetes API server supports the ability to break a single large collection request into many smaller chunks while preserving the consistency of the total request Each chunk can be returned sequentially which reduces both the total size of the request and allows user oriented clients to display results incrementally to improve responsiveness You can request that the API server handles a list by serving single collection using pages which Kubernetes calls chunks To retrieve a single collection in chunks two query parameters limit and continue are supported on requests against collections and a response field continue is returned from all list operations in the collection s metadata field A client should specify the maximum results they wish to receive in each chunk with limit and the server will return up to limit resources in the result and include a continue value if there are more resources in the collection As an API client you can then pass this continue value to the API server on the next request to instruct the server to return the next page chunk of results By continuing until the server returns an empty continue value you can retrieve the entire collection Like a watch operation a continue token will expire after a short amount of time by default 5 minutes and return a 410 Gone if more results cannot be returned In this case the client will need to start from the beginning or omit the limit parameter For example if there are 1 253 pods on the cluster and you want to receive chunks of 500 pods at a time request those chunks as follows 1 List all of the pods on a cluster retrieving up to 500 pods each time GET api v1 pods limit 500 200 OK Content Type application json kind PodList apiVersion v1 metadata resourceVersion 10245 continue ENCODED CONTINUE TOKEN remainingItemCount 753 items returns pods 1 500 1 Continue the previous call retrieving the next set of 500 pods GET api v1 pods limit 500 continue ENCODED CONTINUE TOKEN 200 OK Content Type application json kind PodList apiVersion v1 metadata resourceVersion 10245 continue ENCODED CONTINUE TOKEN 2 remainingItemCount 253 items returns pods 501 1000 1 Continue the previous call retrieving the last 253 pods GET api v1 pods limit 500 continue ENCODED CONTINUE TOKEN 2 200 OK Content Type application json kind PodList apiVersion v1 metadata resourceVersion 10245 continue continue token is empty because we have reached the end of the list items returns pods 1001 1253 Notice that the resourceVersion of the collection remains constant across each request indicating the server is showing you a consistent snapshot of the pods Pods that are created updated or deleted after version 10245 would not be shown unless you make a separate list request without the continue token This allows you to break large requests into smaller chunks and then perform a watch 
operation on the full set without missing any updates remainingItemCount is the number of subsequent items in the collection that are not included in this response If the list request contained label or field then the number of remaining items is unknown and the API server does not include a remainingItemCount field in its response If the list is complete either because it is not chunking or because this is the last chunk then there are no more remaining items and the API server does not include a remainingItemCount field in its response The intended use of the remainingItemCount is estimating the size of a collection Collections In Kubernetes terminology the response you get from a list is a collection However Kubernetes defines concrete kinds for collections of different types of resource Collections have a kind named for the resource kind with List appended When you query the API for a particular type all items returned by that query are of that type For example when you list Services the collection response has kind set to ServiceList docs reference kubernetes api service resources service v1 ServiceList each item in that collection represents a single Service For example GET api v1 services yaml kind ServiceList apiVersion v1 metadata resourceVersion 2947301 items metadata name kubernetes namespace default metadata name kube dns namespace kube system There are dozens of collection types such as PodList ServiceList and NodeList defined in the Kubernetes API You can get more information about each collection type from the Kubernetes API docs reference kubernetes api documentation Some tools such as kubectl represent the Kubernetes collection mechanism slightly differently from the Kubernetes API itself Because the output of kubectl might include the response from multiple list operations at the API level kubectl represents a list of items using kind List For example shell kubectl get services A o yaml yaml apiVersion v1 kind List metadata resourceVersion selfLink items apiVersion v1 kind Service metadata creationTimestamp 2021 06 03T14 54 12Z labels component apiserver provider kubernetes name kubernetes namespace default apiVersion v1 kind Service metadata annotations prometheus io port 9153 prometheus io scrape true creationTimestamp 2021 06 03T14 54 14Z labels k8s app kube dns kubernetes io cluster service true kubernetes io name CoreDNS name kube dns namespace kube system Keep in mind that the Kubernetes API does not have a kind named List kind List is a client side internal implementation detail for processing collections that might be of different kinds of object Avoid depending on kind List in automation or other code Receiving resources as Tables When you run kubectl get the default output format is a simple tabular representation of one or more instances of a particular resource type In the past clients were required to reproduce the tabular and describe output implemented in kubectl to perform simple lists of objects A few limitations of that approach include non trivial logic when dealing with certain objects Additionally types provided by API aggregation or third party resources are not known at compile time This means that generic implementations had to be in place for types unrecognized by a client In order to avoid potential limitations as described above clients may request the Table representation of objects delegating specific details of printing to the server The Kubernetes API implements standard HTTP content type negotiation passing an Accept header containing a 
value of application json as Table g meta k8s io v v1 with a GET call will request that the server return objects in the Table content type For example list all of the pods on a cluster in the Table format GET api v1 pods Accept application json as Table g meta k8s io v v1 200 OK Content Type application json kind Table apiVersion meta k8s io v1 columnDefinitions For API resource types that do not have a custom Table definition known to the control plane the API server returns a default Table response that consists of the resource s name and creationTimestamp fields GET apis crd example com v1alpha1 namespaces default resources 200 OK Content Type application json kind Table apiVersion meta k8s io v1 columnDefinitions name Name type string name Created At type date Not all API resource types support a Table response for example a might not define field to table mappings and an APIService that extends the core Kubernetes API docs concepts extend kubernetes api extension apiserver aggregation might not serve Table responses at all If you are implementing a client that uses the Table information and must work against all resource types including extensions you should make requests that specify multiple content types in the Accept header For example Accept application json as Table g meta k8s io v v1 application json Resource deletion When you delete a resource this takes place in two phases 1 finalization 2 removal yaml kind ConfigMap apiVersion v1 metadata finalizers url io neat finalization other url io my finalizer deletionTimestamp nil When a client first sends a delete to request the removal of a resource the metadata deletionTimestamp is set to the current time Once the metadata deletionTimestamp is set external controllers that act on finalizers may start performing their cleanup work at any time in any order Order is not enforced between finalizers because it would introduce significant risk of stuck metadata finalizers The metadata finalizers field is shared any actor with permission can reorder it If the finalizer list were processed in order then this might lead to a situation in which the component responsible for the first finalizer in the list is waiting for some signal field value external system or other produced by a component responsible for a finalizer later in the list resulting in a deadlock Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list Once the last finalizer is removed the resource is actually removed from etcd Single resource API The Kubernetes API verbs get create update patch delete and proxy support single resources only These verbs with single resource support have no support for submitting multiple resources together in an ordered or unordered list or transaction When clients including kubectl act on a set of resources the client makes a series of single resource API requests then aggregates the responses if needed By contrast the Kubernetes API verbs list and watch allow getting multiple resources and deletecollection allows deleting multiple resources Field validation Kubernetes always validates the type of fields For example if a field in the API is defined as a number you cannot set the field to a text value If a field is defined as an array of strings you can only provide an array Some fields allow you to omit them other fields are required Omitting a required field from an API request is an error If you make a request with an extra field one that the cluster s control plane 
## Resource deletion

When you delete a resource this takes place in two phases.

1. _finalization_
2. removal

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  finalizers: ["url.io/neat-finalization", "other-url.io/my-finalizer"]
  deletionTimestamp: nil
```

When a client first sends a **delete** to request the removal of a resource, the
`.metadata.deletionTimestamp` is set to the current time. Once the
`.metadata.deletionTimestamp` is set, external controllers that act on finalizers may
start performing their cleanup work at any time, in any order.

Order is not enforced between finalizers because it would introduce significant risk
of stuck `.metadata.finalizers`.

The `.metadata.finalizers` field is shared: any actor with permission can reorder it.
If the finalizer list were processed in order, then this might lead to a situation in
which the component responsible for the first finalizer in the list is waiting for
some signal (field value, external system, or other) produced by a component
responsible for a finalizer later in the list, resulting in a deadlock. Without
enforced ordering, finalizers are free to order amongst themselves and are not
vulnerable to ordering changes in the list.

Once the last finalizer is removed, the resource is actually removed from etcd.

## Single resource API

The Kubernetes API verbs **get**, **create**, **update**, **patch**, **delete** and
**proxy** support single resources only. These verbs with single resource support have
no support for submitting multiple resources together in an ordered or unordered list
or transaction.

When clients (including kubectl) act on a set of resources, the client makes a series
of single-resource API requests, then aggregates the responses if needed.

By contrast, the Kubernetes API verbs **list** and **watch** allow getting multiple
resources, and **deletecollection** allows deleting multiple resources.

## Field validation

Kubernetes always validates the type of fields. For example, if a field in the API is
defined as a number, you cannot set the field to a text value. If a field is defined
as an array of strings, you can only provide an array. Some fields allow you to omit
them, other fields are required. Omitting a required field from an API request is an
error.

If you make a request with an extra field, one that the cluster's control plane does
not recognize, then the behavior of the API server is more complicated.

By default, the API server drops fields that it does not recognize from an input that
it receives (for example, the JSON body of a `PUT` request).

There are two situations where the API server drops fields that you supplied in an
HTTP request.

These situations are:

1. The field is unrecognized because it is not in the resource's OpenAPI schema. (One
   exception to this is for custom resource definitions that explicitly choose not to
   prune unknown fields via `x-kubernetes-preserve-unknown-fields`.)
1. The field is duplicated in the object.

### Validation for unrecognized or duplicate fields {#setting-the-field-validation-level}

From 1.25 onward, unrecognized or duplicate fields in an object are detected via
validation on the server when you use HTTP verbs that can submit data (`POST`, `PUT`,
and `PATCH`). Possible levels of validation are `Ignore`, `Warn` (default), and
`Strict`.

`Ignore`
: The API server succeeds in handling the request as it would without the erroneous
  fields being set, dropping all unknown and duplicate fields and giving no indication
  it has done so.

`Warn` (Default)
: The API server succeeds in handling the request, and reports a warning to the
  client. The warning is sent using the `Warning:` response header, adding one warning
  item for each unknown or duplicate field. For more information about warnings and
  the Kubernetes API, see the blog article
  [Warning: Helpful Warnings Ahead](/blog/2020/09/03/warnings/).

`Strict`
: The API server rejects the request with a 400 Bad Request error when it detects any
  unknown or duplicate fields. The response message from the API server specifies all
  the unknown or duplicate fields that the API server has detected.

The field validation level is set by the `fieldValidation` query parameter.

If you submit a request that specifies an unrecognized field, and that is also invalid
for a different reason (for example, the request provides a string value where the API
expects an integer for a known field), then the API server responds with a 400 Bad
Request error, but will not provide any information on unknown or duplicate fields
(only which fatal error it encountered first). You always receive an error response in
this case, no matter what field validation level you requested.

Tools that submit requests to the server (such as `kubectl`) might set their own
defaults that are different from the `Warn` validation level that the API server uses
by default.

The `kubectl` tool uses the `--validate` flag to set the level of field validation. It
accepts the values `ignore`, `warn`, and `strict`, while also accepting the values
`true` (equivalent to `strict`) and `false` (equivalent to `ignore`). The default
validation setting for kubectl is `--validate=true`, which means strict server-side
field validation.

When kubectl cannot connect to an API server with field validation (API servers prior
to Kubernetes 1.27), it will fall back to using client-side validation. Client-side
validation will be removed entirely in a future version of kubectl.

Prior to Kubernetes 1.25, `kubectl --validate` was used to toggle client-side
validation on or off as a boolean flag.
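For example, the following sketch shows how the validation level changes the outcome
for a manifest that contains a misspelled field (the file name is a placeholder):

```shell
# Strict: the request is rejected, and the error names the unknown field
kubectl apply --validate=strict -f deployment-with-typo.yaml

# Warn: the object is applied, but a warning is printed for the unknown field
kubectl apply --validate=warn -f deployment-with-typo.yaml

# Ignore: the unknown field is silently dropped
kubectl apply --validate=ignore -f deployment-with-typo.yaml
```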
## Dry-run

When you use HTTP verbs that can modify resources (`POST`, `PUT`, `PATCH`, and
`DELETE`), you can submit your request in a _dry run_ mode. Dry run mode helps to
evaluate a request through the typical request stages (admission chain, validation,
merge conflicts) up until persisting objects to storage. The response body for the
request is as close as possible to a non-dry-run response. Kubernetes guarantees that
dry-run requests will not be persisted in storage or have any other side effects.

### Make a dry-run request

Dry-run is triggered by setting the `dryRun` query parameter. This parameter is a
string, working as an enum, and the only accepted values are:

[no value set]
: Allow side effects. You request this with a query string such as `?dryRun` or
  `?dryRun&pretty=true`. The response is the final object that would have been
  persisted, or an error if the request could not be fulfilled.

`All`
: Every stage runs as normal, except for the final storage stage where side effects
  are prevented.

When you set `?dryRun=All`, any relevant admission controllers are run, validating
admission controllers check the request post-mutation, merge is performed on `PATCH`,
fields are defaulted, and schema validation occurs. The changes are not persisted to
the underlying storage, but the final object which would have been persisted is still
returned to the user, along with the normal status code.

If the non-dry-run version of a request would trigger an admission controller that has
side effects, the request will be failed rather than risk an unwanted side effect. All
built in admission control plugins support dry-run. Additionally, admission webhooks
can declare in their configuration object that they do not have side effects, by
setting their `sideEffects` field to `None`.

If a webhook actually does have side effects, then the `sideEffects` field should be
set to `NoneOnDryRun`. That change is appropriate provided that the webhook is also
modified to understand the `DryRun` field in `AdmissionReview`, and to prevent side
effects on any request marked as a dry run.

Here is an example dry-run request that uses `?dryRun=All`:

```
POST /api/v1/namespaces/test/pods?dryRun=All
Content-Type: application/json
Accept: application/json
```

The response would look the same as for a non-dry-run request, but the values of some
generated fields may differ.
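From the command line, `kubectl` exposes server-side dry run through
`--dry-run=server`, which sets `dryRun=All` on the underlying request; you can also
set the query parameter yourself via `kubectl proxy`. The manifest names below are
placeholders:

```shell
# Evaluate a manifest through admission and validation without persisting it
kubectl apply --dry-run=server -f pod.yaml

# The same idea against the API directly (kubectl proxy listening on port 8001)
curl --silent -X POST \
  -H 'Content-Type: application/json' -H 'Accept: application/json' \
  --data @pod.json \
  'http://127.0.0.1:8001/api/v1/namespaces/test/pods?dryRun=All'
```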
### Generated values

Some values of an object are typically generated before the object is persisted. It is
important not to rely upon the values of these fields set by a dry-run request, since
these values will likely be different in dry-run mode from when the real request is
made. Some of these fields are:

* `name`: if `generateName` is set, `name` will have a unique random name
* `creationTimestamp` / `deletionTimestamp`: records the time of creation/deletion
* `UID`: [uniquely identifies](/docs/concepts/overview/working-with-objects/names/#uids)
  the object and is randomly generated (non-deterministic)
* `resourceVersion`: tracks the persisted version of the object
* Any field set by a mutating admission controller
* For the `Service` resource: Ports or IP addresses that the kube-apiserver assigns to
  Service objects

### Dry-run authorization

Authorization for dry-run and non-dry-run requests is identical. Thus, to make a
dry-run request, you must be authorized to make the non-dry-run request.

For example, to run a dry-run **patch** for a Deployment, you must be authorized to
perform that **patch**. Here is an example of a rule for Kubernetes RBAC that allows
patching Deployments:

```yaml
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["patch"]
```

See [Authorization Overview](/docs/reference/access-authn-authz/authorization/).

## Updates to existing resources {#patch-and-apply}

Kubernetes provides several ways to update existing objects. You can read
[choosing an update mechanism](#update-mechanism-choose) to learn about which approach
might be best for your use case.

You can overwrite (**update**) an existing resource - for example, a ConfigMap - using
an HTTP `PUT`. For a `PUT` request, it is the client's responsibility to specify the
`resourceVersion` (taking this from the object being updated). Kubernetes uses that
`resourceVersion` information so that the API server can detect lost updates and
reject requests made by a client that is out of date with the cluster. In the event
that the resource has changed (the `resourceVersion` the client provided is stale),
the API server returns a `409 Conflict` error response.

Instead of sending a `PUT` request, the client can send an instruction to the API
server to **patch** an existing resource. A **patch** is typically appropriate if the
change that the client wants to make isn't conditional on the existing data. Clients
that need effective detection of lost updates should consider making their request
conditional on the existing `resourceVersion` (either HTTP `PUT` or HTTP `PATCH`), and
then handle any retries that are needed in case there is a conflict.

The Kubernetes API supports four different PATCH operations, determined by their
corresponding HTTP `Content-Type` header:

`application/apply-patch+yaml`
: Server Side Apply YAML (a Kubernetes-specific extension, based on YAML). All JSON
  documents are valid YAML, so you can also submit JSON using this media type. See
  [Server Side Apply serialization](/docs/reference/using-api/server-side-apply/#serialization)
  for more details. To Kubernetes, this is a **create** operation if the object does
  not exist, or a **patch** operation if the object already exists.

`application/json-patch+json`
: JSON Patch, as defined in [RFC6902](https://tools.ietf.org/html/rfc6902). A JSON
  patch is a sequence of operations that are executed on the resource; for example
  `{"op": "add", "path": "/a/b/c", "value": [ "foo", "bar" ]}`. To Kubernetes, this is
  a **patch** operation. A **patch** using `application/json-patch+json` can include
  conditions to validate consistency, allowing the operation to fail if those
  conditions are not met (for example, to avoid a lost update).

`application/merge-patch+json`
: JSON Merge Patch, as defined in [RFC7386](https://tools.ietf.org/html/rfc7386). A
  JSON Merge Patch is essentially a partial representation of the resource. The
  submitted JSON is combined with the current resource to create a new one, then the
  new one is saved. To Kubernetes, this is a **patch** operation.

`application/strategic-merge-patch+json`
: Strategic Merge Patch (a Kubernetes-specific extension based on JSON). Strategic
  Merge Patch is a custom implementation of JSON Merge Patch. You can only use
  Strategic Merge Patch with built-in APIs, or with aggregated API servers that have
  special support for it. You cannot use `application/strategic-merge-patch+json` with
  any API defined using a CustomResourceDefinition. The Kubernetes _server side apply_
  mechanism has superseded Strategic Merge Patch.

Kubernetes' [Server Side Apply](/docs/reference/using-api/server-side-apply/) feature
allows the control plane to track managed fields for newly created objects. Server
Side Apply provides a clear pattern for managing field conflicts, offers server side
**apply** and **update** operations, and replaces the client-side functionality of
`kubectl apply`.

For Server-Side Apply, Kubernetes treats the request as a **create** if the object
does not yet exist, and a **patch** otherwise. For other requests that use PATCH at
the HTTP level, the logical Kubernetes operation is always **patch**.

See [Server Side Apply](/docs/reference/using-api/server-side-apply/) for more details.
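`kubectl patch` can exercise three of these media types through its `--type` flag; it
does not send `application/apply-patch+yaml` (that is what `kubectl apply
--server-side` uses). A sketch, assuming a Deployment named `example` already exists:

```shell
# Strategic merge patch (the default, --type=strategic)
kubectl patch deployment example -p '{"spec":{"replicas":3}}'

# JSON Merge Patch (RFC 7386)
kubectl patch deployment example --type=merge -p '{"spec":{"replicas":3}}'

# JSON Patch (RFC 6902), with a test operation guarding against a lost update
kubectl patch deployment example --type=json -p '[
  {"op": "test", "path": "/spec/replicas", "value": 3},
  {"op": "replace", "path": "/spec/replicas", "value": 4}
]'
```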
### Choosing an update mechanism {#update-mechanism-choose}

#### HTTP PUT to replace existing resource {#update-mechanism-update}

The **update** (HTTP `PUT`) operation is simple to implement and flexible, but has
drawbacks:

* You need to handle conflicts where the `resourceVersion` of the object changes
  between your client reading it and trying to write it back. Kubernetes always
  detects the conflict, but you, as the client author, need to implement retries.
* You might accidentally drop fields if you decode an object locally (for example,
  using client-go, you could receive fields that your client does not know how to
  handle - and then drop them as part of your update).
* If there's a lot of contention on the object (even on a field, or set of fields,
  that you're not trying to edit), you might have trouble sending the update. The
  problem is worse for larger objects and for objects with many fields.

#### HTTP PATCH using JSON Patch {#update-mechanism-json-patch}

A **patch** update is helpful, because:

* As you're only sending differences, you have less data to send in the `PATCH`
  request.
* You can make changes that rely on existing values, such as copying the value of a
  particular field into an annotation.
* Unlike with an **update** (HTTP `PUT`), making your change can happen right away
  even if there are frequent changes to unrelated fields: you usually would not need
  to retry.
  * You might still need to specify the `resourceVersion` (to match an existing
    object) if you want to be extra careful to avoid lost updates.
  * It's still good practice to write in some retry logic in case of errors.
* You can use test conditions to carefully craft specific update conditions. For
  example, you can increment a counter without reading it, if the existing value
  matches what you expect. You can do this with no lost update risk, even if the
  object has changed in other ways since you last wrote to it. (If the test condition
  fails, you can fall back to reading the current value and then write back the
  changed number).

However:

* You need more local (client) logic to build the patch; it helps a lot if you have a
  library implementation of JSON Patch, or even for making a JSON Patch specifically
  against Kubernetes.
* As the author of client software, you need to be careful when building the patch
  (the HTTP request body) not to drop fields (the order of operations matters).

#### HTTP PATCH using Server-Side Apply {#update-mechanism-server-side-apply}

Server-Side Apply has some clear benefits:

* A single round trip: it rarely requires making a `GET` request first.
  * and you can still detect conflicts for unexpected changes
  * you have the option to force override a conflict, if appropriate
* Client implementations are easy to make.
* You get an atomic create-or-update operation without extra effort (similar to
  `UPSERT` in some SQL dialects).

However:

* Server-Side Apply does not work at all for field changes that depend on a current
  value of the object.
* You can only apply updates to objects. Some resources in the Kubernetes HTTP API are
  not objects (they do not have a `.metadata` field), and Server-Side Apply is only
  relevant for Kubernetes objects.
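If you want to try Server-Side Apply from the command line, `kubectl apply
--server-side` sends `application/apply-patch+yaml` on your behalf. The field manager
name and manifest below are placeholders:

```shell
# Create or update the object, recording "my-tool" as the field manager
kubectl apply --server-side --field-manager=my-tool -f configmap.yaml

# If another manager owns conflicting fields and taking them over is intended,
# force the conflict resolution
kubectl apply --server-side --force-conflicts --field-manager=my-tool -f configmap.yaml
```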
## Resource versions

Resource versions are strings that identify the server's internal version of an
object. Resource versions can be used by clients to determine when objects have
changed, or to express data consistency requirements when getting, listing and
watching resources. Resource versions must be treated as opaque by clients and passed
unmodified back to the server.

You must not assume resource versions are numeric or collatable. API clients may only
compare two resource versions for equality (this means that you must not compare
resource versions for greater-than or less-than relationships).

### `resourceVersion` fields in metadata {#resourceversion-in-metadata}

Clients find resource versions in resources, including the resources from the response
stream for a **watch**, or when using **list** to enumerate resources.

v1.meta/ObjectMeta
: The `metadata.resourceVersion` of a resource instance identifies the resource
  version the instance was last modified at.

v1.meta/ListMeta
: The `metadata.resourceVersion` of a resource collection (the response to a
  **list**) identifies the resource version at which the collection was constructed.

### `resourceVersion` parameters in query strings {#the-resourceversion-parameter}

The **get**, **list**, and **watch** operations support the `resourceVersion`
parameter. From version v1.19, Kubernetes API servers also support the
`resourceVersionMatch` parameter on _list_ requests.

The API server interprets the `resourceVersion` parameter differently depending on the
operation you request, and on the value of `resourceVersion`. If you set
`resourceVersionMatch` then this also affects the way matching happens.

### Semantics for get and list

For **get** and **list**, the semantics of `resourceVersion` are:

**get:**

| resourceVersion unset | resourceVersion="0" | resourceVersion="{value other than 0}" |
|-----------------------|---------------------|----------------------------------------|
| Most Recent           | Any                 | Not older than                         |

**list:**

From version v1.19, Kubernetes API servers support the `resourceVersionMatch`
parameter on _list_ requests. If you set both `resourceVersion` and
`resourceVersionMatch`, the `resourceVersionMatch` parameter determines how the API
server interprets `resourceVersion`.

You should always set the `resourceVersionMatch` parameter when setting
`resourceVersion` on a **list** request. However, be prepared to handle the case where
the API server that responds is unaware of `resourceVersionMatch` and ignores it.

Unless you have strong consistency requirements, using
`resourceVersionMatch=NotOlderThan` and a known `resourceVersion` is preferable since
it can achieve better performance and scalability of your cluster than leaving
`resourceVersion` and `resourceVersionMatch` unset, which requires quorum read to be
served.

Setting the `resourceVersionMatch` parameter without setting `resourceVersion` is not
valid.

This table explains the behavior of **list** requests with various combinations of
`resourceVersion` and `resourceVersionMatch`:

| resourceVersionMatch param           | paging params                   | resourceVersion not set | resourceVersion="0"                       | resourceVersion="{value other than 0}" |
|--------------------------------------|---------------------------------|-------------------------|-------------------------------------------|----------------------------------------|
| _unset_                              | _limit unset_                   | Most Recent             | Any                                       | Not older than                         |
| _unset_                              | `limit=<n>, continue unset`     | Most Recent             | Any                                       | Exact                                  |
| _unset_                              | `limit=<n>, continue=<token>`   | Continue Token, Exact   | Invalid, treated as Continue Token, Exact | Invalid, HTTP `400 Bad Request`        |
| `resourceVersionMatch=Exact`         | _limit unset_                   | Invalid                 | Invalid                                   | Exact                                  |
| `resourceVersionMatch=Exact`         | `limit=<n>, continue unset`     | Invalid                 | Invalid                                   | Exact                                  |
| `resourceVersionMatch=NotOlderThan`  | _limit unset_                   | Invalid                 | Any                                       | Not older than                         |
| `resourceVersionMatch=NotOlderThan`  | `limit=<n>, continue unset`     | Invalid                 | Any                                       | Not older than                         |

If your cluster's API server does not honor the `resourceVersionMatch` parameter, the
behavior is the same as if you did not set it.
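You can exercise these query parameters directly with `kubectl get --raw`; a sketch,
assuming you may list Pods cluster-wide (the resource version shown is an arbitrary
example value):

```shell
# Most Recent: no resourceVersion set, served via a consistent (quorum) read
kubectl get --raw '/api/v1/pods'

# Any: serve from whatever state the API server already has in its cache
kubectl get --raw '/api/v1/pods?resourceVersion=0'

# Not older than a resourceVersion you observed earlier
kubectl get --raw '/api/v1/pods?resourceVersion=2947301&resourceVersionMatch=NotOlderThan'
```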
The meaning of the **get** and **list** semantics are:

Any
: Return data at any resource version. The newest available resource version is
  preferred, but strong consistency is not required; data at any resource version may
  be served. It is possible for the request to return data at a much older resource
  version that the client has previously observed, particularly in high availability
  configurations, due to partitions or stale caches. Clients that cannot tolerate this
  should not use this semantic.

Most recent
: Return data at the most recent resource version. The returned data must be
  consistent (in detail: served from etcd via a quorum read). For etcd v3.4.31+ and
  v3.5.13+, Kubernetes serves "most recent" reads from the _watch cache_: an internal,
  in-memory store within the API server that caches and mirrors the state of data
  persisted into etcd. Kubernetes requests progress notification to maintain cache
  consistency against the etcd persistence layer. Kubernetes versions v1.28 through to
  v1.30 also supported this feature, although as Alpha it was not recommended for
  production nor enabled by default until the v1.31 release.

Not older than
: Return data at least as new as the provided `resourceVersion`. The newest available
  data is preferred, but any data not older than the provided `resourceVersion` may be
  served. For **list** requests to servers that honor the `resourceVersionMatch`
  parameter, this guarantees that the collection's `.metadata.resourceVersion` is not
  older than the requested `resourceVersion`, but does not make any guarantee about
  the `.metadata.resourceVersion` of any of the items in that collection.

Exact
: Return data at the exact resource version provided. If the provided
  `resourceVersion` is unavailable, the server responds with HTTP 410 "Gone". For
  **list** requests to servers that honor the `resourceVersionMatch` parameter, this
  guarantees that the collection's `.metadata.resourceVersion` is the same as the
  `resourceVersion` you requested in the query string. That guarantee does not apply
  to the `.metadata.resourceVersion` of any items within that collection.

Continue Token, Exact
: Return data at the resource version of the initial paginated **list** call. The
  returned _continue tokens_ are responsible for keeping track of the initially
  provided resource version for all paginated **list** calls after the initial
  paginated **list**.

When you **list** resources and receive a collection response, the response includes
the list metadata of the collection as well as object metadata for each item in that
collection. For individual objects found within a collection response,
`.metadata.resourceVersion` tracks when that object was last updated, and not how
up-to-date the object is when served.

When using `resourceVersionMatch=NotOlderThan` and limit is set, clients must handle
HTTP 410 "Gone" responses. For example, the client might retry with a newer
`resourceVersion` or fall back to `resourceVersion=""`.

When using `resourceVersionMatch=Exact` and `limit` is unset, clients must verify that
the collection's `.metadata.resourceVersion` matches the requested `resourceVersion`,
and handle the case where it does not. For example, the client might fall back to a
request with `limit` set.

### Semantics for watch

For **watch**, the semantics of resource version are:

**watch:**

| resourceVersion unset              | resourceVersion="0"        | resourceVersion="{value other than 0}" |
|------------------------------------|----------------------------|----------------------------------------|
| Get State and Start at Most Recent | Get State and Start at Any | Start at Exact                         |
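As a raw HTTP sketch of those three options (via `kubectl proxy` on its default port
8001; the resource version is an arbitrary example and must still be retained by the
server):

```shell
# Get State and Start at Most Recent
curl --no-buffer 'http://127.0.0.1:8001/api/v1/namespaces/default/pods?watch=1'

# Get State and Start at Any (may be arbitrarily stale)
curl --no-buffer 'http://127.0.0.1:8001/api/v1/namespaces/default/pods?watch=1&resourceVersion=0'

# Start at Exact: resume from a previously observed resourceVersion
curl --no-buffer 'http://127.0.0.1:8001/api/v1/namespaces/default/pods?watch=1&resourceVersion=2947301'
```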
The meaning of those **watch** semantics are:

Get State and Start at Any
: Watches initialized this way may return arbitrarily stale data. Please review this
  semantic before using it, and favor the other semantics where possible. Start a
  **watch** at any resource version; the most recent resource version available is
  preferred, but not required. Any starting resource version is allowed. It is
  possible for the **watch** to start at a much older resource version that the client
  has previously observed, particularly in high availability configurations, due to
  partitions or stale caches. Clients that cannot tolerate this apparent rewinding
  should not start a **watch** with this semantic. To establish initial state, the
  **watch** begins with synthetic "Added" events for all resource instances that exist
  at the starting resource version. All following watch events are for all changes
  that occurred after the resource version the **watch** started at.

Get State and Start at Most Recent
: Start a **watch** at the most recent resource version, which must be consistent (in
  detail: served from etcd via a quorum read). To establish initial state, the
  **watch** begins with synthetic "Added" events of all resources instances that exist
  at the starting resource version. All following watch events are for all changes
  that occurred after the resource version the **watch** started at.

Start at Exact
: Start a **watch** at an exact resource version. The watch events are for all changes
  after the provided resource version. Unlike "Get State and Start at Most Recent" and
  "Get State and Start at Any", the **watch** is not started with synthetic "Added"
  events for the provided resource version. The client is assumed to already have the
  initial state at the starting resource version since the client provided the
  resource version.

### "410 Gone" responses

Servers are not required to serve all older resource versions and may return an HTTP
`410 (Gone)` status code if a client requests a `resourceVersion` older than the
server has retained. Clients must be able to tolerate `410 (Gone)` responses. See
[Efficient detection of changes](#efficient-detection-of-changes) for details on how
to handle `410 (Gone)` responses when watching resources.

If you request a `resourceVersion` outside the applicable limit then, depending on
whether a request is served from cache or not, the API server may reply with a
`410 Gone` HTTP response.

### Unavailable resource versions

Servers are not required to serve unrecognized resource versions. If you request
**list** or **get** for a resource version that the API server does not recognize,
then the API server may either:

* wait briefly for the resource version to become available, then timeout with a
  `504 (Gateway Timeout)` if the provided resource version does not become available
  in a reasonable amount of time;
* respond with a `Retry-After` response header indicating how many seconds a client
  should wait before retrying the request.

If you request a resource version that an API server does not recognize, the
kube-apiserver additionally identifies its error responses with a
"Too large resource version" message.

If you make a **watch** request for an unrecognized resource version, the API server
may wait indefinitely (until the request timeout) for the resource version to become
available.
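Putting the **list** and **watch** semantics together, a typical client lists once,
watches from the returned `metadata.resourceVersion`, and starts over with a fresh
**list** whenever the watch connection ends or returns `410 Gone`. The following is
only a rough shell sketch of that loop (assuming `kubectl proxy` on port 8001 and `jq`
installed); real clients normally use an informer or reflector library instead:

```shell
while true; do
  # List to obtain a snapshot and the resourceVersion it was served at
  rv=$(curl --silent 'http://127.0.0.1:8001/api/v1/namespaces/default/pods' \
        | jq -r '.metadata.resourceVersion')

  # Watch from that resourceVersion; when the server has compacted that history
  # (410 Gone) or the connection times out, loop around and list again.
  curl --silent --no-buffer \
    "http://127.0.0.1:8001/api/v1/namespaces/default/pods?watch=1&resourceVersion=${rv}"
done
```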
---
reviewers:
- bgrant0607
- lavalamp
- thockin
title: Kubernetes Deprecation Policy
content_type: concept
weight: 40
---
<!-- overview -->
This document details the deprecation policy for various facets of the system.
<!-- body -->
Kubernetes is a large system with many components and many contributors. As
with any such software, the feature set naturally evolves over time, and
sometimes a feature may need to be removed. This could include an API, a flag,
or even an entire feature. To avoid breaking existing users, Kubernetes follows
a deprecation policy for aspects of the system that are slated to be removed.
## Deprecating parts of the API
Since Kubernetes is an API-driven system, the API has evolved over time to
reflect the evolving understanding of the problem space. The Kubernetes API is
actually a set of APIs, called "API groups", and each API group is
independently versioned. [API versions](/docs/reference/using-api/#api-versioning) fall
into 3 main tracks, each of which has different policies for deprecation:
| Example | Track |
|----------|----------------------------------|
| v1 | GA (generally available, stable) |
| v1beta1 | Beta (pre-release) |
| v1alpha1 | Alpha (experimental) |
A given release of Kubernetes can support any number of API groups and any
number of versions of each.
The following rules govern the deprecation of elements of the API. This
includes:
* REST resources (aka API objects)
* Fields of REST resources
* Annotations on REST resources, including "beta" annotations but not
including "alpha" annotations.
* Enumerated or constant values
* Component config structures
These rules are enforced between official releases, not between
arbitrary commits to master or release branches.
**Rule #1: API elements may only be removed by incrementing the version of the
API group.**
Once an API element has been added to an API group at a particular version, it
cannot be removed from that version or have its behavior significantly
changed, regardless of track.
For historical reasons, there are 2 "monolithic" API groups - "core" (no
group name) and "extensions". Resources will incrementally be moved from these
legacy API groups into more domain-specific API groups.
**Rule #2: API objects must be able to round-trip between API versions in a given
release without information loss, with the exception of whole REST resources
that do not exist in some versions.**
For example, an object can be written as v1 and then read back as v2 and
converted to v1, and the resulting v1 resource will be identical to the
original. The representation in v2 might be different from v1, but the system
knows how to convert between them in both directions. Additionally, any new
field added in v2 must be able to round-trip to v1 and back, which means v1
might have to add an equivalent field or represent it as an annotation.
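You can observe this round-tripping from the command line by reading the same stored
object at two different versions of its API group, using the `resource.version.group`
syntax that `kubectl` accepts. A sketch, assuming a HorizontalPodAutoscaler named
`example` exists in the current namespace:

```shell
# The same persisted object, converted by the API server to two API versions on read
kubectl get horizontalpodautoscalers.v1.autoscaling example -o yaml
kubectl get horizontalpodautoscalers.v2.autoscaling example -o yaml
```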
**Rule #3: An API version in a given track may not be deprecated in favor of a less stable API version.**
* GA API versions can replace beta and alpha API versions.
* Beta API versions can replace earlier beta and alpha API versions, but *may not* replace GA API versions.
* Alpha API versions can replace earlier alpha API versions, but *may not* replace GA or beta API versions.
**Rule #4a: API lifetime is determined by the API stability level**
* GA API versions may be marked as deprecated, but must not be removed within a major version of Kubernetes
* Beta API versions are deprecated no more than 9 months or 3 minor releases after introduction (whichever is longer),
and are no longer served 9 months or 3 minor releases after deprecation (whichever is longer)
* Alpha API versions may be removed in any release without prior deprecation notice
This ensures beta API support covers the [maximum supported version skew of 2 releases](/releases/version-skew-policy/),
and that APIs don't stagnate on unstable beta versions, accumulating production usage that will be
disrupted when support for the beta API ends.
There are no current plans for a major version revision of Kubernetes that removes GA APIs.
Until [#52185](https://github.com/kubernetes/kubernetes/issues/52185) is
resolved, no API versions that have been persisted to storage may be removed.
Serving REST endpoints for those versions may be disabled (subject to the
deprecation timelines in this document), but the API server must remain capable
of decoding/converting previously persisted data from storage.
**Rule #4b: The "preferred" API version and the "storage version" for a given
group may not advance until after a release has been made that supports both the
new version and the previous version**
Users must be able to upgrade to a new release of Kubernetes and then roll back
to a previous release, without converting anything to the new API version or
suffering breakages (unless they explicitly used features only available in the
newer version). This is particularly evident in the stored representation of
objects.
All of this is best illustrated by examples. Imagine a Kubernetes release,
version X, which introduces a new API group. A new Kubernetes release is made
approximately every 4 months (3 per year). The following table describes which
API versions are supported in a series of subsequent releases.
<table>
<thead>
<tr>
<th>Release</th>
<th>API Versions</th>
<th>Preferred/Storage Version</th>
<th>Notes</th>
</tr>
</thead>
<tbody>
<tr>
<td>X</td>
<td>v1alpha1</td>
<td>v1alpha1</td>
<td></td>
</tr>
<tr>
<td>X+1</td>
<td>v1alpha2</td>
<td>v1alpha2</td>
<td>
<ul>
<li>v1alpha1 is removed. See release notes for required actions.</li>
</ul>
</td>
</tr>
<tr>
<td>X+2</td>
<td>v1beta1</td>
<td>v1beta1</td>
<td>
<ul>
<li>v1alpha2 is removed. See release notes for required actions.</li>
</ul>
</td>
</tr>
<tr>
<td>X+3</td>
<td>v1beta2, v1beta1 (deprecated)</td>
<td>v1beta1</td>
<td>
<ul>
<li>v1beta1 is deprecated. See release notes for required actions.</li>
</ul>
</td>
</tr>
<tr>
<td>X+4</td>
<td>v1beta2, v1beta1 (deprecated)</td>
<td>v1beta2</td>
<td></td>
</tr>
<tr>
<td>X+5</td>
<td>v1, v1beta1 (deprecated), v1beta2 (deprecated)</td>
<td>v1beta2</td>
<td>
<ul>
<li>v1beta2 is deprecated. See release notes for required actions.</li>
</ul>
</td>
</tr>
<tr>
<td>X+6</td>
<td>v1, v1beta2 (deprecated)</td>
<td>v1</td>
<td>
<ul>
<li>v1beta1 is removed. See release notes for required actions.</li>
</ul>
</td>
</tr>
<tr>
<td>X+7</td>
<td>v1, v1beta2 (deprecated)</td>
<td>v1</td>
<td></td>
</tr>
<tr>
<td>X+8</td>
<td>v2alpha1, v1</td>
<td>v1</td>
<td>
<ul>
<li>v1beta2 is removed. See release notes for required actions.</li>
</ul>
</td>
</tr>
<tr>
<td>X+9</td>
<td>v2alpha2, v1</td>
<td>v1</td>
<td>
<ul>
<li>v2alpha1 is removed. See release notes for required actions.</li>
</ul>
</td>
</tr>
<tr>
<td>X+10</td>
<td>v2beta1, v1</td>
<td>v1</td>
<td>
<ul>
<li>v2alpha2 is removed. See release notes for required actions.</li>
</ul>
</td>
</tr>
<tr>
<td>X+11</td>
<td>v2beta2, v2beta1 (deprecated), v1</td>
<td>v1</td>
<td>
<ul>
<li>v2beta1 is deprecated. See release notes for required actions.</li>
</ul>
</td>
</tr>
<tr>
<td>X+12</td>
<td>v2, v2beta2 (deprecated), v2beta1 (deprecated), v1 (deprecated)</td>
<td>v1</td>
<td>
<ul>
<li>v2beta2 is deprecated. See release notes for required actions.</li>
<li>v1 is deprecated in favor of v2, but will not be removed</li>
</ul>
</td>
</tr>
<tr>
<td>X+13</td>
<td>v2, v2beta1 (deprecated), v2beta2 (deprecated), v1 (deprecated)</td>
<td>v2</td>
<td></td>
</tr>
<tr>
<td>X+14</td>
<td>v2, v2beta2 (deprecated), v1 (deprecated)</td>
<td>v2</td>
<td>
<ul>
<li>v2beta1 is removed. See release notes for required actions.</li>
</ul>
</td>
</tr>
<tr>
<td>X+15</td>
<td>v2, v1 (deprecated)</td>
<td>v2</td>
<td>
<ul>
<li>v2beta2 is removed. See release notes for required actions.</li>
</ul>
</td>
</tr>
</tbody>
</table>
### REST resources (aka API objects)
Consider a hypothetical REST resource named Widget, which was present in API v1
in the above timeline, and which needs to be deprecated. We document and
[announce](https://groups.google.com/forum/#!forum/kubernetes-announce) the
deprecation in sync with release X+1. The Widget resource still exists in API
version v1 (deprecated) but not in v2alpha1. The Widget resource continues to
exist and function in releases up to and including X+8. Only in release X+9,
when API v1 has aged out, does the Widget resource cease to exist, and the
behavior get removed.
Starting in Kubernetes v1.19, making an API request to a deprecated REST API endpoint:
1. Returns a `Warning` header
(as defined in [RFC7234, Section 5.5](https://tools.ietf.org/html/rfc7234#section-5.5)) in the API response.
1. Adds a `"k8s.io/deprecated":"true"` annotation to the
[audit event](/docs/tasks/debug/debug-cluster/audit/) recorded for the request.
1. Sets an `apiserver_requested_deprecated_apis` gauge metric to `1` in the `kube-apiserver`
process. The metric has labels for `group`, `version`, `resource`, `subresource` that can be joined
to the `apiserver_request_total` metric, and a `removed_release` label that indicates the
Kubernetes release in which the API will no longer be served. The following Prometheus query
returns information about requests made to deprecated APIs which will be removed in v1.22:
```promql
apiserver_requested_deprecated_apis{removed_release="1.22"} * on(group,version,resource,subresource) group_right() apiserver_request_total
```
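If you do not run Prometheus, you can also inspect the raw gauge directly from the API
server's `/metrics` endpoint; this sketch assumes you are authorized to read that
non-resource URL:

```shell
kubectl get --raw /metrics | grep '^apiserver_requested_deprecated_apis'
```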
### Fields of REST resources
As with whole REST resources, an individual field which was present in API v1
must exist and function until API v1 is removed. Unlike whole resources, the
v2 APIs may choose a different representation for the field, as long as it can
be round-tripped. For example a v1 field named "magnitude" which was
deprecated might be named "deprecatedMagnitude" in API v2. When v1 is
eventually removed, the deprecated field can be removed from v2.
### Enumerated or constant values
As with whole REST resources and fields thereof, a constant value which was
supported in API v1 must exist and function until API v1 is removed.
### Component config structures
Component configs are versioned and managed similar to REST resources.
### Future work
Over time, Kubernetes will introduce more fine-grained API versions, at which
point these rules will be adjusted as needed.
## Deprecating a flag or CLI
The Kubernetes system is composed of several different programs cooperating.
Sometimes, a Kubernetes release might remove flags or CLI commands
(collectively "CLI elements") in these programs. The individual programs
naturally sort into two main groups - user-facing and admin-facing programs,
which vary slightly in their deprecation policies. Unless a flag is explicitly
prefixed or documented as "alpha" or "beta", it is considered GA.
CLI elements are effectively part of the API to the system, but since they are
not versioned in the same way as the REST API, the rules for deprecation are as
follows:
**Rule #5a: CLI elements of user-facing components (e.g. kubectl) must function
after their announced deprecation for no less than:**
* **GA: 12 months or 2 releases (whichever is longer)**
* **Beta: 3 months or 1 release (whichever is longer)**
* **Alpha: 0 releases**
**Rule #5b: CLI elements of admin-facing components (e.g. kubelet) must function
after their announced deprecation for no less than:**
* **GA: 6 months or 1 release (whichever is longer)**
* **Beta: 3 months or 1 release (whichever is longer)**
* **Alpha: 0 releases**
**Rule #5c: Command line interface (CLI) elements cannot be deprecated in favor of
less stable CLI elements**
Similar to Rule #3 for APIs, if an element of a command line interface is being replaced with an
alternative implementation, such as by renaming an existing element, or by switching to
use configuration sourced from a file
instead of a command line argument, that recommended alternative must be of
the same or higher stability level.
**Rule #6: Deprecated CLI elements must emit warnings (which may optionally be disabled)
when used.**
## Deprecating a feature or behavior
Occasionally a Kubernetes release needs to deprecate some feature or behavior
of the system that is not controlled by the API or CLI. In this case, the
rules for deprecation are as follows:
**Rule #7: Deprecated behaviors must function for no less than 1 year after their
announced deprecation.**
If the feature or behavior is being replaced with an alternative implementation
that requires work to adopt the change, there should be an effort to simplify
the transition whenever possible. If an alternative implementation is under
Kubernetes organization control, the following rules apply:
**Rule #8: The feature or behavior must not be deprecated in favor of an alternative
implementation that is less stable**
For example, a generally available feature cannot be deprecated in favor of a Beta replacement.
The Kubernetes project does, however, encourage users to adopt and transition to alternative
implementations even before they reach the same maturity level. This is particularly important
for exploring new use cases of a feature or getting early feedback on the replacement.
Alternative implementations may sometimes be external tools or products,
for example a feature may move from the kubelet to a container runtime
that is not under Kubernetes project control. In such cases, the rule cannot be
applied, but there must be an effort to ensure that there is a transition path
that does not compromise on components' maturity levels. In the example with
container runtimes, the effort may involve trying to ensure that popular container runtimes
have versions that offer the same level of stability while implementing that replacement behavior.
Deprecation rules for features and behaviors do not imply that all changes
to the system are governed by this policy.
These rules apply only to significant, user-visible behaviors which impact the
correctness of applications running on Kubernetes or that impact the
administration of Kubernetes clusters, and which are being removed entirely.
An exception to the above rule is _feature gates_. Feature gates are key=value
pairs that allow users to enable/disable experimental features.
Feature gates are intended to cover the development life cycle of a feature - they
are not intended to be long-term APIs. As such, they are expected to be deprecated
and removed after a feature becomes GA or is dropped.
As a feature moves through the stages, the associated feature gate evolves.
The feature life cycle matched to its corresponding feature gate is:
* Alpha: the feature gate is disabled by default and can be enabled by the user.
* Beta: the feature gate is enabled by default and can be disabled by the user.
* GA: the feature gate is deprecated (see ["Deprecation"](#deprecation)) and becomes
non-operational.
* GA, deprecation window complete: the feature gate is removed and calls to it are
no longer accepted.
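Feature gates are set per component with the `--feature-gates` command line flag (or
the equivalent field in a component configuration file). The gate names below are
placeholders rather than real features:

```shell
# Enable a hypothetical alpha feature and explicitly disable a hypothetical beta feature
kubelet --feature-gates=MyAlphaFeature=true,MyBetaFeature=false
```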
### Deprecation
Features can be removed at any point in the life cycle prior to GA. When features are
removed prior to GA, their associated feature gates are also deprecated.
When an invocation tries to disable a non-operational feature gate, the call fails in order
to avoid unsupported scenarios that might otherwise run silently.
In some cases, removing pre-GA features requires considerable time. Feature gates can remain
operational until their associated feature is fully removed, at which point the feature gate
itself can be deprecated.
When removing a feature gate for a GA feature also requires considerable time, calls to
feature gates may remain operational if the feature gate has no effect on the feature,
and if the feature gate causes no errors.
Features intended to be disabled by users should include a mechanism for disabling the
feature in the associated feature gate.
Versioning for feature gates is different from the previously discussed components,
therefore the rules for deprecation are as follows:
**Rule #9: Feature gates must be deprecated when the corresponding feature they control
transitions a lifecycle stage as follows. Feature gates must function for no less than:**
* **Beta feature to GA: 6 months or 2 releases (whichever is longer)**
* **Beta feature to EOL: 3 months or 1 release (whichever is longer)**
* **Alpha feature to EOL: 0 releases**
**Rule #10: Deprecated feature gates must respond with a warning when used. When a feature gate
is deprecated, it must be documented both in the release notes and in the corresponding CLI help.
Both warnings and documentation must indicate whether a feature gate is non-operational.**
## Deprecating a metric
Each component of the Kubernetes control-plane exposes metrics (usually the
`/metrics` endpoint), which are typically ingested by cluster administrators.
Not all metrics are the same: some metrics are commonly used as SLIs or used
to determine SLOs; these tend to have greater import. Other metrics are more
experimental in nature or are used primarily in the Kubernetes development
process.
Accordingly, metrics fall under three stability classes (`ALPHA`, `BETA`, `STABLE`);
this impacts removal of a metric during a Kubernetes release. These classes
are determined by the perceived importance of the metric. The rules for
deprecating and removing a metric are as follows:
**Rule #11a: Metrics, for the corresponding stability class, must function for no less than:**
* **STABLE: 4 releases or 12 months (whichever is longer)**
* **BETA: 2 releases or 8 months (whichever is longer)**
* **ALPHA: 0 releases**
**Rule #11b: Metrics, after their _announced deprecation_, must function for no less than:**
* **STABLE: 3 releases or 9 months (whichever is longer)**
* **BETA: 1 release or 4 months (whichever is longer)**
* **ALPHA: 0 releases**
Deprecated metrics will have their description text prefixed with a deprecation notice
string '(Deprecated from x.y)' and a warning log will be emitted during metric
registration. Like their stable undeprecated counterparts, deprecated metrics will
be automatically registered to the metrics endpoint and therefore visible.
On a subsequent release (when the metric's `deprecatedVersion` is equal to
_current_kubernetes_version - 3_), a deprecated metric will become a _hidden_ metric.
**_Unlike_** their deprecated counterparts, hidden metrics will _no longer_ be
automatically registered to the metrics endpoint (hence hidden). However, they
can be explicitly enabled through a command line flag on the binary
(`--show-hidden-metrics-for-version=`). This provides cluster admins an
escape hatch to properly migrate off of a deprecated metric, if they were not
able to react to the earlier deprecation warnings. Hidden metrics should be
deleted after one release.
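For example, if a metric you still depend on was hidden in the release you are
running, you could temporarily re-register it while migrating your dashboards; the
version value is a placeholder for the relevant previous release:

```shell
kube-apiserver --show-hidden-metrics-for-version=1.24
```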
## Exceptions
No policy can cover every possible situation. This policy is a living
document, and will evolve over time. In practice, there will be situations
that do not fit neatly into this policy, or for which this policy becomes a
serious impediment. Such situations should be discussed with SIGs and project
leaders to find the best solutions for those specific cases, always bearing in
mind that Kubernetes is committed to being a stable system that, as much as
possible, never breaks users. Exceptions will always be announced in all
relevant release notes. | kubernetes reference | reviewers bgrant0607 lavalamp thockin title Kubernetes Deprecation Policy content type concept weight 40 overview This document details the deprecation policy for various facets of the system body Kubernetes is a large system with many components and many contributors As with any such software the feature set naturally evolves over time and sometimes a feature may need to be removed This could include an API a flag or even an entire feature To avoid breaking existing users Kubernetes follows a deprecation policy for aspects of the system that are slated to be removed Deprecating parts of the API Since Kubernetes is an API driven system the API has evolved over time to reflect the evolving understanding of the problem space The Kubernetes API is actually a set of APIs called API groups and each API group is independently versioned API versions docs reference using api api versioning fall into 3 main tracks each of which has different policies for deprecation Example Track v1 GA generally available stable v1beta1 Beta pre release v1alpha1 Alpha experimental A given release of Kubernetes can support any number of API groups and any number of versions of each The following rules govern the deprecation of elements of the API This includes REST resources aka API objects Fields of REST resources Annotations on REST resources including beta annotations but not including alpha annotations Enumerated or constant values Component config structures These rules are enforced between official releases not between arbitrary commits to master or release branches Rule 1 API elements may only be removed by incrementing the version of the API group Once an API element has been added to an API group at a particular version it can not be removed from that version or have its behavior significantly changed regardless of track For historical reasons there are 2 monolithic API groups core no group name and extensions Resources will incrementally be moved from these legacy API groups into more domain specific API groups Rule 2 API objects must be able to round trip between API versions in a given release without information loss with the exception of whole REST resources that do not exist in some versions For example an object can be written as v1 and then read back as v2 and converted to v1 and the resulting v1 resource will be identical to the original The representation in v2 might be different from v1 but the system knows how to convert between them in both directions Additionally any new field added in v2 must be able to round trip to v1 and back which means v1 might have to add an equivalent field or represent it as an annotation Rule 3 An API version in a given track may not be deprecated in favor of a less stable API version GA API versions can replace beta and alpha API versions Beta API versions can replace earlier beta and alpha API versions but may not replace GA API versions Alpha API versions can replace earlier alpha API versions but may not replace GA or beta API versions Rule 4a API lifetime is determined by the API stability level GA API versions may be marked as deprecated but must not be removed within a major version of Kubernetes Beta API versions are deprecated no more than 9 months or 3 minor releases after introduction whichever is longer and are no longer served 9 months or 3 minor releases after deprecation whichever is longer Alpha API versions may be removed in any release without prior deprecation notice This ensures beta API support 
to find the best solutions for those specific cases always bearing in mind that Kubernetes is committed to being a stable system that as much as possible never breaks users Exceptions will always be announced in all relevant release notes |
reviewers:
- liggitt
- lavalamp
- thockin
- smarterclayton
title: "Deprecated API Migration Guide"
weight: 45
content_type: reference
---
<!-- overview -->
As the Kubernetes API evolves, APIs are periodically reorganized or upgraded.
When APIs evolve, the old API is deprecated and eventually removed.
This page contains information you need to know when migrating from
deprecated API versions to newer and more stable API versions.
<!-- body -->
## Removed APIs by release
### v1.32
The **v1.32** release will stop serving the following deprecated API versions:
#### Flow control resources {#flowcontrol-resources-v132}
The **flowcontrol.apiserver.k8s.io/v1beta3** API version of FlowSchema and PriorityLevelConfiguration will no longer be served in v1.32.
* Migrate manifests and API clients to use the **flowcontrol.apiserver.k8s.io/v1** API version, available since v1.29.
* All existing persisted objects are accessible via the new API
* Notable changes in **flowcontrol.apiserver.k8s.io/v1**:
* The PriorityLevelConfiguration `spec.limited.nominalConcurrencyShares` field only defaults to 30 when unspecified, and an explicit value of 0 is not changed to 30.
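For illustration, a manifest migrated to the **flowcontrol.apiserver.k8s.io/v1** API version could look like the following sketch (the object name and share value are placeholders, not taken from this guide):
```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1
kind: PriorityLevelConfiguration
metadata:
  name: example-priority-level # hypothetical name
spec:
  type: Limited
  limited:
    # In v1 this field only defaults to 30 when left unspecified;
    # an explicit 0 is kept as 0 rather than being rewritten to 30.
    nominalConcurrencyShares: 30
    limitResponse:
      type: Reject
```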
### v1.29
The **v1.29** release stopped serving the following deprecated API versions:
#### Flow control resources {#flowcontrol-resources-v129}
The **flowcontrol.apiserver.k8s.io/v1beta2** API version of FlowSchema and PriorityLevelConfiguration is no longer served as of v1.29.
* Migrate manifests and API clients to use the **flowcontrol.apiserver.k8s.io/v1** API version, available since v1.29, or the **flowcontrol.apiserver.k8s.io/v1beta3** API version, available since v1.26.
* All existing persisted objects are accessible via the new API
* Notable changes in **flowcontrol.apiserver.k8s.io/v1**:
* The PriorityLevelConfiguration `spec.limited.assuredConcurrencyShares` field is renamed to `spec.limited.nominalConcurrencyShares` and only defaults to 30 when unspecified, and an explicit value of 0 is not changed to 30.
* Notable changes in **flowcontrol.apiserver.k8s.io/v1beta3**:
* The PriorityLevelConfiguration `spec.limited.assuredConcurrencyShares` field is renamed to `spec.limited.nominalConcurrencyShares`
### v1.27
The **v1.27** release stopped serving the following deprecated API versions:
#### CSIStorageCapacity {#csistoragecapacity-v127}
The **storage.k8s.io/v1beta1** API version of CSIStorageCapacity is no longer served as of v1.27.
* Migrate manifests and API clients to use the **storage.k8s.io/v1** API version, available since v1.24.
* All existing persisted objects are accessible via the new API
* No notable changes
### v1.26
The **v1.26** release stopped serving the following deprecated API versions:
#### Flow control resources {#flowcontrol-resources-v126}
The **flowcontrol.apiserver.k8s.io/v1beta1** API version of FlowSchema and PriorityLevelConfiguration is no longer served as of v1.26.
* Migrate manifests and API clients to use the **flowcontrol.apiserver.k8s.io/v1beta2** API version.
* All existing persisted objects are accessible via the new API
* No notable changes
#### HorizontalPodAutoscaler {#horizontalpodautoscaler-v126}
The **autoscaling/v2beta2** API version of HorizontalPodAutoscaler is no longer served as of v1.26.
* Migrate manifests and API clients to use the **autoscaling/v2** API version, available since v1.23.
* All existing persisted objects are accessible via the new API
* Notable changes:
* `targetAverageUtilization` is replaced with `target.averageUtilization` and `target.type: Utilization`. See [Autoscaling on multiple metrics and custom metrics](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics).
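As a sketch, the **autoscaling/v2** form of a CPU utilization metric looks like this (the object and workload names are placeholders):
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app # hypothetical workload
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      # replaces the older targetAverageUtilization field
      target:
        type: Utilization
        averageUtilization: 80
```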
### v1.25
The **v1.25** release stopped serving the following deprecated API versions:
#### CronJob {#cronjob-v125}
The **batch/v1beta1** API version of CronJob is no longer served as of v1.25.
* Migrate manifests and API clients to use the **batch/v1** API version, available since v1.21.
* All existing persisted objects are accessible via the new API
* No notable changes
#### EndpointSlice {#endpointslice-v125}
The **discovery.k8s.io/v1beta1** API version of EndpointSlice is no longer served as of v1.25.
* Migrate manifests and API clients to use the **discovery.k8s.io/v1** API version, available since v1.21.
* All existing persisted objects are accessible via the new API
* Notable changes in **discovery.k8s.io/v1**:
* use per Endpoint `nodeName` field instead of deprecated `topology["kubernetes.io/hostname"]` field
* use per Endpoint `zone` field instead of deprecated `topology["topology.kubernetes.io/zone"]` field
* `topology` is replaced with the `deprecatedTopology` field which is not writable in v1
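A sketch of an EndpointSlice using the **discovery.k8s.io/v1** fields (names and addresses are placeholders):
```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: example-abc # hypothetical name
  labels:
    kubernetes.io/service-name: example # hypothetical Service
addressType: IPv4
ports:
- name: http
  protocol: TCP
  port: 80
endpoints:
- addresses:
  - "10.0.0.5"
  conditions:
    ready: true
  # per-endpoint fields replace the old topology map
  nodeName: node-1
  zone: us-west-2a
```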
#### Event {#event-v125}
The **events.k8s.io/v1beta1** API version of Event is no longer served as of v1.25.
* Migrate manifests and API clients to use the **events.k8s.io/v1** API version, available since v1.19.
* All existing persisted objects are accessible via the new API
* Notable changes in **events.k8s.io/v1**:
* `type` is limited to `Normal` and `Warning`
* `involvedObject` is renamed to `regarding`
* `action`, `reason`, `reportingController`, and `reportingInstance` are required
when creating new **events.k8s.io/v1** Events
* use `eventTime` instead of the deprecated `firstTimestamp` field (which is renamed
to `deprecatedFirstTimestamp` and not permitted in new **events.k8s.io/v1** Events)
* use `series.lastObservedTime` instead of the deprecated `lastTimestamp` field
(which is renamed to `deprecatedLastTimestamp` and not permitted in new **events.k8s.io/v1** Events)
* use `series.count` instead of the deprecated `count` field
(which is renamed to `deprecatedCount` and not permitted in new **events.k8s.io/v1** Events)
* use `reportingController` instead of the deprecated `source.component` field
(which is renamed to `deprecatedSource.component` and not permitted in new **events.k8s.io/v1** Events)
* use `reportingInstance` instead of the deprecated `source.host` field
(which is renamed to `deprecatedSource.host` and not permitted in new **events.k8s.io/v1** Events)
#### HorizontalPodAutoscaler {#horizontalpodautoscaler-v125}
The **autoscaling/v2beta1** API version of HorizontalPodAutoscaler is no longer served as of v1.25.
* Migrate manifests and API clients to use the **autoscaling/v2** API version, available since v1.23.
* All existing persisted objects are accessible via the new API
* Notable changes:
* `targetAverageUtilization` is replaced with `target.averageUtilization` and `target.type: Utilization`. See [Autoscaling on multiple metrics and custom metrics](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics).
#### PodDisruptionBudget {#poddisruptionbudget-v125}
The **policy/v1beta1** API version of PodDisruptionBudget is no longer served as of v1.25.
* Migrate manifests and API clients to use the **policy/v1** API version, available since v1.21.
* All existing persisted objects are accessible via the new API
* Notable changes in **policy/v1**:
* an empty `spec.selector` (`{}`) written to a `policy/v1` PodDisruptionBudget selects all
pods in the namespace (in `policy/v1beta1` an empty `spec.selector` selected no pods).
An unset `spec.selector` selects no pods in either API version.
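For example, a **policy/v1** PodDisruptionBudget that deliberately selects every pod in its namespace might look like this sketch (the name is a placeholder):
```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb # hypothetical name
spec:
  minAvailable: 1
  # an explicit empty selector matches all pods in the namespace in policy/v1;
  # omitting spec.selector entirely would match no pods
  selector: {}
```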
#### PodSecurityPolicy {#psp-v125}
PodSecurityPolicy in the **policy/v1beta1** API version is no longer served as of v1.25,
and the PodSecurityPolicy admission controller will be removed.
Migrate to [Pod Security Admission](/docs/concepts/security/pod-security-admission/)
or a [3rd party admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/).
For a migration guide, see [Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller](/docs/tasks/configure-pod-container/migrate-from-psp/).
For more information on the deprecation, see [PodSecurityPolicy Deprecation: Past, Present, and Future](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/).
#### RuntimeClass {#runtimeclass-v125}
RuntimeClass in the **node.k8s.io/v1beta1** API version is no longer served as of v1.25.
* Migrate manifests and API clients to use the **node.k8s.io/v1** API version, available since v1.20.
* All existing persisted objects are accessible via the new API
* No notable changes
### v1.22
The **v1.22** release stopped serving the following deprecated API versions:
#### Webhook resources {#webhook-resources-v122}
The **admissionregistration.k8s.io/v1beta1** API version of MutatingWebhookConfiguration
and ValidatingWebhookConfiguration is no longer served as of v1.22.
* Migrate manifests and API clients to use the **admissionregistration.k8s.io/v1** API version, available since v1.16.
* All existing persisted objects are accessible via the new APIs
* Notable changes:
* `webhooks[*].failurePolicy` default changed from `Ignore` to `Fail` for v1
* `webhooks[*].matchPolicy` default changed from `Exact` to `Equivalent` for v1
* `webhooks[*].timeoutSeconds` default changed from `30s` to `10s` for v1
* `webhooks[*].sideEffects` default value is removed, and the field made required,
and only `None` and `NoneOnDryRun` are permitted for v1
* `webhooks[*].admissionReviewVersions` default value is removed and the field made
required for v1 (supported versions for AdmissionReview are `v1` and `v1beta1`)
* `webhooks[*].name` must be unique in the list for objects created via `admissionregistration.k8s.io/v1`
#### CustomResourceDefinition {#customresourcedefinition-v122}
The **apiextensions.k8s.io/v1beta1** API version of CustomResourceDefinition is no longer served as of v1.22.
* Migrate manifests and API clients to use the **apiextensions.k8s.io/v1** API version, available since v1.16.
* All existing persisted objects are accessible via the new API
* Notable changes:
* `spec.scope` is no longer defaulted to `Namespaced` and must be explicitly specified
* `spec.version` is removed in v1; use `spec.versions` instead
* `spec.validation` is removed in v1; use `spec.versions[*].schema` instead
* `spec.subresources` is removed in v1; use `spec.versions[*].subresources` instead
* `spec.additionalPrinterColumns` is removed in v1; use `spec.versions[*].additionalPrinterColumns` instead
* `spec.conversion.webhookClientConfig` is moved to `spec.conversion.webhook.clientConfig` in v1
* `spec.conversion.conversionReviewVersions` is moved to `spec.conversion.webhook.conversionReviewVersions` in v1
* `spec.versions[*].schema.openAPIV3Schema` is now required when creating v1 CustomResourceDefinition objects,
and must be a [structural schema](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#specifying-a-structural-schema)
* `spec.preserveUnknownFields: true` is disallowed when creating v1 CustomResourceDefinition objects;
it must be specified within schema definitions as `x-kubernetes-preserve-unknown-fields: true`
* In `additionalPrinterColumns` items, the `JSONPath` field was renamed to `jsonPath` in v1
(fixes [#66531](https://github.com/kubernetes/kubernetes/issues/66531))
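Putting several of these changes together, a minimal **apiextensions.k8s.io/v1** CustomResourceDefinition might look like the following sketch (the `widgets.example.com` group and the schema are hypothetical):
```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com # hypothetical CRD
spec:
  group: example.com
  scope: Namespaced # must now be specified explicitly
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    # per-version schema replaces the removed top-level spec.validation
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            x-kubernetes-preserve-unknown-fields: true
    additionalPrinterColumns:
    - name: Age
      type: date
      jsonPath: .metadata.creationTimestamp # renamed from JSONPath
```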
#### APIService {#apiservice-v122}
The **apiregistration.k8s.io/v1beta1** API version of APIService is no longer served as of v1.22.
* Migrate manifests and API clients to use the **apiregistration.k8s.io/v1** API version, available since v1.10.
* All existing persisted objects are accessible via the new API
* No notable changes
#### TokenReview {#tokenreview-v122}
The **authentication.k8s.io/v1beta1** API version of TokenReview is no longer served as of v1.22.
* Migrate manifests and API clients to use the **authentication.k8s.io/v1** API version, available since v1.6.
* No notable changes
#### SubjectAccessReview resources {#subjectaccessreview-resources-v122}
The **authorization.k8s.io/v1beta1** API version of LocalSubjectAccessReview,
SelfSubjectAccessReview, SubjectAccessReview, and SelfSubjectRulesReview is no longer served as of v1.22.
* Migrate manifests and API clients to use the **authorization.k8s.io/v1** API version, available since v1.6.
* Notable changes:
* `spec.group` was renamed to `spec.groups` in v1 (fixes [#32709](https://github.com/kubernetes/kubernetes/issues/32709))
#### CertificateSigningRequest {#certificatesigningrequest-v122}
The **certificates.k8s.io/v1beta1** API version of CertificateSigningRequest is no longer served as of v1.22.
* Migrate manifests and API clients to use the **certificates.k8s.io/v1** API version, available since v1.19.
* All existing persisted objects are accessible via the new API
* Notable changes in `certificates.k8s.io/v1`:
* For API clients requesting certificates:
* `spec.signerName` is now required
(see [known Kubernetes signers](/docs/reference/access-authn-authz/certificate-signing-requests/#kubernetes-signers)),
and requests for `kubernetes.io/legacy-unknown` are not allowed to be created via the `certificates.k8s.io/v1` API
* `spec.usages` is now required, may not contain duplicate values, and must only contain known usages
* For API clients approving or signing certificates:
* `status.conditions` may not contain duplicate types
* `status.conditions[*].status` is now required
* `status.certificate` must be PEM-encoded, and contain only `CERTIFICATE` blocks
#### Lease {#lease-v122}
The **coordination.k8s.io/v1beta1** API version of Lease is no longer served as of v1.22.
* Migrate manifests and API clients to use the **coordination.k8s.io/v1** API version, available since v1.14.
* All existing persisted objects are accessible via the new API
* No notable changes
#### Ingress {#ingress-v122}
The **extensions/v1beta1** and **networking.k8s.io/v1beta1** API versions of Ingress are no longer served as of v1.22.
* Migrate manifests and API clients to use the **networking.k8s.io/v1** API version, available since v1.19.
* All existing persisted objects are accessible via the new API
* Notable changes:
* `spec.backend` is renamed to `spec.defaultBackend`
* The backend `serviceName` field is renamed to `service.name`
* Numeric backend `servicePort` fields are renamed to `service.port.number`
* String backend `servicePort` fields are renamed to `service.port.name`
* `pathType` is now required for each specified path. Options are `Prefix`,
`Exact`, and `ImplementationSpecific`. To match the undefined `v1beta1` behavior, use `ImplementationSpecific`.
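Combining those renames, a **networking.k8s.io/v1** Ingress could look like this sketch (the object and Service names are placeholders):
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress # hypothetical name
spec:
  defaultBackend: # formerly spec.backend
    service:
      name: example-service # formerly serviceName
      port:
        number: 80 # formerly a numeric servicePort
  rules:
  - http:
      paths:
      - path: /app
        pathType: Prefix # now required
        backend:
          service:
            name: example-service
            port:
              name: http # formerly a string servicePort
```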
#### IngressClass {#ingressclass-v122}
The **networking.k8s.io/v1beta1** API version of IngressClass is no longer served as of v1.22.
* Migrate manifests and API clients to use the **networking.k8s.io/v1** API version, available since v1.19.
* All existing persisted objects are accessible via the new API
* No notable changes
#### RBAC resources {#rbac-resources-v122}
The **rbac.authorization.k8s.io/v1beta1** API version of ClusterRole, ClusterRoleBinding,
Role, and RoleBinding is no longer served as of v1.22.
* Migrate manifests and API clients to use the **rbac.authorization.k8s.io/v1** API version, available since v1.8.
* All existing persisted objects are accessible via the new APIs
* No notable changes
#### PriorityClass {#priorityclass-v122}
The **scheduling.k8s.io/v1beta1** API version of PriorityClass is no longer served as of v1.22.
* Migrate manifests and API clients to use the **scheduling.k8s.io/v1** API version, available since v1.14.
* All existing persisted objects are accessible via the new API
* No notable changes
#### Storage resources {#storage-resources-v122}
The **storage.k8s.io/v1beta1** API version of CSIDriver, CSINode, StorageClass, and VolumeAttachment is no longer served as of v1.22.
* Migrate manifests and API clients to use the **storage.k8s.io/v1** API version
* CSIDriver is available in **storage.k8s.io/v1** since v1.19.
* CSINode is available in **storage.k8s.io/v1** since v1.17
* StorageClass is available in **storage.k8s.io/v1** since v1.6
  * VolumeAttachment is available in **storage.k8s.io/v1** since v1.13
* All existing persisted objects are accessible via the new APIs
* No notable changes
### v1.16
The **v1.16** release stopped serving the following deprecated API versions:
#### NetworkPolicy {#networkpolicy-v116}
The **extensions/v1beta1** API version of NetworkPolicy is no longer served as of v1.16.
* Migrate manifests and API clients to use the **networking.k8s.io/v1** API version, available since v1.8.
* All existing persisted objects are accessible via the new API
#### DaemonSet {#daemonset-v116}
The **extensions/v1beta1** and **apps/v1beta2** API versions of DaemonSet are no longer served as of v1.16.
* Migrate manifests and API clients to use the **apps/v1** API version, available since v1.9.
* All existing persisted objects are accessible via the new API
* Notable changes:
* `spec.templateGeneration` is removed
* `spec.selector` is now required and immutable after creation; use the existing
template labels as the selector for seamless upgrades
* `spec.updateStrategy.type` now defaults to `RollingUpdate`
(the default in `extensions/v1beta1` was `OnDelete`)
#### Deployment {#deployment-v116}
The **extensions/v1beta1**, **apps/v1beta1**, and **apps/v1beta2** API versions of Deployment are no longer served as of v1.16.
* Migrate manifests and API clients to use the **apps/v1** API version, available since v1.9.
* All existing persisted objects are accessible via the new API
* Notable changes:
* `spec.rollbackTo` is removed
* `spec.selector` is now required and immutable after creation; use the existing
template labels as the selector for seamless upgrades
* `spec.progressDeadlineSeconds` now defaults to `600` seconds
(the default in `extensions/v1beta1` was no deadline)
* `spec.revisionHistoryLimit` now defaults to `10`
(the default in `apps/v1beta1` was `2`, the default in `extensions/v1beta1` was to retain all)
* `maxSurge` and `maxUnavailable` now default to `25%`
(the default in `extensions/v1beta1` was `1`)
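A minimal **apps/v1** Deployment that satisfies the new selector requirement could look like this sketch (names and image are placeholders):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment # hypothetical name
spec:
  replicas: 3
  # required in apps/v1 and immutable after creation;
  # reuse the existing template labels for a seamless upgrade
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9 # placeholder image
```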
#### StatefulSet {#statefulset-v116}
The **apps/v1beta1** and **apps/v1beta2** API versions of StatefulSet are no longer served as of v1.16.
* Migrate manifests and API clients to use the **apps/v1** API version, available since v1.9.
* All existing persisted objects are accessible via the new API
* Notable changes:
* `spec.selector` is now required and immutable after creation;
use the existing template labels as the selector for seamless upgrades
* `spec.updateStrategy.type` now defaults to `RollingUpdate`
(the default in `apps/v1beta1` was `OnDelete`)
#### ReplicaSet {#replicaset-v116}
The **extensions/v1beta1**, **apps/v1beta1**, and **apps/v1beta2** API versions of ReplicaSet are no longer served as of v1.16.
* Migrate manifests and API clients to use the **apps/v1** API version, available since v1.9.
* All existing persisted objects are accessible via the new API
* Notable changes:
* `spec.selector` is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades
#### PodSecurityPolicy {#psp-v116}
The **extensions/v1beta1** API version of PodSecurityPolicy is no longer served as of v1.16.
* Migrate manifests and API clients to use the **policy/v1beta1** API version, available since v1.10.
* Note that the **policy/v1beta1** API version of PodSecurityPolicy will be removed in v1.25.
## What to do
### Test with deprecated APIs disabled
You can test your clusters by starting an API server with specific API versions disabled
to simulate upcoming removals. Add the following flag to the API server startup arguments:
`--runtime-config=<group>/<version>=false`
For example:
`--runtime-config=admissionregistration.k8s.io/v1beta1=false,apiextensions.k8s.io/v1beta1=false,...`
### Locate use of deprecated APIs
Use [client warnings, metrics, and audit information available in 1.19+](/blog/2020/09/03/warnings/#deprecation-warnings)
to locate use of deprecated APIs.
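For example, assuming you are permitted to read the API server's `/metrics` endpoint (and, for the second command, that audit logging is enabled and you know where its log file is written), you could check for deprecated API usage like this:
```shell
# Requests to deprecated APIs are reported via a gauge metric on the kube-apiserver
kubectl get --raw /metrics | grep apiserver_requested_deprecated_apis

# Audit events for requests to deprecated APIs carry this annotation
grep '"k8s.io/deprecated":"true"' /path/to/kube-apiserver-audit.log # path is cluster-specific
```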
### Migrate to non-deprecated APIs
* Update custom integrations and controllers to call the non-deprecated APIs
* Change YAML files to reference the non-deprecated APIs
You can use the `kubectl convert` command to automatically convert an existing object:
`kubectl convert -f <file> --output-version <group>/<version>`.
For example, to convert an older Deployment to `apps/v1`, you can run:
`kubectl convert -f ./my-deployment.yaml --output-version apps/v1`
This conversion may use non-ideal default values. To learn more about a specific
resource, check the Kubernetes [API reference](/docs/reference/kubernetes-api/).
The `kubectl convert` tool is not installed by default, although
in fact it once was part of `kubectl` itself. For more details, you can read the
[deprecation and removal issue](https://github.com/kubernetes/kubectl/issues/725)
for the built-in subcommand.
To learn how to set up `kubectl convert` on your computer, visit the page that is right for your
operating system:
[Linux](/docs/tasks/tools/install-kubectl-linux/#install-kubectl-convert-plugin),
[macOS](/docs/tasks/tools/install-kubectl-macos/#install-kubectl-convert-plugin), or
[Windows](/docs/tasks/tools/install-kubectl-windows/#install-kubectl-convert-plugin).
title: Server-Side Apply
reviewers:
- smarterclayton
- apelisse
- lavalamp
- liggitt
content_type: concept
weight: 25
---
<!-- overview -->
Kubernetes supports multiple appliers collaborating to manage the fields
of a single [object](/docs/concepts/overview/working-with-objects/).
Server-Side Apply provides an optional mechanism for your cluster's control plane to track
changes to an object's fields. At the level of a specific resource, Server-Side
Apply records and tracks information about control over the fields of that object.
Server-Side Apply helps users and controllers manage their resources through declarative configuration. Clients can create and modify objects
declaratively by submitting their _fully specified intent_.
A fully specified intent is a partial object that only includes the fields and
values for which the user has an opinion. That intent either creates a new
object (using default values for unspecified fields), or is
[combined](#merge-strategy), by the API server, with the existing object.
[Comparison with Client-Side Apply](#comparison-with-client-side-apply) explains
how Server-Side Apply differs from the original, client-side `kubectl apply`
implementation.
<!-- body -->
## Field management
The Kubernetes API server tracks _managed fields_ for all newly created objects.
When trying to apply an object, fields that have a different value and are owned by
another [manager](#managers) will result in a [conflict](#conflicts). This is done
in order to signal that the operation might undo another collaborator's changes.
Writes to objects with managed fields can be forced, in which case the value of any
conflicted field will be overridden, and the ownership will be transferred.
Whenever a field's value does change, ownership moves from its current manager to the
manager making the change.
When a field is removed from the applied configuration, Apply checks whether any
other field managers also own that field. If the field is not owned by any other
field managers, that field is set to its default value (if there is one), or
otherwise is deleted from the object.
The same rule applies to fields that are lists, associative lists, or maps.
For a user to manage a field, in the Server-Side Apply sense, means that the
user relies on and expects the value of the field not to change. The user who
last made an assertion about the value of a field will be recorded as the
current field manager. This can be done by changing the field manager
details explicitly using HTTP `POST` (**create**), `PUT` (**update**), or non-apply
`PATCH` (**patch**). You can also declare and record a field manager
by including a value for that field in a Server-Side Apply operation.
A Server-Side Apply **patch** request requires the client to provide its identity
as a [field manager](#managers). When using Server-Side Apply, trying to change a
field that is controlled by a different manager results in a rejected
request unless the client forces an override.
For details of overrides, see [Conflicts](#conflicts).
When two or more appliers set a field to the same value, they share ownership of
that field. Any subsequent attempt to change the value of the shared field, by any of
the appliers, results in a conflict. Shared field owners may give up ownership
of a field by making a Server-Side Apply **patch** request that doesn't include
that field.
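As a sketch of making a Server-Side Apply request from the command line (the manager name and manifest file are placeholders):
```shell
# Perform a Server-Side Apply, identifying this workflow as the field manager
kubectl apply --server-side --field-manager=example-tool -f example-manifest.yaml
```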
Field management details are stored in a `managedFields` field that is part of an
object's [`metadata`](/docs/reference/kubernetes-api/common-definitions/object-meta/).
If you remove a field from a manifest and apply that manifest, Server-Side
Apply checks if there are any other field managers that also own the field.
If the field is not owned by any other field managers, it is either deleted
from the live object or reset to its default value, if it has one.
The same rule applies to associative list or map items.
Compared to the (legacy)
[`kubectl.kubernetes.io/last-applied-configuration`](/docs/reference/labels-annotations-taints/#kubectl-kubernetes-io-last-applied-configuration)
annotation managed by `kubectl`, Server-Side Apply uses a more declarative
approach, that tracks a user's (or client's) field management, rather than
a user's last applied state. As a side effect of using Server-Side Apply,
information about which field manager manages each field in an object also
becomes available.
### Example {#ssa-example-configmap}
A simple example of an object created using Server-Side Apply could look like this:
`kubectl get` omits managed fields by default.
Add `--show-managed-fields` to show `managedFields` when the output format is either `json` or `yaml`.
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
name: test-cm
namespace: default
labels:
test-label: test
managedFields:
- manager: kubectl
operation: Apply # note capitalization: "Apply" (or "Update")
apiVersion: v1
    time: "2010-10-10T00:00:00Z"
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
f:test-label: {}
f:data:
f:key: {}
data:
key: some value
```
That example ConfigMap object contains a single field management record in
`.metadata.managedFields`. The field management record consists of basic information
about the managing entity itself, plus details about the fields being managed and
the relevant operation (`Apply` or `Update`). If the request that last changed that
field was a Server-Side Apply **patch** then the value of `operation` is `Apply`;
otherwise, it is `Update`.
There is another possible outcome. A client could submit an invalid request
body. If the fully specified intent does not produce a valid object, the
request fails.
It is however possible to change `.metadata.managedFields` through an
**update**, or through a **patch** operation that does not use Server-Side Apply.
Doing so is highly discouraged, but might be a reasonable option to try if,
for example, the `.metadata.managedFields` get into an inconsistent state
(which should not happen in normal operations).
The format of `managedFields` is [described](/docs/reference/kubernetes-api/common-definitions/object-meta/#System)
in the Kubernetes API reference.
The `.metadata.managedFields` field is managed by the API server.
You should avoid updating it manually.
### Conflicts
A _conflict_ is a special status error that occurs when an `Apply` operation tries
to change a field that another manager also claims to manage. This prevents an
applier from unintentionally overwriting the value set by another user. When
this occurs, the applier has 3 options to resolve the conflicts:
* **Overwrite value, become sole manager:** If overwriting the value was
intentional (or if the applier is an automated process like a controller) the
applier should set the `force` query parameter to true (for `kubectl apply`,
you use the `--force-conflicts` command line parameter), and make the request
again. This forces the operation to succeed, changes the value of the field,
and removes the field from all other managers' entries in `managedFields`.
* **Don't overwrite value, give up management claim:** If the applier doesn't
care about the value of the field any more, the applier can remove it from their
local model of the resource, and make a new request with that particular field
omitted. This leaves the value unchanged, and causes the field to be removed
from the applier's entry in `managedFields`.
* **Don't overwrite value, become shared manager:** If the applier still cares
about the value of a field, but doesn't want to overwrite it, they can
change the value of that field in their local model of the resource so as to
match the value of the object on the server, and then make a new request that
takes into account that local update. Doing so leaves the value unchanged,
and causes that field's management to be shared by the applier along with all
other field managers that already claimed to manage it.
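For example, the first option above (forcing the apply and becoming the sole manager) looks like this from `kubectl` (the manifest file name is a placeholder):
```shell
# Overwrite conflicting fields and take sole ownership of them
kubectl apply --server-side --force-conflicts -f example-manifest.yaml
```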
### Field managers {#managers}
Managers identify distinct workflows that are modifying the object (especially
useful on conflicts!), and can be specified through the
[`fieldManager`](/docs/reference/kubernetes-api/common-parameters/common-parameters/#fieldManager)
query parameter as part of a modifying request. When you Apply to a resource,
the `fieldManager` parameter is required.
For other updates, the API server infers a field manager identity from the
"User-Agent:" HTTP header (if present).
When you use the `kubectl` tool to perform a Server-Side Apply operation, `kubectl`
sets the manager identity to `"kubectl"` by default.
## Serialization
At the protocol level, Kubernetes represents Server-Side Apply message bodies
as [YAML](https://yaml.org/), with the media type `application/apply-patch+yaml`.
Whether you are submitting JSON data or YAML data, use
`application/apply-patch+yaml` as the `Content-Type` header value.
All JSON documents are valid YAML. However, Kubernetes has a bug where it uses a YAML
parser that does not fully implement the YAML specification. Some JSON escapes may
not be recognized.
The serialization is the same as for Kubernetes objects, with the exception that
clients are not required to send a complete object.
Here's an example of a Server-Side Apply message body (fully specified intent):
```yaml
{
"apiVersion": "v1",
"kind": "ConfigMap"
}
```
(this would make a no-change update, provided that it was sent as the body
of a **patch** request to a valid `v1/configmaps` resource, and with the
appropriate request `Content-Type`).
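As a sketch of issuing such a request directly, assuming `kubectl proxy` is listening on port 8001 and that a ConfigMap named `test-cm` already exists in the `default` namespace (`example-tool` is a placeholder manager name):
```shell
kubectl proxy --port=8001 &
# Server-Side Apply of the minimal, fully specified intent shown above
curl -X PATCH \
  'http://127.0.0.1:8001/api/v1/namespaces/default/configmaps/test-cm?fieldManager=example-tool' \
  -H 'Content-Type: application/apply-patch+yaml' \
  --data-binary '{"apiVersion":"v1","kind":"ConfigMap"}'
```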
## Operations in scope for field management {#apply-and-update}
The Kubernetes API operations where field management is considered are:
1. Server-Side Apply (HTTP `PATCH`, with content type `application/apply-patch+yaml`)
2. Replacing an existing object (**update** to Kubernetes; `PUT` at the HTTP level)
Both operations update `.metadata.managedFields`, but behave a little differently.
Unless you specify a forced override, an apply operation that encounters field-level
conflicts always fails; by contrast, if you make a change using **update** that would
affect a managed field, a conflict never provokes failure of the operation.
All Server-Side Apply **patch** requests are required to identify themselves by providing a
`fieldManager` query parameter, while the query parameter is optional for **update**
operations. Finally, when using the `Apply` operation you cannot define `managedFields` in
the body of the request that you submit.
An example object with multiple managers could look like this:
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
name: test-cm
namespace: default
labels:
test-label: test
managedFields:
- manager: kubectl
operation: Apply
time: '2019-03-30T15:00:00.000Z'
apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
f:test-label: {}
- manager: kube-controller-manager
operation: Update
apiVersion: v1
time: '2019-03-30T16:00:00.000Z'
fieldsType: FieldsV1
fieldsV1:
f:data:
f:key: {}
data:
key: new value
```
In this example, a second operation was run as an **update** by the manager called
`kube-controller-manager`. The update request succeeded and changed a value in the data
field, which caused that field's management to change to the `kube-controller-manager`.
If this update had instead been attempted using Server-Side Apply, the request
would have failed due to conflicting ownership.
## Merge strategy
The merging strategy, implemented with Server-Side Apply, provides a generally
more stable object lifecycle. Server-Side Apply tries to merge fields based on
the actor who manages them instead of overruling based on values. This way
multiple actors can update the same object without causing unexpected interference.
When a user sends a _fully-specified intent_ object to the Server-Side Apply
endpoint, the server merges it with the live object favoring the value from the
request body if it is specified in both places. If the set of items present in
the applied config is not a superset of the items applied by the same user last
time, each missing item not managed by any other appliers is removed. For
more information about how an object's schema is used to make decisions when
merging, see
[sigs.k8s.io/structured-merge-diff](https://sigs.k8s.io/structured-merge-diff).
The Kubernetes API (and the Go code that implements that API for Kubernetes) allows
defining _merge strategy markers_. These markers describe the merge strategy supported
for fields within Kubernetes objects.
For a CustomResourceDefinition,
you can set these markers when you define the custom resource.
| Golang marker | OpenAPI extension | Possible values | Description |
| --------------- | ---------------------------- | ------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `//+listType` | `x-kubernetes-list-type` | `atomic`/`set`/`map` | Applicable to lists. `set` applies to lists that include only scalar elements. These elements must be unique. `map` applies to lists of nested types only. The key values (see `listMapKey`) must be unique in the list. `atomic` can apply to any list. If configured as `atomic`, the entire list is replaced during merge. At any point in time, a single manager owns the list. If `set` or `map`, different managers can manage entries separately. |
| `//+listMapKey` | `x-kubernetes-list-map-keys` | List of field names, e.g. `["port", "protocol"]` | Only applicable when `+listType=map`. A list of field names whose values uniquely identify entries in the list. While there can be multiple keys, `listMapKey` is singular because keys need to be specified individually in the Go type. The key fields must be scalars. |
| `//+mapType` | `x-kubernetes-map-type` | `atomic`/`granular` | Applicable to maps. `atomic` means that the map can only be entirely replaced by a single manager. `granular` means that the map supports separate managers updating individual fields. |
| `//+structType` | `x-kubernetes-map-type` | `atomic`/`granular` | Applicable to structs; otherwise same usage and OpenAPI annotation as `//+mapType`. |
If `listType` is missing, the API server interprets a
`patchStrategy=merge` marker as a `listType=map` and the
corresponding `patchMergeKey` marker as a `listMapKey`.
The `atomic` list type is recursive.
(In the [Go](https://go.dev/) code for Kubernetes, these markers are specified as
comments and code authors need not repeat them as field tags).
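For example, a CRD author can attach these markers to a list field in the CustomResourceDefinition's
structural schema. The following is a minimal sketch; the `ports` field and its key names are
illustrative and not part of any built-in API:

```yaml
# Illustrative excerpt from a CustomResourceDefinition's OpenAPI v3 schema
ports:
  type: array
  x-kubernetes-list-type: map      # entries are identified by the listed keys
  x-kubernetes-list-map-keys:
    - port
    - protocol
  items:
    type: object
    required: ["port", "protocol"]
    properties:
      port:
        type: integer
      protocol:
        type: string
```

With `map` semantics, different managers can each own separate entries of `ports`,
identified by their `port` and `protocol` values, rather than one manager owning the
entire list.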
## Custom resources and Server-Side Apply
By default, Server-Side Apply treats custom resources as unstructured data. All
keys are treated the same as struct fields, and all lists are considered atomic.
If the CustomResourceDefinition defines a
[schema](/docs/reference/generated/kubernetes-api/#jsonschemaprops-v1-apiextensions-k8s-io)
that contains annotations as defined in the previous [Merge Strategy](#merge-strategy)
section, these annotations will be used when merging objects of this
type.
### Compatibility across topology changes
On rare occasions, the author of a CustomResourceDefinition (CRD) or a built-in type
may want to change the specific topology of a field in their resource,
without incrementing its API version. Changing the topology of types,
by upgrading the cluster or updating the CRD, has different consequences when
updating existing objects. There are two categories of changes: when a field goes from
`map`/`set`/`granular` to `atomic`, and the other way around.
When the `listType`, `mapType`, or `structType` changes from
`map`/`set`/`granular` to `atomic`, the whole list, map, or struct of
existing objects will end up being owned by actors who owned an element
of these types. This means that any further change to these objects
would cause a conflict.
When a `listType`, `mapType`, or `structType` changes from `atomic` to
`map`/`set`/`granular`, the API server is unable to infer the new
ownership of these fields. Because of that, no conflict will be produced
when objects have these fields updated. For that reason, it is not
recommended to change a type from `atomic` to `map`/`set`/`granular`.
Take for example, the custom resource:
```yaml
---
apiVersion: example.com/v1
kind: Foo
metadata:
name: foo-sample
managedFields:
- manager: "manager-one"
operation: Apply
apiVersion: example.com/v1
fieldsType: FieldsV1
fieldsV1:
f:spec:
f:data: {}
spec:
data:
key1: val1
key2: val2
```
Before `spec.data` gets changed from `atomic` to `granular`,
`manager-one` owns the field `spec.data`, and all the fields within it
(`key1` and `key2`). When the CRD gets changed to make `spec.data`
`granular`, `manager-one` continues to own the top-level field
`spec.data` (meaning no other managers can delete the map called `data`
without a conflict), but it no longer owns `key1` and `key2`, so another
manager can then modify or delete those fields without conflict.
## Using Server-Side Apply in a controller
As a developer of a controller, you can use Server-Side Apply as a way to
simplify the update logic of your controller. The main differences with a
read-modify-write and/or patch are the following:
* the applied object must contain all the fields that the controller cares about.
* there is no way to remove fields that haven't been applied by the controller
before (controller can still send a **patch** or **update** for these use-cases).
* the object doesn't have to be read beforehand; `resourceVersion` doesn't have
to be specified.
It is strongly recommended for controllers to always force conflicts on objects that
they own and manage, since they might not be able to resolve or act on these conflicts.
## Transferring ownership
In addition to the concurrency controls provided by [conflict resolution](#conflicts),
Server-Side Apply provides ways to perform coordinated
field ownership transfers from users to controllers.
This is best explained by example. Let's look at how to safely transfer
ownership of the `replicas` field from a user to a controller while enabling
automatic horizontal scaling for a Deployment, using the HorizontalPodAutoscaler
resource and its accompanying controller.
Say a user has defined a Deployment with `replicas` set to the desired value:
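The manifest could look like the following (a representative sketch; the exact content of
the example file referenced below may differ in details such as labels and image version):

```yaml
# nginx-deployment.yaml (representative content)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
```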
And the user has created the Deployment using Server-Side Apply, like so:
```shell
kubectl apply -f https://k8s.io/examples/application/ssa/nginx-deployment.yaml --server-side
```
Then later, automatic scaling is enabled for the Deployment; for example:
```shell
kubectl autoscale deployment nginx-deployment --cpu-percent=50 --min=1 --max=10
```
Now, the user would like to remove `replicas` from their configuration, so they
don't accidentally fight with the HorizontalPodAutoscaler (HPA) and its controller.
However, there is a race: it might take some time before the HPA feels the need
to adjust `.spec.replicas`; if the user removes `.spec.replicas` before the HPA writes
to the field and becomes its owner, then the API server would set `.spec.replicas` to
1 (the default replica count for Deployment).
This is not what the user wants to happen, even temporarily - it might well degrade
a running workload.
There are two solutions:
- (basic) Leave `replicas` in the configuration; when the HPA eventually writes to that
field, the system gives the user a conflict over it. At that point, it is safe
  to remove it from the configuration.
- (more advanced) If, however, the user doesn't want to wait, for example
because they want to keep the cluster legible to their colleagues, then they
can take the following steps to make it safe to remove `replicas` from their
configuration:
First, the user defines a new manifest containing only the `replicas` field:
```yaml
# Save this file as 'nginx-deployment-replicas-only.yaml'.
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 3
```
The YAML file for SSA in this case only contains the fields you want to change.
You are not supposed to provide a fully compliant Deployment manifest if you only
want to modify the `spec.replicas` field using SSA.
The user applies that manifest using a private field manager name. In this example,
the user picked `handover-to-hpa`:
```shell
kubectl apply -f nginx-deployment-replicas-only.yaml \
--server-side --field-manager=handover-to-hpa \
--validate=false
```
If the apply results in a conflict with the HPA controller, then do nothing. The
conflict indicates the controller has claimed the field earlier in the
process than it sometimes does.
At this point the user may remove the `replicas` field from their manifest:
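That is, they apply a manifest that matches their original configuration except that
`replicas` is omitted (a sketch, assuming the representative manifest shown earlier):

```yaml
# nginx-deployment-no-replicas.yaml (representative content, replicas omitted)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
```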
Note that whenever the HPA controller sets the `replicas` field to a new value,
the temporary field manager will no longer own any fields and will be
automatically deleted. No further clean up is required.
### Transferring ownership between managers
Field managers can transfer ownership of a field between each other by setting the field
to the same value in both of their applied configurations, causing them to share
ownership of the field. Once the managers share ownership of the field, one of them
can remove the field from their applied configuration to give up ownership and
complete the transfer to the other field manager.
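For example (a sketch with hypothetical manager names), suppose `manager-one` currently owns
`spec.replicas` and wants to hand it over to `manager-two`:

```yaml
# Step 1: manager-two applies spec.replicas with the same value as the live
# object, so that both managers share ownership of the field.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
---
# Step 2: manager-one re-applies its configuration with spec.replicas removed,
# giving up its share and leaving manager-two as the sole owner.
# (Any other fields that manager-one manages would still be listed here.)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
```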
## Comparison with Client-Side Apply
Server-Side Apply is meant both as a replacement for the original client-side
implementation of the `kubectl apply` subcommand, and as a simple and effective
mechanism for controllers to enact their changes.
Compared to the `last-applied` annotation managed by `kubectl`, Server-Side
Apply uses a more declarative approach, which tracks an object's field management,
rather than a user's last applied state. This means that as a side effect of
using Server-Side Apply, information about which field manager manages each
field in an object also becomes available.
A consequence of the conflict detection and resolution implemented by Server-Side
Apply is that an applier always has up-to-date field values in their local
state. If they don't, they get a conflict the next time they apply. Any of the
three options to resolve conflicts results in the applied configuration being an
up-to-date subset of the fields of the object on the server.
This is different from Client-Side Apply, where outdated values which have been
overwritten by other users are left in an applier's local config. These values
only become accurate when the user updates that specific field, if ever, and an
applier has no way of knowing whether their next apply will overwrite other
users' changes.
Another difference is that an applier using Client-Side Apply is unable to
change the API version they are using, but Server-Side Apply supports this use
case.
## Migration between client-side and server-side apply
### Upgrading from client-side apply to server-side apply
Client-side apply users who manage a resource with `kubectl apply` can start
using server-side apply with the following flag.
```shell
kubectl apply --server-side [--dry-run=server]
```
By default, field management of the object transfers from client-side apply to
kubectl server-side apply, without encountering conflicts.
Keep the `last-applied-configuration` annotation up to date.
The annotation is used to infer the fields that client-side apply manages.
Any fields not managed by client-side apply raise conflicts.
For example, if you used `kubectl scale` to update the replicas field after
client-side apply, then this field is not owned by client-side apply and
creates conflicts on `kubectl apply --server-side`.
This behavior applies to server-side apply with the `kubectl` field manager.
As an exception, you can opt-out of this behavior by specifying a different,
non-default field manager, as seen in the following example. The default field
manager for kubectl server-side apply is `kubectl`.
```shell
kubectl apply --server-side --field-manager=my-manager [--dry-run=server]
```
### Downgrading from server-side apply to client-side apply
If you manage a resource with `kubectl apply --server-side`,
you can downgrade to client-side apply directly with `kubectl apply`.
Downgrading works because kubectl Server-Side Apply keeps the
`last-applied-configuration` annotation up-to-date if you use
`kubectl apply`.
This behavior applies to Server-Side Apply with the `kubectl` field manager.
As an exception, you can opt-out of this behavior by specifying a different,
non-default field manager, as seen in the following example. The default field
manager for kubectl server-side apply is `kubectl`.
```shell
kubectl apply --server-side --field-manager=my-manager [--dry-run=server]
```
## API implementation
The `PATCH` verb for a resource that supports Server-Side Apply accepts the
unofficial `application/apply-patch+yaml` content type. Users of Server-Side
Apply can send partially specified objects as YAML as the body of a `PATCH` request
to the URI of a resource. When applying a configuration, you should always include all the
fields that are important to the outcome (such as a desired state) that you want to define.
All JSON messages are valid YAML. Some clients specify Server-Side Apply requests using YAML
request bodies that are also valid JSON.
### Access control and permissions {#rbac-and-permissions}
Since Server-Side Apply is a type of `PATCH`, a principal (such as a Role for Kubernetes
RBAC) requires the **patch** permission to
edit existing resources, and also needs the **create** verb permission in order to create
new resources with Server-Side Apply.
## Clearing `managedFields`
It is possible to strip all `managedFields` from an object by overwriting them
using a **patch** (JSON Merge Patch, Strategic Merge Patch, JSON Patch), or
through an **update** (HTTP `PUT`); in other words, through every write operation
other than **apply**. This can be done by overwriting the `managedFields` field
with an empty entry. Two examples are:
```console
PATCH /api/v1/namespaces/default/configmaps/example-cm
Accept: application/json
Content-Type: application/merge-patch+json
{
"metadata": {
"managedFields": [
{}
]
}
}
```
```console
PATCH /api/v1/namespaces/default/configmaps/example-cm
Accept: application/json
Content-Type: application/json-patch+json
If-Match: 1234567890123456789
[{"op": "replace", "path": "/metadata/managedFields", "value": [{}]}]
```
This will overwrite the `managedFields` with a list containing a single empty
entry that then results in the `managedFields` being stripped entirely from the
object. Note that setting the `managedFields` to an empty list will not
reset the field. This is on purpose, so `managedFields` never get stripped by
clients not aware of the field.
In cases where the reset operation is combined with changes to other fields
than the `managedFields`, this will result in the `managedFields` being reset
first and the other changes being processed afterwards. As a result the
applier takes ownership of any fields updated in the same request.
Server-Side Apply does not correctly track ownership on
sub-resources that don't receive the resource object type. If you are
using Server-Side Apply with such a sub-resource, the changed fields
may not be tracked.
## What's next
You can read about `managedFields` within the Kubernetes API reference for the
[`metadata`](/docs/reference/kubernetes-api/common-definitions/object-meta/)
top level field.
---
title: Common Expression Language in Kubernetes
reviewers:
- jpbetz
- cici37
content_type: concept
weight: 35
min-kubernetes-server-version: 1.25
---
<!-- overview -->
The [Common Expression Language (CEL)](https://github.com/google/cel-go) is used
in the Kubernetes API to declare validation rules, policy rules, and other
constraints or conditions.
CEL expressions are evaluated directly in the API server, making CEL a
convenient alternative to out-of-process mechanisms, such as webhooks, for many
extensibility use cases. Your CEL expressions continue to execute so long as the
control plane's API server component remains available.
<!-- body -->
## Language overview
The [CEL language](https://github.com/google/cel-spec/blob/master/doc/langdef.md)
has a straightforward syntax that is similar to the expressions in C, C++, Java,
JavaScript and Go.
CEL was designed to be embedded into applications. Each CEL "program" is a
single expression that evaluates to a single value. CEL expressions are
typically short "one-liners" that inline well into the string fields of Kubernetes
API resources.
Inputs to a CEL program are "variables". Each Kubernetes API field that contains
CEL declares in the API documentation which variables are available to use for
that field. For example, in the `x-kubernetes-validations[i].rule` field of
CustomResourceDefinitions, the `self` and `oldSelf` variables are available;
`self` refers to the current state of the custom resource data to be validated
by the CEL expression, and `oldSelf` refers to its previous state. Other Kubernetes API fields may declare
different variables. See the API documentation of the API fields to learn which
variables are available for that field.
Example CEL expressions:
| Rule | Purpose |
|------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------|
| `self.minReplicas <= self.replicas && self.replicas <= self.maxReplicas` | Validate that the three fields defining replicas are ordered appropriately |
| `'Available' in self.stateCounts` | Validate that an entry with the 'Available' key exists in a map |
| `(self.list1.size() == 0) != (self.list2.size() == 0)` | Validate that one of two lists is non-empty, but not both |
| `self.envars.filter(e, e.name == 'MY_ENV').all(e, e.value.matches('^[a-zA-Z]*$'))` | Validate the 'value' field of a listMap entry where key field 'name' is 'MY_ENV' |
| `has(self.expired) && self.created + self.ttl < self.expired` | Validate that 'expired' date is after a 'create' date plus a 'ttl' duration |
| `self.health.startsWith('ok')` | Validate a 'health' string field has the prefix 'ok' |
| `self.widgets.exists(w, w.key == 'x' && w.foo < 10)` | Validate that the 'foo' property of a listMap item with a key 'x' is less than 10 |
| `type(self) == string ? self == '99%' : self == 42` | Validate an int-or-string field for both the int and string cases |
| `self.metadata.name == 'singleton'` | Validate that an object's name matches a specific value (making it a singleton) |
| `self.set1.all(e, !(e in self.set2))` | Validate that two listSets are disjoint |
| `self.names.size() == self.details.size() && self.names.all(n, n in self.details)` | Validate the 'details' map is keyed by the items in the 'names' listSet |
| `self.details.all(key, key.matches('^[a-zA-Z]*$'))` | Validate the keys of the 'details' map |
| `self.details.all(key, self.details[key].matches('^[a-zA-Z]*$'))` | Validate the values of the 'details' map |
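For example, the first rule above could be embedded in the `x-kubernetes-validations` field of a
CustomResourceDefinition schema (a minimal sketch; the field names are illustrative):

```yaml
# Illustrative excerpt from a CustomResourceDefinition's OpenAPI v3 schema
spec:
  type: object
  x-kubernetes-validations:
    - rule: "self.minReplicas <= self.replicas && self.replicas <= self.maxReplicas"
      message: "replicas must be between minReplicas and maxReplicas"
  properties:
    minReplicas:
      type: integer
    replicas:
      type: integer
    maxReplicas:
      type: integer
```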
## CEL options, language features, and libraries
CEL is configured with the following options, libraries and language features, introduced at the specified Kubernetes versions:
| CEL option, library or language feature | Included | Availability |
|-----------------------------------------|----------|-------------|
| [Standard macros](https://github.com/google/cel-spec/blob/v0.7.0/doc/langdef.md#macros) | `has`, `all`, `exists`, `exists_one`, `map`, `filter` | All Kubernetes versions |
| [Standard functions](https://github.com/google/cel-spec/blob/master/doc/langdef.md#list-of-standard-definitions) | See [official list of standard definitions](https://github.com/google/cel-spec/blob/master/doc/langdef.md#list-of-standard-definitions) | All Kubernetes versions |
| [Homogeneous Aggregate Literals](https://pkg.go.dev/github.com/google/[email protected]/cel#HomogeneousAggregateLiterals) | | All Kubernetes versions |
| [Default UTC Time Zone](https://pkg.go.dev/github.com/google/[email protected]/cel#DefaultUTCTimeZone) | | All Kubernetes versions |
| [Eagerly Validate Declarations](https://pkg.go.dev/github.com/google/[email protected]/cel#EagerlyValidateDeclarations) | | All Kubernetes versions |
| [Extended strings library](https://pkg.go.dev/github.com/google/cel-go/ext#Strings), Version 1 | `charAt`, `indexOf`, `lastIndexOf`, `lowerAscii`, `upperAscii`, `replace`, `split`, `join`, `substring`, `trim` | All Kubernetes versions |
| Kubernetes list library | See [Kubernetes list library](#kubernetes-list-library) | All Kubernetes versions |
| Kubernetes regex library | See [Kubernetes regex library](#kubernetes-regex-library) | All Kubernetes versions |
| Kubernetes URL library | See [Kubernetes URL library](#kubernetes-url-library) | All Kubernetes versions |
| Kubernetes authorizer library | See [Kubernetes authorizer library](#kubernetes-authorizer-library) | All Kubernetes versions |
| Kubernetes quantity library | See [Kubernetes quantity library](#kubernetes-quantity-library) | Kubernetes versions 1.29+ |
| CEL optional types | See [CEL optional types](https://pkg.go.dev/github.com/google/[email protected]/cel#OptionalTypes) | Kubernetes versions 1.29+ |
| CEL CrossTypeNumericComparisons | See [CEL CrossTypeNumericComparisons](https://pkg.go.dev/github.com/google/[email protected]/cel#CrossTypeNumericComparisons) | Kubernetes versions 1.29+ |
CEL functions, features and language settings support Kubernetes control plane
rollbacks. For example, _CEL Optional Values_ was introduced at Kubernetes 1.29
and so only API servers at that version or newer will accept write requests containing
CEL expressions that use _CEL Optional Values_. However, when a cluster is
rolled back to Kubernetes 1.28, CEL expressions using _CEL Optional Values_ that
are already stored in API resources will continue to evaluate correctly.
## Kubernetes CEL libraries
In addition to the CEL community libraries, Kubernetes includes CEL libraries
that are available everywhere CEL is used in Kubernetes.
### Kubernetes list library
The list library includes `indexOf` and `lastIndexOf`, which work similarly to the
strings functions of the same names. These functions return the first or last
positional index of the provided element in the list.
The list library also includes `min`, `max` and `sum`. Sum is supported on all
number types as well as the duration type. Min and max are supported on all
comparable types.
`isSorted` is also provided as a convenience function and is supported on all
comparable types.
Examples:
| CEL Expression | Purpose |
|------------------------------------------------------------------------------------|-----------------------------------------------------------|
| `names.isSorted()` | Verify that a list of names is kept in alphabetical order |
| `items.map(x, x.weight).sum() == 1.0` | Verify that the "weights" of a list of objects sum to 1.0 |
| `lowPriorities.map(x, x.priority).max() < highPriorities.map(x, x.priority).min()` | Verify that two sets of priorities do not overlap |
| `names.indexOf('should-be-first') == 0` | Require that the first name in a list is a specific value |
See the [Kubernetes List Library](https://pkg.go.dev/k8s.io/apiextensions-apiserver/pkg/apiserver/schema/cel/library#Lists)
godoc for more information.
### Kubernetes regex library
In addition to the `matches` function provided by the CEL standard library, the
regex library provides `find` and `findAll`, enabling a much wider range of
regex operations.
Examples:
| CEL Expression | Purpose |
|-------------------------------------------------------------|----------------------------------------------------------|
| `"abc 123".find('[0-9]+')` | Find the first number in a string |
| `"1, 2, 3, 4".findAll('[0-9]+').map(x, int(x)).sum() < 100` | Verify that the numbers in a string sum to less than 100 |
See the [Kubernetes regex library](https://pkg.go.dev/k8s.io/apiextensions-apiserver/pkg/apiserver/schema/cel/library#Regex)
godoc for more information.
### Kubernetes URL library
To make it easier and safer to process URLs, the following functions have been added:
- `isURL(string)` checks if a string is a valid URL according to Go's
  [net/url](https://pkg.go.dev/net/url#URL) package. The string must be an
absolute URL.
- `url(string) URL` converts a string to a URL or results in an error if the
string is not a valid URL.
Once parsed via the `url` function, the resulting URL object has `getScheme`,
`getHost`, `getHostname`, `getPort`, `getEscapedPath` and `getQuery` accessors.
Examples:
| CEL Expression | Purpose |
|-----------------------------------------------------------------|------------------------------------------------|
| `url('https://example.com:80/').getHost()` | Gets the 'example.com:80' host part of the URL |
| `url('https://example.com/path with spaces/').getEscapedPath()` | Returns '/path%20with%20spaces/' |
See the [Kubernetes URL library](https://pkg.go.dev/k8s.io/apiextensions-apiserver/pkg/apiserver/schema/cel/library#URLs)
godoc for more information.
### Kubernetes authorizer library
For CEL expressions in the API where a variable of type `Authorizer` is available,
the authorizer may be used to perform authorization checks for the principal
(authenticated user) of the request.
API resource checks are performed as follows:
1. Specify the group and resource to check: `Authorizer.group(string).resource(string) ResourceCheck`
1. Optionally call any combination of the following builder functions to further narrow the authorization check.
Note that these functions return the receiver type and can be chained:
- `ResourceCheck.subresource(string) ResourceCheck`
- `ResourceCheck.namespace(string) ResourceCheck`
- `ResourceCheck.name(string) ResourceCheck`
1. Call `ResourceCheck.check(verb string) Decision` to perform the authorization check.
1. Call `allowed() bool` or `reason() string` to inspect the result of the authorization check.
Non-resource authorization checks are performed as follows:
1. Specify only a path: `Authorizer.path(string) PathCheck`
1. Call `PathCheck.check(httpVerb string) Decision` to perform the authorization check.
1. Call `allowed() bool` or `reason() string` to inspect the result of the authorization check.
To perform an authorization check for a service account:
- `Authorizer.serviceAccount(namespace string, name string) Authorizer`
| CEL Expression | Purpose |
|----------------|---------|
| `authorizer.group('').resource('pods').namespace('default').check('create').allowed()` | Returns true if the principal (user or service account) is allowed to create pods in the 'default' namespace. |
| `authorizer.path('/healthz').check('get').allowed()` | Checks if the principal (user or service account) is authorized to make HTTP GET requests to the /healthz API path. |
| `authorizer.serviceAccount('default', 'myserviceaccount').resource('deployments').check('delete').allowed()` | Checks if the service account is authorized to delete deployments. |
With the alpha `AuthorizeWithSelectors` feature enabled, field and label selectors can be added to authorization checks.
| CEL Expression | Purpose |
|----------------|---------|
| `authorizer.group('').resource('pods').fieldSelector('spec.nodeName=mynode').check('list').allowed()` | Returns true if the principal (user or service account) is allowed to list pods with the field selector `spec.nodeName=mynode`. |
| `authorizer.group('').resource('pods').labelSelector('example.com/mylabel=myvalue').check('list').allowed()` | Returns true if the principal (user or service account) is allowed to list pods with the label selector `example.com/mylabel=myvalue`. |
See the [Kubernetes Authz library](https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#Authz)
and [Kubernetes AuthzSelectors library](https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#AuthzSelectors)
godoc for more information.
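To illustrate, an authorization check can be embedded in the expression of a
ValidatingAdmissionPolicy, where the `authorizer` variable is available. This is a sketch;
the policy name, match rules, and message are illustrative, and the API version of
ValidatingAdmissionPolicy depends on your cluster version:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: demo-authz-check
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    # Reject the request unless the requesting principal may create Pods
    # in the namespace of the incoming object.
    - expression: "authorizer.group('').resource('pods').namespace(object.metadata.namespace).check('create').allowed()"
      message: "the requester must be able to create Pods in the target namespace"
```

A ValidatingAdmissionPolicyBinding is also needed to put such a policy into effect.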
### Kubernetes quantity library
Kubernetes 1.28 adds support for manipulating quantity strings (for example: 1.5G, 512k, 20Mi).
- `isQuantity(string)` checks if a string is a valid Quantity according to
[Kubernetes' resource.Quantity](https://pkg.go.dev/k8s.io/apimachinery/pkg/api/resource#Quantity).
- `quantity(string) Quantity` converts a string to a Quantity or results in an error if the
string is not a valid quantity.
Once parsed via the `quantity` function, the resulting Quantity object has the
following library of member functions:
| Member Function | CEL Return Value | Description |
|-----------------------------|------------------|-------------|
| `isInteger()` | bool | Returns true if and only if asInteger is safe to call without an error |
| `asInteger()` | int | Returns a representation of the current value as an int64 if possible or results in an error if conversion would result in overflow or loss of precision. |
| `asApproximateFloat()` | float | Returns a float64 representation of the quantity which may lose precision. If the value of the quantity is outside the range of a float64 +Inf/-Inf will be returned. |
| `sign()`                    | int              | Returns `1` if the quantity is positive, `-1` if it is negative, and `0` if it is zero |
| `add(<Quantity>)` | Quantity | Returns sum of two quantities |
| `add(<int>)` | Quantity | Returns sum of quantity and an integer |
| `sub(<Quantity>)` | Quantity | Returns difference between two quantities |
| `sub(<int>)` | Quantity | Returns difference between a quantity and an integer |
| `isLessThan(<Quantity>)` | bool | Returns true if and only if the receiver is less than the operand |
| `isGreaterThan(<Quantity>)` | bool | Returns true if and only if the receiver is greater than the operand |
| `compareTo(<Quantity>)` | int | Compares receiver to operand and returns 0 if they are equal, 1 if the receiver is greater, or -1 if the receiver is less than the operand |
Examples:
| CEL Expression | Purpose |
|---------------------------------------------------------------------------|-------------------------------------------------------|
| `quantity("500000G").isInteger()` | Test if conversion to integer would throw an error |
| `quantity("50k").asInteger()` | Precise conversion to integer |
| `quantity("9999999999999999999999999999999999999G").asApproximateFloat()` | Lossy conversion to float |
| `quantity("50k").add(quantity("20k"))` | Add two quantities |
| `quantity("50k").sub(20000)` | Subtract an integer from a quantity |
| `quantity("50k").add(20).sub(quantity("100k")).sub(-50000)` | Chain adding and subtracting integers and quantities |
| `quantity("200M").compareTo(quantity("0.2G"))` | Compare two quantities |
| `quantity("150Mi").isGreaterThan(quantity("100Mi"))` | Test if a quantity is greater than the receiver |
| `quantity("50M").isLessThan(quantity("100M"))` | Test if a quantity is less than the receiver |
## Type checking
CEL is a [gradually typed language](https://github.com/google/cel-spec/blob/master/doc/langdef.md#gradual-type-checking).
Some Kubernetes API fields contain fully type checked CEL expressions. For example,
[CustomResourceDefinitions Validation Rules](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules)
are fully type checked.
Some Kubernetes API fields contain partially type checked CEL expressions. A
partially type checked expression is an expression where some of the variables
are statically typed but others are dynamically typed. For example, in the CEL
expressions of
[ValidatingAdmissionPolicies](/docs/reference/access-authn-authz/validating-admission-policy/)
the `request` variable is typed, but the `object` variable is dynamically typed.
As a result, an expression containing `request.namex` would fail type checking
because the `namex` field is not defined. However, `object.namex` would pass
type checking even when the `namex` field is not defined for the resource kinds
that `object` refers to, because `object` is dynamically typed.
The `has()` macro in CEL may be used in CEL expressions to check if a field of a
dynamically typed variable is accessible before attempting to access the field's
value. For example:
```cel
has(object.namex) ? object.namex == 'special' : request.name == 'special'
```
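The following sketch shows how that pattern might appear in a ValidatingAdmissionPolicy,
where `object` is dynamically typed. The policy name, match rules, and replica limit are
illustrative assumptions, and the ValidatingAdmissionPolicyBinding needed to enforce the
policy is omitted:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: replica-limit-demo.example.com
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    # object is dynamically typed, so guard the field access with has().
    - expression: "has(object.spec.replicas) ? object.spec.replicas <= 5 : true"
      message: "replicas must be 5 or fewer"
```

On clusters that only serve an older API version of this resource, adjust `apiVersion` accordingly.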
## Type system integration
| OpenAPIv3 type | CEL type |
|----------------------------------------------------|----------|
| 'object' with Properties | object / "message type" (`type(<object>)` evaluates to `selfType<uniqueNumber>.path.to.object.from.self`) |
| 'object' with AdditionalProperties | map |
| 'object' with x-kubernetes-embedded-type | object / "message type", 'apiVersion', 'kind', 'metadata.name' and 'metadata.generateName' are implicitly included in schema |
| 'object' with x-kubernetes-preserve-unknown-fields | object / "message type", unknown fields are NOT accessible in CEL expression |
| x-kubernetes-int-or-string | union of int or string, `self.intOrString < 100 \|\| self.intOrString == '50%'` evaluates to true for both `50` and `"50%"` |
| 'array' | list |
| 'array' with x-kubernetes-list-type=map | list with map based Equality & unique key guarantees |
| 'array' with x-kubernetes-list-type=set | list with set based Equality & unique entry guarantees |
| 'boolean' | boolean |
| 'number' (all formats) | double |
| 'integer' (all formats) | int (64) |
| _no equivalent_ | uint (64) |
| 'null' | null_type |
| 'string' | string |
| 'string' with format=byte (base64 encoded) | bytes |
| 'string' with format=date | timestamp (google.protobuf.Timestamp) |
| 'string' with format=datetime | timestamp (google.protobuf.Timestamp) |
| 'string' with format=duration | duration (google.protobuf.Duration) |
Also see: [CEL types](https://github.com/google/cel-spec/blob/v0.6.0/doc/langdef.md#values),
[OpenAPI types](https://swagger.io/specification/#data-types),
[Kubernetes Structural Schemas](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#specifying-a-structural-schema).
Equality comparison for arrays with `x-kubernetes-list-type` of `set` or `map` ignores element
order. For example `[1, 2] == [2, 1]` if the arrays represent Kubernetes `set` values.
Concatenation on arrays with `x-kubernetes-list-type` uses the semantics of the
list type:
`set`
: `X + Y` performs a union where the array positions of all elements in
`X` are preserved and non-intersecting elements in `Y` are appended, retaining
their partial order.
`map`
: `X + Y` performs a merge where the array positions of all keys in `X`
are preserved but the values are overwritten by values in `Y` when the key
sets of `X` and `Y` intersect. Elements in `Y` with non-intersecting keys are
appended, retaining their partial order.
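As a sketch of how the mapping above surfaces in practice, the following CRD schema
fragment declares a hypothetical `maxSurge` field as `x-kubernetes-int-or-string` and
validates both branches of the union; the field name and the limit of 10 are assumptions:

```yaml
# Sketch of a CRD schema fragment; "maxSurge" is an illustrative field name.
type: object
properties:
  maxSurge:
    x-kubernetes-int-or-string: true
x-kubernetes-validations:
  - rule: "!has(self.maxSurge) || (type(self.maxSurge) == string ? self.maxSurge.endsWith('%') : self.maxSurge <= 10)"
    message: "maxSurge must be an integer no greater than 10, or a percentage string"
```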
## Escaping
Only Kubernetes resource property names of the form
`[a-zA-Z_.-/][a-zA-Z0-9_.-/]*` are accessible from CEL. Accessible property
names are escaped according to the following rules when accessed in the
expression:
| escape sequence | property name equivalent |
|-------------------|----------------------------------------------------------------------------------------------|
| `__underscores__` | `__` |
| `__dot__` | `.` |
| `__dash__` | `-` |
| `__slash__` | `/` |
| `__{keyword}__` | [CEL **RESERVED** keyword](https://github.com/google/cel-spec/blob/v0.6.0/doc/langdef.md#syntax) |
When you escape any of CEL's **RESERVED** keywords, the escaping applies only when the
property name matches the keyword exactly
(for example, `int` within the word `sprint` would not be escaped, nor would it need to be).
Examples on escaping:
| property name | rule with escaped property name |
|---------------|-----------------------------------|
| `namespace` | `self.__namespace__ > 0` |
| `x-prop` | `self.x__dash__prop > 0` |
| `redact__d` | `self.redact__underscores__d > 0` |
| `string` | `self.startsWith('kube')` |
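For illustration only, here is how the escaped `x-prop` access from the table above could
appear inside a CRD validation rule; the schema fragment and message are assumptions:

```yaml
# Sketch of a CRD schema fragment; the "x-prop" field is illustrative only.
type: object
properties:
  x-prop:
    type: integer
x-kubernetes-validations:
  - rule: "!has(self.x__dash__prop) || self.x__dash__prop > 0"
    message: "x-prop must be positive when set"
```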
## Resource constraints
CEL is non-Turing complete and offers a variety of production safety controls to
limit execution time. CEL's _resource constraint_ features provide feedback to
developers about expression complexity and help protect the API server from
excessive resource consumption during evaluation. CEL's resource constraint
features are used to prevent CEL evaluation from consuming excessive API server
resources.
A key element of the resource constraint features is a _cost unit_ that CEL
defines as a way of tracking CPU utilization. Cost units are independent of
system load and hardware. Cost units are also deterministic; for any given CEL
expression and input data, evaluation of the expression by the CEL interpreter
will always result in the same cost.
Many of CEL's core operations have fixed costs. The simplest operations, such as
comparisons (e.g. `<`) have a cost of 1. Some have a higher fixed cost, for
example list literal declarations have a fixed base cost of 40 cost units.
Calls to functions implemented in native code approximate cost based on the time
complexity of the operation. For example: operations that use regular
expressions, such as `match` and `find`, are estimated using an approximated
cost of `length(regexString)*length(inputString)`. The approximated cost
reflects the worst case time complexity of Go's RE2 implementation.
### Runtime cost budget
All CEL expressions evaluated by Kubernetes are constrained by a runtime cost
budget. The runtime cost budget is an estimate of actual CPU utilization
computed by incrementing a cost unit counter while interpreting a CEL
expression. If the CEL interpreter executes too many instructions, the runtime
cost budget will be exceeded, execution of the expressions will be halted, and
an error will result.
Some Kubernetes resources define an additional runtime cost budget that bounds
the execution of multiple expressions. If the sum total of the cost of
expressions exceeds the budget, execution of the expressions will be halted, and
an error will result. For example, the validation of a custom resource has a
_per-validation_ runtime cost budget for all
[Validation Rules](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules)
evaluated to validate the custom resource.
### Estimated cost limits
For some Kubernetes resources, the API server may also check if worst case
estimated running time of CEL expressions would be prohibitively expensive to
execute. If so, the API server prevents the CEL expression from being written to
API resources by rejecting create or update operations that contain the CEL
expression. This feature offers a stronger assurance that
CEL expressions written to the API resource will be evaluated at runtime without
exceeding the runtime cost budget.
---
title: Kubernetes API health endpoints
reviewers:
- logicalhan
content_type: concept
weight: 50
---
<!-- overview -->
The Kubernetes API server provides API endpoints to indicate its current status.
This page describes these API endpoints and explains how you can use them.
<!-- body -->
## API endpoints for health
The Kubernetes API server provides 3 API endpoints (`healthz`, `livez` and `readyz`) to indicate the current status of the API server.
The `healthz` endpoint is deprecated (since Kubernetes v1.16), and you should use the more specific `livez` and `readyz` endpoints instead.
The `livez` endpoint can be used with the `--livez-grace-period` [flag](/docs/reference/command-line-tools-reference/kube-apiserver) to specify the startup duration.
For a graceful shutdown you can specify the `--shutdown-delay-duration` [flag](/docs/reference/command-line-tools-reference/kube-apiserver) with the `/readyz` endpoint.
Machines that check the `healthz`/`livez`/`readyz` of the API server should rely on the HTTP status code.
A status code `200` indicates the API server is `healthy`/`live`/`ready`, depending on the called endpoint.
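For example, a monitoring component or static Pod definition could probe these endpoints
with ordinary HTTP probes and react to the status code. The following is only an
illustrative sketch; the port, scheme, and timings are assumptions, not a recommended
production configuration:

```yaml
# Sketch of container probes against the API server health endpoints.
livenessProbe:
  httpGet:
    path: /livez
    port: 6443
    scheme: HTTPS
  initialDelaySeconds: 10
  timeoutSeconds: 15
readinessProbe:
  httpGet:
    path: /readyz
    port: 6443
    scheme: HTTPS
  periodSeconds: 1
  timeoutSeconds: 15
```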
The more verbose options shown below are intended to be used by human operators to debug their cluster or understand the state of the API server.
The following examples will show how you can interact with the health API endpoints.
For all endpoints, you can use the `verbose` parameter to print out the checks and their status.
This can be useful for a human operator to debug the current status of the API server; it is not intended to be consumed by a machine:
```shell
curl -k https://localhost:6443/livez?verbose
```
or from a remote host with authentication:
```shell
kubectl get --raw='/readyz?verbose'
```
The output will look like this:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check passed
The Kubernetes API server also supports excluding specific checks.
The query parameters can also be combined like in this example:
```shell
curl -k 'https://localhost:6443/readyz?verbose&exclude=etcd'
```
The output shows that the `etcd` check is excluded:
[+]ping ok
[+]log ok
[+]etcd excluded: ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]shutdown ok
healthz check passed
## Individual health checks
Each individual health check exposes an HTTP endpoint and can be checked individually.
The schema for the individual health checks is `/livez/<healthcheck-name>` or `/readyz/<healthcheck-name>`, where `livez` and `readyz` can be used to indicate if you want to check the liveness or the readiness of the API server, respectively.
You can discover the `<healthcheck-name>` values by using the `verbose` parameter shown above and taking the path between `[+]` and `ok` for each check.
These individual health checks should not be consumed by machines but can be helpful for a human operator to debug a system:
```shell
curl -k https://localhost:6443/livez/etcd
```
---
title: API Overview
reviewers:
- erictune
- lavalamp
- jbeda
content_type: concept
weight: 20
no_list: true
card:
name: reference
weight: 50
title: Overview of API
---
<!-- overview -->
This section provides reference information for the Kubernetes API.
The REST API is the fundamental fabric of Kubernetes. All operations and
communications between components, and external user commands are REST API
calls that the API Server handles. Consequently, everything in the Kubernetes
platform is treated as an API object and has a corresponding entry in the
[API](/docs/reference/generated/kubernetes-api//).
The [Kubernetes API reference](/docs/reference/generated/kubernetes-api//)
lists the API for Kubernetes version .
For general background information, read
[The Kubernetes API](/docs/concepts/overview/kubernetes-api/).
[Controlling Access to the Kubernetes API](/docs/concepts/security/controlling-access/)
describes how clients can authenticate to the Kubernetes API server, and how their
requests are authorized.
## API versioning
The JSON and Protobuf serialization schemas follow the same guidelines for
schema changes. The following descriptions cover both formats.
The API versioning and software versioning are indirectly related.
The [API and release versioning proposal](https://git.k8s.io/sig-release/release-engineering/versioning.md)
describes the relationship between API versioning and software versioning.
Different API versions indicate different levels of stability and support. You
can find more information about the criteria for each level in the
[API Changes documentation](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions).
Here's a summary of each level:
- Alpha:
- The version names contain `alpha` (for example, `v1alpha1`).
- Built-in alpha API versions are disabled by default and must be explicitly enabled in the `kube-apiserver` configuration to be used.
- The software may contain bugs. Enabling a feature may expose bugs.
- Support for an alpha API may be dropped at any time without notice.
- The API may change in incompatible ways in a later software release without notice.
- The software is recommended for use only in short-lived testing clusters,
due to increased risk of bugs and lack of long-term support.
- Beta:
- The version names contain `beta` (for example, `v2beta3`).
- Built-in beta API versions are disabled by default and must be explicitly enabled in the `kube-apiserver` configuration to be used
(**except** for beta versions of APIs introduced prior to Kubernetes 1.22, which were enabled by default).
- Built-in beta API versions have a maximum lifetime of 9 months or 3 minor releases (whichever is longer) from introduction
to deprecation, and 9 months or 3 minor releases (whichever is longer) from deprecation to removal.
- The software is well tested. Enabling a feature is considered safe.
- The support for a feature will not be dropped, though the details may change.
- The schema and/or semantics of objects may change in incompatible ways in
a subsequent beta or stable API version. When this happens, migration
instructions are provided. Adapting to a subsequent beta or stable API version
may require editing or re-creating API objects, and may not be straightforward.
The migration may require downtime for applications that rely on the feature.
- The software is not recommended for production uses. Subsequent releases
may introduce incompatible changes. Use of beta API versions is
required to transition to subsequent beta or stable API versions
once the beta API version is deprecated and no longer served.
Please try beta features and provide feedback. After the features exit beta, it
may not be practical to make more changes.
- Stable:
- The version name is `vX` where `X` is an integer.
- Stable API versions remain available for all future releases within a Kubernetes major version,
and there are no current plans for a major version revision of Kubernetes that removes stable APIs.
## API groups
[API groups](https://git.k8s.io/design-proposals-archive/api-machinery/api-group.md)
make it easier to extend the Kubernetes API.
The API group is specified in a REST path and in the `apiVersion` field of a
serialized object.
There are several API groups in Kubernetes:
* The *core* (also called *legacy*) group is found at REST path `/api/v1`.
The core group is not specified as part of the `apiVersion` field, for
example, `apiVersion: v1`.
* The named groups are at REST path `/apis/$GROUP_NAME/$VERSION` and use
`apiVersion: $GROUP_NAME/$VERSION` (for example, `apiVersion: batch/v1`).
You can find the full list of supported API groups in
[Kubernetes API reference](/docs/reference/generated/kubernetes-api//#-strong-api-groups-strong-).
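For illustration, the sketch below shows the two `apiVersion` forms side by side; the
object names and image are placeholders:

```yaml
# Core ("legacy") group: the apiVersion is just the version.
apiVersion: v1
kind: Namespace
metadata:
  name: demo
---
# Named group: the apiVersion is <group>/<version>.
apiVersion: batch/v1
kind: Job
metadata:
  name: demo-job
  namespace: demo
spec:
  template:
    spec:
      containers:
        - name: demo
          image: busybox:1.36
          command: ["sh", "-c", "echo done"]
      restartPolicy: Never
```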
## Enabling or disabling API groups {#enabling-or-disabling}
Certain resources and API groups are enabled by default. You can enable or
disable them by setting `--runtime-config` on the API server. The
`--runtime-config` flag accepts comma separated `<key>[=<value>]` pairs
describing the runtime configuration of the API server. If the `=<value>`
part is omitted, it is treated as if `=true` is specified. For example:
- to disable `batch/v1`, set `--runtime-config=batch/v1=false`
- to enable `batch/v2alpha1`, set `--runtime-config=batch/v2alpha1`
- to enable a specific version of an API, such as `storage.k8s.io/v1beta1/csistoragecapacities`, set `--runtime-config=storage.k8s.io/v1beta1/csistoragecapacities`
When you enable or disable groups or resources, you need to restart the API
server and controller manager to pick up the `--runtime-config` changes.
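If you manage the control plane with kubeadm, one way to set this flag is through the API
server `extraArgs` in the kubeadm configuration. The following is a sketch; the kubeadm API
version and the chosen runtime configuration values are assumptions to adapt for your cluster:

```yaml
# Sketch of a kubeadm ClusterConfiguration fragment.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    runtime-config: "batch/v2alpha1=true,storage.k8s.io/v1beta1/csistoragecapacities=true"
```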
## Persistence
Kubernetes stores its serialized state in terms of the API resources by writing them into
etcd.
## What's next
- Learn more about [API conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#api-conventions)
- Read the design documentation for
[aggregator](https://git.k8s.io/design-proposals-archive/api-machinery/aggregated-api-servers.md)
---
reviewers:
- vincepri
- bart0sh
title: Container Runtimes
content_type: concept
weight: 20
---
<!-- overview -->
You need to install a container runtime
into each node in the cluster so that Pods can run there. This page outlines
what is involved and describes related tasks for setting up nodes.
Kubernetes requires that you use a runtime that
conforms with the
Container Runtime Interface (CRI).
See [CRI version support](#cri-versions) for more information.
This page provides an outline of how to use several common container runtimes with
Kubernetes.
- [containerd](#containerd)
- [CRI-O](#cri-o)
- [Docker Engine](#docker)
- [Mirantis Container Runtime](#mcr)
Kubernetes releases before v1.24 included a direct integration with Docker Engine,
using a component named _dockershim_. That special direct integration is no longer
part of Kubernetes (this removal was
[announced](/blog/2020/12/08/kubernetes-1-20-release-announcement/#dockershim-deprecation)
as part of the v1.20 release).
You can read
[Check whether Dockershim removal affects you](/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/)
to understand how this removal might affect you. To learn about migrating from using dockershim, see
[Migrating from dockershim](/docs/tasks/administer-cluster/migrating-from-dockershim/).
If you are running a version of Kubernetes other than v,
check the documentation for that version.
<!-- body -->
## Install and configure prerequisites
### Network configuration
By default, the Linux kernel does not allow IPv4 packets to be routed
between interfaces. Most Kubernetes cluster networking implementations
will change this setting (if needed), but some might expect the
administrator to do it for them. (Some might also expect other sysctl
parameters to be set, kernel modules to be loaded, etc; consult the
documentation for your specific network implementation.)
### Enable IPv4 packet forwarding {#prerequisite-ipv4-forwarding-optional}
To manually enable IPv4 packet forwarding:
```bash
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
EOF
# Apply sysctl params without reboot
sudo sysctl --system
```
Verify that `net.ipv4.ip_forward` is set to 1 with:
```bash
sysctl net.ipv4.ip_forward
```
## cgroup drivers
On Linux, control groups
are used to constrain resources that are allocated to processes.
Both the kubelet and the
underlying container runtime need to interface with control groups to enforce
[resource management for pods and containers](/docs/concepts/configuration/manage-resources-containers/)
and set resources such as cpu/memory requests and limits. To interface with control
groups, the kubelet and the container runtime need to use a *cgroup driver*.
It's critical that the kubelet and the container runtime use the same cgroup
driver and are configured the same.
There are two cgroup drivers available:
* [`cgroupfs`](#cgroupfs-cgroup-driver)
* [`systemd`](#systemd-cgroup-driver)
### cgroupfs driver {#cgroupfs-cgroup-driver}
The `cgroupfs` driver is the [default cgroup driver in the kubelet](/docs/reference/config-api/kubelet-config.v1beta1).
When the `cgroupfs` driver is used, the kubelet and the container runtime directly interface with
the cgroup filesystem to configure cgroups.
The `cgroupfs` driver is **not** recommended when
[systemd](https://www.freedesktop.org/wiki/Software/systemd/) is the
init system because systemd expects a single cgroup manager on
the system. Additionally, if you use [cgroup v2](/docs/concepts/architecture/cgroups), use the `systemd`
cgroup driver instead of `cgroupfs`.
### systemd cgroup driver {#systemd-cgroup-driver}
When [systemd](https://www.freedesktop.org/wiki/Software/systemd/) is chosen as the init
system for a Linux distribution, the init process generates and consumes a root control group
(`cgroup`) and acts as a cgroup manager.
systemd has a tight integration with cgroups and allocates a cgroup per systemd
unit. As a result, if you use `systemd` as the init system with the `cgroupfs`
driver, the system gets two different cgroup managers.
Two cgroup managers result in two views of the available and in-use resources in
the system. In some cases, nodes that are configured to use `cgroupfs` for the
kubelet and container runtime, but use `systemd` for the rest of the processes become
unstable under resource pressure.
The approach to mitigate this instability is to use `systemd` as the cgroup driver for
the kubelet and the container runtime when systemd is the selected init system.
To set `systemd` as the cgroup driver, edit the
[`KubeletConfiguration`](/docs/tasks/administer-cluster/kubelet-config-file/)
option of `cgroupDriver` and set it to `systemd`. For example:
```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
...
cgroupDriver: systemd
```
In v1.22 and later, when creating a cluster with kubeadm, if the user does not set
the `cgroupDriver` field under `KubeletConfiguration`, kubeadm defaults it to `systemd`.
If you configure `systemd` as the cgroup driver for the kubelet, you must also
configure `systemd` as the cgroup driver for the container runtime. Refer to
the documentation for your container runtime for instructions. For example:
* [containerd](#containerd-systemd)
* [CRI-O](#cri-o)
In Kubernetes , with the `KubeletCgroupDriverFromCRI`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
enabled and a container runtime that supports the `RuntimeConfig` CRI RPC,
the kubelet automatically detects the appropriate cgroup driver from the runtime,
and ignores the `cgroupDriver` setting within the kubelet configuration.
Changing the cgroup driver of a Node that has joined a cluster is a sensitive operation.
If the kubelet has created Pods using the semantics of one cgroup driver, changing the container
runtime to another cgroup driver can cause errors when trying to re-create the Pod sandbox
for such existing Pods. Restarting the kubelet may not solve such errors.
If you have automation that makes it feasible, replace the node with another using the updated
configuration, or reinstall it using automation.
### Migrating to the `systemd` driver in kubeadm managed clusters
If you wish to migrate to the `systemd` cgroup driver in existing kubeadm managed clusters,
follow [configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/).
## CRI version support {#cri-versions}
Your container runtime must support at least v1alpha2 of the container runtime interface.
Kubernetes [starting v1.26](/blog/2022/11/18/upcoming-changes-in-kubernetes-1-26/#cri-api-removal)
_only works_ with v1 of the CRI API. Earlier Kubernetes versions default
to the v1 API; however, if a container runtime does not support the v1 API, the kubelet falls back to
using the (deprecated) v1alpha2 API instead.
## Container runtimes
### containerd
This section outlines the necessary steps to use containerd as CRI runtime.
To install containerd on your system, follow the instructions on
[getting started with containerd](https://github.com/containerd/containerd/blob/main/docs/getting-started.md).
Return to this step once you've created a valid `config.toml` configuration file.
On Linux, you can find this file under the path `/etc/containerd/config.toml`.
On Windows, you can find this file under the path `C:\Program Files\containerd\config.toml`.
On Linux the default CRI socket for containerd is `/run/containerd/containerd.sock`.
On Windows the default CRI endpoint is `npipe://./pipe/containerd-containerd`.
#### Configuring the `systemd` cgroup driver {#containerd-systemd}
To use the `systemd` cgroup driver in `/etc/containerd/config.toml` with `runc`, set
```
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
...
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
```
The `systemd` cgroup driver is recommended if you use [cgroup v2](/docs/concepts/architecture/cgroups).
If you installed containerd from a package (for example, RPM or `.deb`), you may find
that the CRI integration plugin is disabled by default.
You need CRI support enabled to use containerd with Kubernetes. Make sure that `cri`
is not included in the `disabled_plugins` list within `/etc/containerd/config.toml`;
if you made changes to that file, also restart `containerd`.
If you experience container crash loops after the initial cluster installation or after
installing a CNI, the containerd configuration provided with the package might contain
incompatible configuration parameters. Consider resetting the containerd configuration
with `containerd config default > /etc/containerd/config.toml` as specified in
[getting-started.md](https://github.com/containerd/containerd/blob/main/docs/getting-started.md#advanced-topics)
and then set the configuration parameters specified above accordingly.
If you apply this change, make sure to restart containerd:
```shell
sudo systemctl restart containerd
```
When using kubeadm, manually configure the
[cgroup driver for kubelet](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/#configuring-the-kubelet-cgroup-driver).
In Kubernetes v1.28, you can enable automatic detection of the
cgroup driver as an alpha feature. See [systemd cgroup driver](#systemd-cgroup-driver)
for more details.
#### Overriding the sandbox (pause) image {#override-pause-image-containerd}
In your [containerd config](https://github.com/containerd/containerd/blob/main/docs/cri/config.md) you can override the
sandbox image by setting the following config:
```toml
[plugins."io.containerd.grpc.v1.cri"]
sandbox_image = "registry.k8s.io/pause:3.2"
```
You might need to restart `containerd` as well once you've updated the config file: `systemctl restart containerd`.
### CRI-O
This section contains the necessary steps to install CRI-O as a container runtime.
To install CRI-O, follow [CRI-O Install Instructions](https://github.com/cri-o/packaging/blob/main/README.md#usage).
#### cgroup driver
CRI-O uses the systemd cgroup driver by default, which is likely to work fine
for you. To switch to the `cgroupfs` cgroup driver, either edit
`/etc/crio/crio.conf` or place a drop-in configuration in
`/etc/crio/crio.conf.d/02-cgroup-manager.conf`, for example:
```toml
[crio.runtime]
conmon_cgroup = "pod"
cgroup_manager = "cgroupfs"
```
You should also note the changed `conmon_cgroup`, which has to be set to the value
`pod` when using CRI-O with `cgroupfs`. It is generally necessary to keep the
cgroup driver configuration of the kubelet (usually done via kubeadm) and CRI-O
in sync.
In Kubernetes v1.28, you can enable automatic detection of the
cgroup driver as an alpha feature. See [systemd cgroup driver](#systemd-cgroup-driver)
for more details.
For CRI-O, the CRI socket is `/var/run/crio/crio.sock` by default.
#### Overriding the sandbox (pause) image {#override-pause-image-cri-o}
In your [CRI-O config](https://github.com/cri-o/cri-o/blob/main/docs/crio.conf.5.md) you can set the following
config value:
```toml
[crio.image]
pause_image="registry.k8s.io/pause:3.6"
```
This config option supports live configuration reload: apply the change by running `systemctl reload crio` or by sending
`SIGHUP` to the `crio` process.
### Docker Engine {#docker}
These instructions assume that you are using the
[`cri-dockerd`](https://mirantis.github.io/cri-dockerd/) adapter to integrate
Docker Engine with Kubernetes.
1. On each of your nodes, install Docker for your Linux distribution as per
[Install Docker Engine](https://docs.docker.com/engine/install/#server).
2. Install [`cri-dockerd`](https://mirantis.github.io/cri-dockerd/usage/install), following the directions in the install section of the documentation.
For `cri-dockerd`, the CRI socket is `/run/cri-dockerd.sock` by default.
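Because a node can end up with more than one CRI socket installed (for example, containerd alongside Docker Engine), you may need to tell tools such as kubeadm which endpoint to use explicitly. A sketch:
```shell
# Point kubeadm at the cri-dockerd socket instead of letting it auto-detect one
sudo kubeadm init --cri-socket unix:///run/cri-dockerd.sock
```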
### Mirantis Container Runtime {#mcr}
[Mirantis Container Runtime](https://docs.mirantis.com/mcr/20.10/overview.html) (MCR) is a commercially
available container runtime that was formerly known as Docker Enterprise Edition.
You can use Mirantis Container Runtime with Kubernetes using the open source
[`cri-dockerd`](https://mirantis.github.io/cri-dockerd/) component, included with MCR.
To learn more about how to install Mirantis Container Runtime,
visit [MCR Deployment Guide](https://docs.mirantis.com/mcr/20.10/install.html).
Check the systemd unit named `cri-docker.socket` to find out the path to the CRI
socket.
#### Overriding the sandbox (pause) image {#override-pause-image-cri-dockerd-mcr}
The `cri-dockerd` adapter accepts a command line argument for
specifying which container image to use as the Pod infrastructure container (“pause image”).
The command line argument to use is `--pod-infra-container-image`.
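One way to set that argument persistently is a systemd drop-in for the `cri-docker` service. The file path and the base `ExecStart` line below are assumptions about a typical `cri-dockerd` installation; check your own unit with `systemctl cat cri-docker.service` before copying them:
```
# /etc/systemd/system/cri-docker.service.d/10-pause-image.conf (sketch)
[Service]
ExecStart=
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.k8s.io/pause:3.9
```
After adding the drop-in, run `sudo systemctl daemon-reload` and restart the service with `sudo systemctl restart cri-docker`.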
## What's next
As well as a container runtime, your cluster will need a working
[network plugin](/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-network-model).
---
title: "Production environment"
description: Create a production-quality Kubernetes cluster
weight: 30
no_list: true
---
<!-- overview -->
A production-quality Kubernetes cluster requires planning and preparation.
If your Kubernetes cluster is to run critical workloads, it must be configured to be resilient.
This page explains steps you can take to set up a production-ready cluster,
or to promote an existing cluster for production use.
If you're already familiar with production setup and want the links, skip to
[What's next](#what-s-next).
<!-- body -->
## Production considerations
Typically, a production Kubernetes cluster environment has more requirements than a
personal learning, development, or test environment. A production environment may require
secure access by many users, consistent availability, and the resources to adapt
to changing demands.
As you decide where you want your production Kubernetes environment to live
(on premises or in a cloud) and the amount of management you want to take
on or hand to others, consider how your requirements for a Kubernetes cluster
are influenced by the following issues:
- *Availability*: A single-machine Kubernetes [learning environment](/docs/setup/#learning-environment)
has a single point of failure. Creating a highly available cluster means considering:
- Separating the control plane from the worker nodes.
- Replicating the control plane components on multiple nodes.
  - Load balancing traffic to the cluster’s API server.
- Having enough worker nodes available, or able to quickly become available, as changing workloads warrant it.
- *Scale*: If you expect your production Kubernetes environment to receive a stable amount of
demand, you might be able to set up for the capacity you need and be done. However,
if you expect demand to grow over time or change dramatically based on things like
season or special events, you need to plan how to scale to relieve increased
pressure from more requests to the control plane and worker nodes or scale down to reduce unused
resources.
- *Security and access management*: You have full admin privileges on your own
Kubernetes learning cluster. But shared clusters with important workloads, and
more than one or two users, require a more refined approach to who and what can
access cluster resources. You can use role-based access control
([RBAC](/docs/reference/access-authn-authz/rbac/)) and other
security mechanisms to make sure that users and workloads can get access to the
resources they need, while keeping workloads, and the cluster itself, secure.
You can set limits on the resources that users and workloads can access
by managing [policies](/docs/concepts/policy/) and
[container resources](/docs/concepts/configuration/manage-resources-containers/).
Before building a Kubernetes production environment on your own, consider
handing off some or all of this job to
[Turnkey Cloud Solutions](/docs/setup/production-environment/turnkey-solutions/)
providers or other [Kubernetes Partners](/partners/).
Options include:
- *Serverless*: Just run workloads on third-party equipment without managing
a cluster at all. You will be charged for things like CPU usage, memory, and
disk requests.
- *Managed control plane*: Let the provider manage the scale and availability
of the cluster's control plane, as well as handle patches and upgrades.
- *Managed worker nodes*: Configure pools of nodes to meet your needs,
then the provider makes sure those nodes are available and ready to implement
upgrades when needed.
- *Integration*: There are providers that integrate Kubernetes with other
services you may need, such as storage, container registries, authentication
methods, and development tools.
Whether you build a production Kubernetes cluster yourself or work with
partners, review the following sections to evaluate your needs as they relate
to your cluster’s *control plane*, *worker nodes*, *user access*, and
*workload resources*.
## Production cluster setup
In a production-quality Kubernetes cluster, the control plane manages the
cluster from services that can be spread across multiple computers
in different ways. Each worker node, however, represents a single entity that
is configured to run Kubernetes pods.
### Production control plane
The simplest Kubernetes cluster has the entire control plane and worker node
services running on the same machine. You can grow that environment by adding
worker nodes, as illustrated in the diagram in
[Kubernetes Components](/docs/concepts/overview/components/).
If the cluster is meant to be available for a short period of time, or can be
discarded if something goes seriously wrong, this might meet your needs.
If you need a more permanent, highly available cluster, however, you should
consider ways of extending the control plane. By design, control plane
services running on a single machine are not highly available.
If keeping the cluster up and running
and ensuring that it can be repaired if something goes wrong is important,
consider these steps:
- *Choose deployment tools*: You can deploy a control plane using tools such
as kubeadm, kops, and kubespray. See
[Installing Kubernetes with deployment tools](/docs/setup/production-environment/tools/)
to learn tips for production-quality deployments using each of those deployment
methods. Different [Container Runtimes](/docs/setup/production-environment/container-runtimes/)
are available to use with your deployments.
- *Manage certificates*: Secure communications between control plane services
are implemented using certificates. Certificates are automatically generated
during deployment or you can generate them using your own certificate authority.
See [PKI certificates and requirements](/docs/setup/best-practices/certificates/) for details.
- *Configure load balancer for apiserver*: Configure a load balancer
to distribute external API requests to the apiserver service instances running on different nodes. See
[Create an External Load Balancer](/docs/tasks/access-application-cluster/create-external-load-balancer/)
for details.
- *Separate and backup etcd service*: The etcd services can either run on the
same machines as other control plane services or run on separate machines, for
extra security and availability. Because etcd stores cluster configuration data,
backing up the etcd database should be done regularly to ensure that you can
repair that database if needed.
See the [etcd FAQ](https://etcd.io/docs/v3.5/faq/) for details on configuring and using etcd.
See [Operating etcd clusters for Kubernetes](/docs/tasks/administer-cluster/configure-upgrade-etcd/)
and [Set up a High Availability etcd cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/)
for details.
- *Create multiple control plane systems*: For high availability, the
control plane should not be limited to a single machine. If the control plane
services are run by an init service (such as systemd), each service should run on at
least three machines. However, running control plane services as pods in
Kubernetes ensures that the replicated number of services that you request
will always be available.
The scheduler should be fault tolerant,
but not highly available. Some deployment tools set up the [Raft](https://raft.github.io/)
consensus algorithm to do leader election of Kubernetes services. If the
primary goes away, another service elects itself and takes over.
- *Span multiple zones*: If keeping your cluster available at all times is
critical, consider creating a cluster that runs across multiple data centers,
referred to as zones in cloud environments. Groups of zones are referred to as regions.
Spreading a cluster across
multiple zones in the same region improves the chances that your
cluster will continue to function even if one zone becomes unavailable.
See [Running in multiple zones](/docs/setup/best-practices/multiple-zones/) for details.
- *Manage on-going features*: If you plan to keep your cluster over time,
there are tasks you need to do to maintain its health and security. For example,
if you installed with kubeadm, there are instructions to help you with
[Certificate Management](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/)
and [Upgrading kubeadm clusters](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/).
See [Administer a Cluster](/docs/tasks/administer-cluster/)
for a longer list of Kubernetes administrative tasks.
To learn about available options when you run control plane services, see
[kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/),
[kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/),
and [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/)
component pages. For highly available control plane examples, see
[Options for Highly Available topology](/docs/setup/production-environment/tools/kubeadm/ha-topology/),
[Creating Highly Available clusters with kubeadm](/docs/setup/production-environment/tools/kubeadm/high-availability/),
and [Operating etcd clusters for Kubernetes](/docs/tasks/administer-cluster/configure-upgrade-etcd/).
See [Backing up an etcd cluster](/docs/tasks/administer-cluster/configure-upgrade-etcd/#backing-up-an-etcd-cluster)
for information on making an etcd backup plan.
### Production worker nodes
Production-quality workloads need to be resilient and anything they rely
on needs to be resilient (such as CoreDNS). Whether you manage your own
control plane or have a cloud provider do it for you, you still need to
consider how you want to manage your worker nodes (also referred to
simply as *nodes*).
- *Configure nodes*: Nodes can be physical or virtual machines. If you want to
create and manage your own nodes, you can install a supported operating system,
then add and run the appropriate
[Node services](/docs/concepts/architecture/#node-components). Consider:
- The demands of your workloads when you set up nodes by having appropriate memory, CPU, and disk speed and storage capacity available.
- Whether generic computer systems will do or you have workloads that need GPU processors, Windows nodes, or VM isolation.
- *Validate nodes*: See [Valid node setup](/docs/setup/best-practices/node-conformance/)
for information on how to ensure that a node meets the requirements to join
a Kubernetes cluster.
- *Add nodes to the cluster*: If you are managing your own cluster you can
add nodes by setting up your own machines and either adding them manually or
having them register themselves to the cluster’s apiserver. See the
[Nodes](/docs/concepts/architecture/nodes/) section for information on how to set up Kubernetes to add nodes in these ways.
- *Scale nodes*: Have a plan for expanding the capacity your cluster will
eventually need. See [Considerations for large clusters](/docs/setup/best-practices/cluster-large/)
to help determine how many nodes you need, based on the number of pods and
containers you need to run. If you are managing nodes yourself, this can mean
purchasing and installing your own physical equipment.
- *Autoscale nodes*: Read [Cluster Autoscaling](/docs/concepts/cluster-administration/cluster-autoscaling) to learn about the
tools available to automatically manage your nodes and the capacity they
provide.
- *Set up node health checks*: For important workloads, you want to make sure
that the nodes and pods running on those nodes are healthy. Using the
[Node Problem Detector](/docs/tasks/debug/debug-cluster/monitor-node-health/)
daemon, you can ensure your nodes are healthy.
## Production user management
In production, you may be moving from a model where you or a small group of
people are accessing the cluster to where there may potentially be dozens or
hundreds of people. In a learning environment or platform prototype, you might have a single
administrative account for everything you do. In production, you will want
more accounts with different levels of access to different namespaces.
Taking on a production-quality cluster means deciding how you
want to selectively allow access by other users. In particular, you need to
select strategies for validating the identities of those who try to access your
cluster (authentication) and deciding if they have permissions to do what they
are asking (authorization):
- *Authentication*: The apiserver can authenticate users using client
certificates, bearer tokens, an authenticating proxy, or HTTP basic auth.
You can choose which authentication methods you want to use.
Using plugins, the apiserver can leverage your organization’s existing
authentication methods, such as LDAP or Kerberos. See
[Authentication](/docs/reference/access-authn-authz/authentication/)
for a description of these different methods of authenticating Kubernetes users.
- *Authorization*: When you set out to authorize your regular users, you will probably choose
between RBAC and ABAC authorization. See [Authorization Overview](/docs/reference/access-authn-authz/authorization/)
to review different modes for authorizing user accounts (as well as service account access to
your cluster):
- *Role-based access control* ([RBAC](/docs/reference/access-authn-authz/rbac/)): Lets you
assign access to your cluster by allowing specific sets of permissions to authenticated users.
Permissions can be assigned for a specific namespace (Role) or across the entire cluster
(ClusterRole). Then using RoleBindings and ClusterRoleBindings, those permissions can be attached
to particular users (a minimal example appears after this list).
- *Attribute-based access control* ([ABAC](/docs/reference/access-authn-authz/abac/)): Lets you
create policies based on resource attributes in the cluster and will allow or deny access
based on those attributes. Each line of a policy file identifies versioning properties (apiVersion
and kind) and a map of spec properties to match the subject (user or group), resource property,
non-resource property (/version or /apis), and readonly. See
[Examples](/docs/reference/access-authn-authz/abac/#examples) for details.
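To make the RBAC option concrete, here is a minimal sketch of a namespaced Role plus a RoleBinding that lets a hypothetical user `jane` read Pods in the `app-team1` namespace; all names are illustrative:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: app-team1
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: app-team1
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```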
As someone setting up authentication and authorization on your production Kubernetes cluster, here are some things to consider:
- *Set the authorization mode*: When the Kubernetes API server
([kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/))
starts, the supported authorization modes must be set using the *--authorization-mode*
flag. For example, that flag in the *kube-apiserver.yaml* file (in */etc/kubernetes/manifests*)
could be set to Node,RBAC. This would allow Node and RBAC authorization for authenticated requests.
- *Create user certificates and role bindings (RBAC)*: If you are using RBAC
authorization, users can create a CertificateSigningRequest (CSR) that can be
signed by the cluster CA. Then you can bind Roles and ClusterRoles to each user.
See [Certificate Signing Requests](/docs/reference/access-authn-authz/certificate-signing-requests/)
for details.
- *Create policies that combine attributes (ABAC)*: If you are using ABAC
authorization, you can assign combinations of attributes to form policies to
authorize selected users or groups to access particular resources (such as a
pod), namespace, or apiGroup. For more information, see
[Examples](/docs/reference/access-authn-authz/abac/#examples).
- *Consider Admission Controllers*: Additional forms of authorization for
requests that can come in through the API server include
[Webhook Token Authentication](/docs/reference/access-authn-authz/authentication/#webhook-token-authentication).
Webhooks and other special authorization types need to be enabled by adding
[Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/)
to the API server.
## Set limits on workload resources
Demands from production workloads can cause pressure both inside and outside
of the Kubernetes control plane. Consider these items when setting up for the
needs of your cluster's workloads:
- *Set namespace limits*: Set per-namespace quotas on things like memory and CPU. See
[Manage Memory, CPU, and API Resources](/docs/tasks/administer-cluster/manage-resources/)
for details. You can also set
[Hierarchical Namespaces](/blog/2020/08/14/introducing-hierarchical-namespaces/)
for inheriting limits (a minimal quota example appears after this list).
- *Prepare for DNS demand*: If you expect workloads to massively scale up,
your DNS service must be ready to scale up as well. See
[Autoscale the DNS service in a Cluster](/docs/tasks/administer-cluster/dns-horizontal-autoscaling/).
- *Create additional service accounts*: User accounts determine what users can
do on a cluster, while a service account defines pod access within a particular
namespace. By default, a pod takes on the default service account from its namespace.
See [Managing Service Accounts](/docs/reference/access-authn-authz/service-accounts-admin/)
for information on creating a new service account. For example, you might want to:
- Add secrets that a pod could use to pull images from a particular container registry. See
[Configure Service Accounts for Pods](/docs/tasks/configure-pod-container/configure-service-account/)
for an example.
- Assign RBAC permissions to a service account. See
[ServiceAccount permissions](/docs/reference/access-authn-authz/rbac/#service-account-permissions)
for details.
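As a minimal sketch of the per-namespace limits mentioned above, a ResourceQuota like the following caps the total CPU and memory that Pods in one namespace can request and consume; the namespace and values are placeholders:
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: app-team1
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
```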
## What's next
- Decide if you want to build your own production Kubernetes or obtain one from
available [Turnkey Cloud Solutions](/docs/setup/production-environment/turnkey-solutions/)
or [Kubernetes Partners](/partners/).
- If you choose to build your own cluster, plan how you want to
handle [certificates](/docs/setup/best-practices/certificates/)
and set up high availability for features such as
[etcd](/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/)
and the
[API server](/docs/setup/production-environment/tools/kubeadm/ha-topology/).
- Choose from [kubeadm](/docs/setup/production-environment/tools/kubeadm/),
[kops](https://kops.sigs.k8s.io/) or
[Kubespray](https://kubespray.io/) deployment methods.
- Configure user management by determining your
[Authentication](/docs/reference/access-authn-authz/authentication/) and
[Authorization](/docs/reference/access-authn-authz/authorization/) methods.
- Prepare for application workloads by setting up
[resource limits](/docs/tasks/administer-cluster/manage-resources/),
[DNS autoscaling](/docs/tasks/administer-cluster/dns-horizontal-autoscaling/)
and [service accounts](/docs/reference/access-authn-authz/service-accounts-admin/).
---
title: Troubleshooting kubeadm
content_type: concept
weight: 20
---
<!-- overview -->
As with any program, you might run into an error installing or running kubeadm.
This page lists some common failure scenarios and provides steps that can help you understand and fix the problem.
If your problem is not listed below, please take the following steps:
- If you think your problem is a bug with kubeadm:
- Go to [github.com/kubernetes/kubeadm](https://github.com/kubernetes/kubeadm/issues) and search for existing issues.
- If no issue exists, please [open one](https://github.com/kubernetes/kubeadm/issues/new) and follow the issue template.
- If you are unsure about how kubeadm works, you can ask on [Slack](https://slack.k8s.io/) in `#kubeadm`,
or open a question on [StackOverflow](https://stackoverflow.com/questions/tagged/kubernetes). Please include
relevant tags like `#kubernetes` and `#kubeadm` so folks can help you.
<!-- body -->
## Not possible to join a v1.18 Node to a v1.17 cluster due to missing RBAC
In v1.18, kubeadm added a check that prevents joining a Node to the cluster if a Node with the same name already exists.
This required adding RBAC for the bootstrap-token user to be able to GET a Node object.
However, this causes an issue where `kubeadm join` from v1.18 cannot join a cluster created by kubeadm v1.17.
To work around the issue, you have two options:
Execute `kubeadm init phase bootstrap-token` on a control-plane node using kubeadm v1.18.
Note that this enables the rest of the bootstrap-token permissions as well.
or
Apply the following RBAC manually using `kubectl apply -f ...`:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: kubeadm:get-nodes
rules:
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubeadm:get-nodes
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubeadm:get-nodes
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:bootstrappers:kubeadm:default-node-token
```
## `ebtables` or some similar executable not found during installation
If you see the following warnings while running `kubeadm init`
```console
[preflight] WARNING: ebtables not found in system path
[preflight] WARNING: ethtool not found in system path
```
Then you may be missing `ebtables`, `ethtool` or a similar executable on your node.
You can install them with the following commands:
- For Ubuntu/Debian users, run `apt install ebtables ethtool`.
- For CentOS/Fedora users, run `yum install ebtables ethtool`.
## kubeadm blocks waiting for control plane during installation
If you notice that `kubeadm init` hangs after printing out the following line:
```console
[apiclient] Created API client, waiting for the control plane to become ready
```
This may be caused by a number of problems. The most common are:
- network connection problems. Check that your machine has full network connectivity before continuing.
- the cgroup driver of the container runtime differs from that of the kubelet. To understand how to
configure it properly, see [Configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/).
- control plane containers are crashlooping or hanging. You can check this by running `docker ps`
and investigating each container by running `docker logs`. For other container runtimes, see
[Debugging Kubernetes nodes with crictl](/docs/tasks/debug/debug-cluster/crictl/); a minimal `crictl` example appears after this list.
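For container runtimes other than Docker Engine, the equivalent checks with `crictl` look roughly like this; the container ID is a placeholder:
```shell
# List all containers, including exited or crash-looping ones
crictl ps -a
# Show the logs of a specific container (replace <container-id>)
crictl logs <container-id>
```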
## kubeadm blocks when removing managed containers
The following could happen if the container runtime halts and does not remove
any Kubernetes-managed containers:
```shell
sudo kubeadm reset
```
```console
[preflight] Running pre-flight checks
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers
(block)
```
A possible solution is to restart the container runtime and then re-run `kubeadm reset`.
You can also use `crictl` to debug the state of the container runtime. See
[Debugging Kubernetes nodes with crictl](/docs/tasks/debug/debug-cluster/crictl/).
## Pods in `RunContainerError`, `CrashLoopBackOff` or `Error` state
Right after `kubeadm init` there should not be any pods in these states.
- If there are pods in one of these states _right after_ `kubeadm init`, please open an
issue in the kubeadm repo. `coredns` (or `kube-dns`) should be in the `Pending` state
until you have deployed the network add-on.
- If you see Pods in the `RunContainerError`, `CrashLoopBackOff` or `Error` state
after deploying the network add-on and nothing happens to `coredns` (or `kube-dns`),
it's very likely that the Pod Network add-on that you installed is somehow broken.
You might have to grant it more RBAC privileges or use a newer version. Please file
an issue in the Pod Network providers' issue tracker and get the issue triaged there.
## `coredns` is stuck in the `Pending` state
This is **expected** and part of the design. kubeadm is network provider-agnostic, so the admin
should [install the pod network add-on](/docs/concepts/cluster-administration/addons/)
of choice. You have to install a Pod Network
before CoreDNS may be deployed fully. Hence the `Pending` state before the network is set up.
## `HostPort` services do not work
The `HostPort` and `HostIP` functionality is available depending on your Pod Network
provider. Please contact the author of the Pod Network add-on to find out whether
`HostPort` and `HostIP` functionality are available.
Calico, Canal, and Flannel CNI providers are verified to support HostPort.
For more information, see the
[CNI portmap documentation](https://github.com/containernetworking/plugins/blob/master/plugins/meta/portmap/README.md).
If your network provider does not support the portmap CNI plugin, you may need to use the
[NodePort feature of services](/docs/concepts/services-networking/service/#type-nodeport)
or use `HostNetwork=true`.
## Pods are not accessible via their Service IP
- Many network add-ons do not yet enable [hairpin mode](/docs/tasks/debug/debug-application/debug-service/#a-pod-fails-to-reach-itself-via-the-service-ip)
which allows pods to access themselves via their Service IP. This is an issue related to
[CNI](https://github.com/containernetworking/cni/issues/476). Please contact the network
add-on provider to get the latest status of their support for hairpin mode.
- If you are using VirtualBox (directly or via Vagrant), you will need to
ensure that `hostname -i` returns a routable IP address. By default, the first
interface is connected to a non-routable host-only network. A workaround
is to modify `/etc/hosts`, see this
[Vagrantfile](https://github.com/errordeveloper/k8s-playground/blob/22dd39dfc06111235620e6c4404a96ae146f26fd/Vagrantfile#L11)
for an example.
## TLS certificate errors
The following error indicates a possible certificate mismatch.
```none
# kubectl get pods
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
```
- Verify that the `$HOME/.kube/config` file contains a valid certificate, and
regenerate a certificate if necessary. The certificates in a kubeconfig file
are base64 encoded. The `base64 --decode` command can be used to decode the certificate
and `openssl x509 -text -noout` can be used for viewing the certificate information.
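For example, assuming the client certificate is embedded in the first `users` entry of the kubeconfig, the following sketch decodes it and prints its details:
```sh
kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' \
  | base64 --decode \
  | openssl x509 -text -noout
```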
- Unset the `KUBECONFIG` environment variable using:
```sh
unset KUBECONFIG
```
Or set it to the default `KUBECONFIG` location:
```sh
export KUBECONFIG=/etc/kubernetes/admin.conf
```
- Another workaround is to overwrite the existing `kubeconfig` for the "admin" user:
```sh
mv $HOME/.kube $HOME/.kube.bak
mkdir $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
## Kubelet client certificate rotation fails {#kubelet-client-cert}
By default, kubeadm configures a kubelet with automatic rotation of client certificates by using the
`/var/lib/kubelet/pki/kubelet-client-current.pem` symlink specified in `/etc/kubernetes/kubelet.conf`.
If this rotation process fails you might see errors such as `x509: certificate has expired or is not yet valid`
in kube-apiserver logs. To fix the issue you must follow these steps:
1. Back up and delete `/etc/kubernetes/kubelet.conf` and `/var/lib/kubelet/pki/kubelet-client*` from the failed node.
1. From a working control plane node in the cluster that has `/etc/kubernetes/pki/ca.key` execute
`kubeadm kubeconfig user --org system:nodes --client-name system:node:$NODE > kubelet.conf`.
`$NODE` must be set to the name of the existing failed node in the cluster.
Modify the resulting `kubelet.conf` manually to adjust the cluster name and server endpoint,
or pass `kubeconfig user --config` (see [Generating kubeconfig files for additional users](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#kubeconfig-additional-users)). If your cluster does not have
the `ca.key` you must sign the embedded certificates in the `kubelet.conf` externally.
1. Copy the resulting `kubelet.conf` to `/etc/kubernetes/kubelet.conf` on the failed node.
1. Restart the kubelet (`systemctl restart kubelet`) on the failed node and wait for
`/var/lib/kubelet/pki/kubelet-client-current.pem` to be recreated.
1. Manually edit the `kubelet.conf` to point to the rotated kubelet client certificates, by replacing
`client-certificate-data` and `client-key-data` with:
```yaml
client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
```
1. Restart the kubelet.
1. Make sure the node becomes `Ready`.
## Default NIC When using flannel as the pod network in Vagrant
The following error might indicate that something was wrong in the pod network:
```sh
Error from server (NotFound): the server could not find the requested resource
```
- If you're using flannel as the pod network inside Vagrant, then you will have to
specify the default interface name for flannel.
Vagrant typically assigns two interfaces to all VMs. The first, for which all hosts
are assigned the IP address `10.0.2.15`, is for external traffic that gets NATed.
This may lead to problems with flannel, which defaults to the first interface on a host.
This leads to all hosts thinking they have the same public IP address. To prevent this,
pass the `--iface eth1` flag to flannel so that the second interface is chosen.
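As a hedged sketch (the DaemonSet name and namespace below follow the upstream flannel manifest and may differ in your deployment), the flag can be added by editing the DaemonSet:
```sh
# Append "--iface=eth1" to the kube-flannel container's args, then let the pods roll out
kubectl -n kube-flannel edit daemonset kube-flannel-ds
```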
## Non-public IP used for containers
In some situations `kubectl logs` and `kubectl run` commands may return with the
following errors in an otherwise functional cluster:
```console
Error from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc65b868-glc5m/mysql: dial tcp 10.19.0.41:10250: getsockopt: no route to host
```
- This may be due to Kubernetes using an IP that cannot communicate with other IPs on
the seemingly same subnet, possibly by policy of the machine provider.
- DigitalOcean assigns a public IP to `eth0` as well as a private one to be used internally
as an anchor for their floating IP feature, yet `kubelet` will pick the latter as the node's
`InternalIP` instead of the public one.
Use `ip addr show` to check for this scenario instead of `ifconfig` because `ifconfig` will
not display the offending alias IP address. Alternatively, an API endpoint specific to
DigitalOcean allows you to query the anchor IP from the droplet:
```sh
curl http://169.254.169.254/metadata/v1/interfaces/public/0/anchor_ipv4/address
```
The workaround is to tell the `kubelet` which IP to use by passing the `--node-ip` flag.
When using DigitalOcean, it can be the public one (assigned to `eth0`) or
the private one (assigned to `eth1`) should you want to use the optional
private network. The `kubeletExtraArgs` section of the kubeadm
[`NodeRegistrationOptions` structure](/docs/reference/config-api/kubeadm-config.v1beta4/#kubeadm-k8s-io-v1beta4-NodeRegistrationOptions)
can be used for this.
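For example, a minimal `InitConfiguration` sketch that sets `--node-ip` through `kubeletExtraArgs` (the address is a placeholder; use the IP the node should advertise):
```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    - name: "node-ip"
      value: "203.0.113.10"   # placeholder; replace with the desired node IP
```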
Then restart `kubelet`:
```sh
systemctl daemon-reload
systemctl restart kubelet
```
## `coredns` pods have `CrashLoopBackOff` or `Error` state
If you have nodes that are running SELinux with an older version of Docker, you might experience a scenario
where the `coredns` pods are not starting. To solve that, you can try one of the following options:
- Upgrade to a [newer version of Docker](/docs/setup/production-environment/container-runtimes/#docker).
- [Disable SELinux](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-enabling_and_disabling_selinux-disabling_selinux).
- Modify the `coredns` deployment to set `allowPrivilegeEscalation` to `true`:
```bash
kubectl -n kube-system get deployment coredns -o yaml | \
sed 's/allowPrivilegeEscalation: false/allowPrivilegeEscalation: true/g' | \
kubectl apply -f -
```
Another cause for CoreDNS to have `CrashLoopBackOff` is when a CoreDNS Pod deployed in Kubernetes detects a loop.
[A number of workarounds](https://github.com/coredns/coredns/tree/master/plugin/loop#troubleshooting-loops-in-kubernetes-clusters)
are available to avoid Kubernetes trying to restart the CoreDNS Pod every time CoreDNS detects the loop and exits.
Disabling SELinux or setting `allowPrivilegeEscalation` to `true` can compromise
the security of your cluster.
## etcd pods restart continually
If you encounter the following error:
```
rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "process_linux.go:110: decoding init error from pipe caused \"read parent: connection reset by peer\""
```
This issue appears if you run CentOS 7 with Docker 1.13.1.84.
This version of Docker can prevent the kubelet from executing into the etcd container.
To work around the issue, choose one of these options:
- Roll back to an earlier version of Docker, such as 1.13.1-75
```
yum downgrade docker-1.13.1-75.git8633870.el7.centos.x86_64 docker-client-1.13.1-75.git8633870.el7.centos.x86_64 docker-common-1.13.1-75.git8633870.el7.centos.x86_64
```
- Install one of the more recent recommended versions, such as 18.06:
```bash
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce-18.06.1.ce-3.el7.x86_64
```
## Not possible to pass a comma separated list of values to arguments inside a `--component-extra-args` flag
`kubeadm init` flags such as `--component-extra-args` allow you to pass custom arguments to a control-plane
component like the kube-apiserver. However, this mechanism is limited due to the underlying type used for parsing
the values (`mapStringString`).
If you decide to pass an argument that supports multiple, comma-separated values such as
`--apiserver-extra-args "enable-admission-plugins=LimitRanger,NamespaceExists"` this flag will fail with
`flag: malformed pair, expect string=string`. This happens because the list of arguments for
`--apiserver-extra-args` expects `key=value` pairs, and in this case `NamespaceExists` is considered
a key that is missing a value.
Alternatively, you can try separating the `key=value` pairs like so:
`--apiserver-extra-args "enable-admission-plugins=LimitRanger,enable-admission-plugins=NamespaceExists"`
but this will result in the key `enable-admission-plugins` only having the value of `NamespaceExists`.
A known workaround is to use the kubeadm [configuration file](/docs/reference/config-api/kubeadm-config.v1beta4/).
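For example, a minimal `ClusterConfiguration` sketch that passes the comma-separated value without hitting the flag parsing limitation:
```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  extraArgs:
    - name: "enable-admission-plugins"
      value: "LimitRanger,NamespaceExists"
```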
## kube-proxy scheduled before node is initialized by cloud-controller-manager
In cloud provider scenarios, kube-proxy can end up being scheduled on new worker nodes before
the cloud-controller-manager has initialized the node addresses. This causes kube-proxy to fail
to pick up the node's IP address properly and has knock-on effects to the proxy function managing
load balancers.
The following error can be seen in kube-proxy Pods:
```
server.go:610] Failed to retrieve node IP: host IP unknown; known addresses: []
proxier.go:340] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
```
A known solution is to patch the kube-proxy DaemonSet to allow scheduling it on control-plane
nodes regardless of their conditions, keeping it off of other nodes until their initial guarding
conditions abate:
```
kubectl -n kube-system patch ds kube-proxy -p='{
"spec": {
"template": {
"spec": {
"tolerations": [
{
"key": "CriticalAddonsOnly",
"operator": "Exists"
},
{
"effect": "NoSchedule",
"key": "node-role.kubernetes.io/control-plane"
}
]
}
}
}
}'
```
The tracking issue for this problem is [here](https://github.com/kubernetes/kubeadm/issues/1027).
## `/usr` is mounted read-only on nodes {#usr-mounted-read-only}
On Linux distributions such as Fedora CoreOS or Flatcar Container Linux, the directory `/usr` is mounted as a read-only filesystem.
For [flex-volume support](https://github.com/kubernetes/community/blob/ab55d85/contributors/devel/sig-storage/flexvolume.md),
Kubernetes components like the kubelet and kube-controller-manager use the default path of
`/usr/libexec/kubernetes/kubelet-plugins/volume/exec/`, yet the flex-volume directory _must be writeable_
for the feature to work.
FlexVolume was deprecated in the Kubernetes v1.23 release.
To work around this issue, you can configure the flex-volume directory using the kubeadm
[configuration file](/docs/reference/config-api/kubeadm-config.v1beta4/).
On the primary control-plane Node (created using `kubeadm init`), pass the following
file using `--config`:
```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
nodeRegistration:
kubeletExtraArgs:
- name: "volume-plugin-dir"
value: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
controllerManager:
extraArgs:
- name: "flex-volume-plugin-dir"
value: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"
```
On joining Nodes:
```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: JoinConfiguration
nodeRegistration:
kubeletExtraArgs:
- name: "volume-plugin-dir"
value: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"
```
Alternatively, you can modify `/etc/fstab` to make the `/usr` mount writeable, but please
be advised that this is modifying a design principle of the Linux distribution.
## `kubeadm upgrade plan` prints out `context deadline exceeded` error message
This error message is shown when upgrading a Kubernetes cluster with `kubeadm` in
the case of running an external etcd. This is not a critical bug and happens because
older versions of kubeadm perform a version check on the external etcd cluster.
You can proceed with `kubeadm upgrade apply ...`.
This issue is fixed as of version 1.19.
## `kubeadm reset` unmounts `/var/lib/kubelet`
If `/var/lib/kubelet` is being mounted, performing a `kubeadm reset` will effectively unmount it.
To work around the issue, re-mount the `/var/lib/kubelet` directory after performing the `kubeadm reset` operation.
This is a regression introduced in kubeadm 1.15. The issue is fixed in 1.20.
## Cannot use the metrics-server securely in a kubeadm cluster
In a kubeadm cluster, the [metrics-server](https://github.com/kubernetes-sigs/metrics-server)
can be used insecurely by passing the `--kubelet-insecure-tls` flag to it. This is not recommended for production clusters.
If you want to use TLS between the metrics-server and the kubelet there is a problem,
since kubeadm deploys a self-signed serving certificate for the kubelet. This can cause the following errors
on the side of the metrics-server:
```
x509: certificate signed by unknown authority
x509: certificate is valid for IP-foo not IP-bar
```
See [Enabling signed kubelet serving certificates](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#kubelet-serving-certs)
to understand how to configure the kubelets in a kubeadm cluster to have properly signed serving certificates.
Also see [How to run the metrics-server securely](https://github.com/kubernetes-sigs/metrics-server/blob/master/FAQ.md#how-to-run-metrics-server-securely).
## Upgrade fails due to etcd hash not changing
Only applicable to upgrading a control plane node with a kubeadm binary v1.28.3 or later,
where the node is currently managed by kubeadm versions v1.28.0, v1.28.1 or v1.28.2.
Here is the error message you may encounter:
```
[upgrade/etcd] Failed to upgrade etcd: couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced: static Pod hash for component etcd on Node kinder-upgrade-control-plane-1 did not change after 5m0s: timed out waiting for the condition
[upgrade/etcd] Waiting for previous etcd to become available
I0907 10:10:09.109104 3704 etcd.go:588] [etcd] attempting to see if all cluster endpoints ([https://172.17.0.6:2379/ https://172.17.0.4:2379/ https://172.17.0.3:2379/]) are available 1/10
[upgrade/etcd] Etcd was rolled back and is now available
static Pod hash for component etcd on Node kinder-upgrade-control-plane-1 did not change after 5m0s: timed out waiting for the condition
couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced
k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.rollbackOldManifests
cmd/kubeadm/app/phases/upgrade/staticpods.go:525
k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.upgradeComponent
cmd/kubeadm/app/phases/upgrade/staticpods.go:254
k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.performEtcdStaticPodUpgrade
cmd/kubeadm/app/phases/upgrade/staticpods.go:338
...
```
The reason for this failure is that the affected versions generate an etcd manifest file with
unwanted defaults in the PodSpec. This will result in a diff from the manifest comparison,
and kubeadm will expect a change in the Pod hash, but the kubelet will never update the hash.
There are two ways to work around this issue if you see it in your cluster:
- The etcd upgrade can be skipped between the affected versions and v1.28.3 (or later) by using:
```shell
kubeadm upgrade {apply|node} [version] --etcd-upgrade=false
```
This is not recommended in case a new etcd version was introduced by a later v1.28 patch version.
- Before upgrade, patch the manifest for the etcd static pod, to remove the problematic defaulted attributes:
```patch
diff --git a/etc/kubernetes/manifests/etcd_defaults.yaml b/etc/kubernetes/manifests/etcd_origin.yaml
index d807ccbe0aa..46b35f00e15 100644
--- a/etc/kubernetes/manifests/etcd_defaults.yaml
+++ b/etc/kubernetes/manifests/etcd_origin.yaml
@@ -43,7 +43,6 @@ spec:
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
- successThreshold: 1
timeoutSeconds: 15
name: etcd
resources:
@@ -59,26 +58,18 @@ spec:
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
- successThreshold: 1
timeoutSeconds: 15
- terminationMessagePath: /dev/termination-log
- terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/lib/etcd
name: etcd-data
- mountPath: /etc/kubernetes/pki/etcd
name: etcd-certs
- dnsPolicy: ClusterFirst
- enableServiceLinks: true
hostNetwork: true
priority: 2000001000
priorityClassName: system-node-critical
- restartPolicy: Always
- schedulerName: default-scheduler
securityContext:
seccompProfile:
type: RuntimeDefault
- terminationGracePeriodSeconds: 30
volumes:
- hostPath:
path: /etc/kubernetes/pki/etcd
```
More information can be found in the
[tracking issue](https://github.com/kubernetes/kubeadm/issues/2927) for this bug.
---
title: Installing kubeadm
content_type: task
weight: 10
card:
name: setup
weight: 40
title: Install the kubeadm setup tool
---
<!-- overview -->
<img src="/images/kubeadm-stacked-color.png" align="right" width="150px"></img>
This page shows how to install the `kubeadm` toolbox.
For information on how to create a cluster with kubeadm once you have performed this installation process,
see the [Creating a cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) page.
## Before you begin {#before-you-begin}
* A compatible Linux host. The Kubernetes project provides generic instructions for Linux distributions
based on Debian and Red Hat, and those distributions without a package manager.
* 2 GB or more of RAM per machine (any less will leave little room for your apps).
* 2 CPUs or more for control plane machines.
* Full network connectivity between all machines in the cluster (public or private network is fine).
* Unique hostname, MAC address, and product_uuid for every node. See [here](#verify-mac-address) for more details.
* Certain ports are open on your machines. See [here](#check-required-ports) for more details.
The `kubeadm` installation is done via binaries that use dynamic linking and assumes that your target system provides `glibc`.
This is a reasonable assumption on many Linux distributions (including Debian, Ubuntu, Fedora, CentOS, etc.)
but it is not always the case with custom and lightweight distributions which don't include `glibc` by default, such as Alpine Linux.
The expectation is that the distribution either includes `glibc` or a
[compatibility layer](https://wiki.alpinelinux.org/wiki/Running_glibc_programs)
that provides the expected symbols.
<!-- steps -->
## Verify the MAC address and product_uuid are unique for every node {#verify-mac-address}
* You can get the MAC address of the network interfaces using the command `ip link` or `ifconfig -a`
* The product_uuid can be checked by using the command `sudo cat /sys/class/dmi/id/product_uuid`
It is very likely that hardware devices will have unique addresses, although some virtual machines may have
identical values. Kubernetes uses these values to uniquely identify the nodes in the cluster.
If these values are not unique to each node, the installation process
may [fail](https://github.com/kubernetes/kubeadm/issues/31).
## Check network adapters
If you have more than one network adapter, and your Kubernetes components are not reachable on the default
route, we recommend you add IP route(s) so Kubernetes cluster addresses go via the appropriate adapter.
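For example, a hedged sketch that routes the service and pod CIDRs via a second adapter (the CIDRs and the interface name are placeholders for illustration; use the values of your cluster):
```shell
# Send cluster-internal traffic out through the adapter that reaches the other nodes
sudo ip route add 10.96.0.0/12 dev eth1
sudo ip route add 192.168.0.0/16 dev eth1
```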
## Check required ports {#check-required-ports}
These [required ports](/docs/reference/networking/ports-and-protocols/)
need to be open in order for Kubernetes components to communicate with each other.
You can use tools like [netcat](https://netcat.sourceforge.net) to check if a port is open. For example:
```shell
nc 127.0.0.1 6443 -v
```
The pod network plugin you use may also require certain ports to be
open. Since this differs with each pod network plugin, please see the
documentation for the plugins about what port(s) those need.
## Swap configuration {#swap-configuration}
The default behavior of a kubelet is to fail to start if swap memory is detected on a node.
This means that swap should either be disabled or tolerated by kubelet.
* To tolerate swap, add `failSwapOn: false` to kubelet configuration or as a command line argument.
Note: even if `failSwapOn: false` is provided, workloads wouldn't have swap access by default.
This can be changed by setting a `swapBehavior`, again in the kubelet configuration file. To use swap,
set a `swapBehavior` other than the default `NoSwap` setting.
See [Swap memory management](/docs/concepts/architecture/nodes/#swap-memory) for more details, and the configuration sketch after this list.
* To disable swap, `sudo swapoff -a` can be used to disable swapping temporarily.
To make this change persistent across reboots, make sure swap is disabled in
config files like `/etc/fstab`, `systemd.swap`, depending how it was configured on your system.
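As a minimal sketch, the following `KubeletConfiguration` fragment tolerates swap and opts workloads into limited swap use (field names follow `kubelet.config.k8s.io/v1beta1`; verify them against your kubelet version):
```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
memorySwap:
  swapBehavior: LimitedSwap   # any value other than the default NoSwap enables swap use
```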
## Installing a container runtime {#installing-runtime}
To run containers in Pods, Kubernetes uses a
container runtime.
By default, Kubernetes uses the
Container Runtime Interface (CRI)
to interface with your chosen container runtime.
If you don't specify a runtime, kubeadm automatically tries to detect an installed
container runtime by scanning through a list of known endpoints.
If multiple or no container runtimes are detected kubeadm will throw an error
and will request that you specify which one you want to use.
See [container runtimes](/docs/setup/production-environment/container-runtimes/)
for more information.
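If more than one runtime is installed, you can point kubeadm at a specific endpoint explicitly, for example (the socket path is taken from the tables below; adjust it to your setup):
```shell
sudo kubeadm init --cri-socket unix:///var/run/containerd/containerd.sock
```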
Docker Engine does not implement the [CRI](/docs/concepts/architecture/cri/)
which is a requirement for a container runtime to work with Kubernetes.
For that reason, an additional service [cri-dockerd](https://mirantis.github.io/cri-dockerd/)
has to be installed. cri-dockerd is a project based on the legacy built-in
Docker Engine support that was [removed](/dockershim) from the kubelet in version 1.24.
The tables below include the known endpoints for supported operating systems:
| Runtime | Path to Unix domain socket |
|------------------------------------|----------------------------------------------|
| containerd | `unix:///var/run/containerd/containerd.sock` |
| CRI-O | `unix:///var/run/crio/crio.sock` |
| Docker Engine (using cri-dockerd) | `unix:///var/run/cri-dockerd.sock` |
| Runtime | Path to Windows named pipe |
|------------------------------------|----------------------------------------------|
| containerd | `npipe:////./pipe/containerd-containerd` |
| Docker Engine (using cri-dockerd) | `npipe:////./pipe/cri-dockerd` |
## Installing kubeadm, kubelet and kubectl
You will install these packages on all of your machines:
* `kubeadm`: the command to bootstrap the cluster.
* `kubelet`: the component that runs on all of the machines in your cluster
and does things like starting pods and containers.
* `kubectl`: the command line util to talk to your cluster.
kubeadm **will not** install or manage `kubelet` or `kubectl` for you, so you will
need to ensure they match the version of the Kubernetes control plane you want
kubeadm to install for you. If you do not, there is a risk of a version skew occurring that
can lead to unexpected, buggy behaviour. However, _one_ minor version skew between the
kubelet and the control plane is supported, but the kubelet version may never exceed the API
server version. For example, the kubelet running 1.7.0 should be fully compatible with a 1.8.0 API server,
but not vice versa.
For information about installing `kubectl`, see [Install and set up kubectl](/docs/tasks/tools/).
These instructions exclude all Kubernetes packages from any system upgrades.
This is because kubeadm and Kubernetes require
[special attention to upgrade](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/).
For more information on version skews, see:
* Kubernetes [version and version-skew policy](/docs/setup/release/version-skew-policy/)
* Kubeadm-specific [version skew policy](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#version-skew-policy)
There's a dedicated package repository for each Kubernetes minor version. If you want to install
a minor version other than the one documented on this page, please see the installation guide for
your desired minor version.
These instructions are for the Kubernetes minor version documented on this page.
1. Update the `apt` package index and install packages needed to use the Kubernetes `apt` repository:
```shell
sudo apt-get update
# apt-transport-https may be a dummy package; if so, you can skip that package
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
```
2. Download the public signing key for the Kubernetes package repositories.
The same signing key is used for all repositories so you can disregard the version in the URL:
```shell
# If the directory `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable://deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
```
In releases older than Debian 12 and Ubuntu 22.04, directory `/etc/apt/keyrings` does not
exist by default, and it should be created before the curl command.
3. Add the appropriate Kubernetes `apt` repository. Please note that this repository has packages
for only one Kubernetes minor version; for other Kubernetes minor versions, you need to
change the Kubernetes minor version in the URL to match your desired minor version
(you should also check that you are reading the documentation for the version of Kubernetes
that you plan to install).
```shell
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable://deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
```
4. Update the `apt` package index, install kubelet, kubeadm and kubectl, and pin their version:
```shell
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```
5. (Optional) Enable the kubelet service before running kubeadm:
```shell
sudo systemctl enable --now kubelet
```
These instructions are for the Kubernetes minor version documented on this page.
1. Set SELinux to `permissive` mode:
```shell
# Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
```
- Setting SELinux in permissive mode by running `setenforce 0` and `sed ...`
effectively disables it. This is required to allow containers to access the host
filesystem; for example, some cluster network plugins require that. You have to
do this until SELinux support is improved in the kubelet.
- You can leave SELinux enabled if you know how to configure it but it may require
settings that are not supported by kubeadm.
2. Add the Kubernetes `yum` repository. The `exclude` parameter in the
repository definition ensures that the packages related to Kubernetes are
not upgraded upon running `yum update` as there's a special procedure that
must be followed for upgrading Kubernetes. Please note that this repository
has packages for only one Kubernetes minor version; for other
Kubernetes minor versions, you need to change the Kubernetes minor version
in the URL to match your desired minor version (you should also check that
you are reading the documentation for the version of Kubernetes that you
plan to install).
```shell
# This overwrites any existing configuration in /etc/yum.repos.d/kubernetes.repo
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable://rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable://rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
```
3. Install kubelet, kubeadm and kubectl:
```shell
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
```
4. (Optional) Enable the kubelet service before running kubeadm:
```shell
sudo systemctl enable --now kubelet
```
Install CNI plugins (required for most pod networks):
```bash
CNI_PLUGINS_VERSION="v1.3.0"
ARCH="amd64"
DEST="/opt/cni/bin"
sudo mkdir -p "$DEST"
curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_PLUGINS_VERSION}/cni-plugins-linux-${ARCH}-${CNI_PLUGINS_VERSION}.tgz" | sudo tar -C "$DEST" -xz
```
Define the directory to download command files:
The `DOWNLOAD_DIR` variable must be set to a writable directory.
If you are running Flatcar Container Linux, set `DOWNLOAD_DIR="/opt/bin"`.
```bash
DOWNLOAD_DIR="/usr/local/bin"
sudo mkdir -p "$DOWNLOAD_DIR"
```
Optionally install crictl (required for interaction with the Container Runtime Interface (CRI), optional for kubeadm):
```bash
CRICTL_VERSION="v1.31.0"
ARCH="amd64"
curl -L "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-${ARCH}.tar.gz" | sudo tar -C $DOWNLOAD_DIR -xz
```
Install `kubeadm`, `kubelet` and add a `kubelet` systemd service:
```bash
RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"
ARCH="amd64"
cd $DOWNLOAD_DIR
sudo curl -L --remote-name-all https://dl.k8s.io/release/${RELEASE}/bin/linux/${ARCH}/{kubeadm,kubelet}
sudo chmod +x {kubeadm,kubelet}
RELEASE_VERSION="v0.16.2"
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/krel/templates/latest/kubelet/kubelet.service" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /usr/lib/systemd/system/kubelet.service
sudo mkdir -p /usr/lib/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/krel/templates/latest/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
```
Please refer to the note in the [Before you begin](#before-you-begin) section for Linux distributions
that do not include `glibc` by default.
Install `kubectl` by following the instructions on [Install Tools page](/docs/tasks/tools/#kubectl).
Optionally, enable the kubelet service before running kubeadm:
```bash
sudo systemctl enable --now kubelet
```
The Flatcar Container Linux distribution mounts the `/usr` directory as a read-only filesystem.
Before bootstrapping your cluster, you need to take additional steps to configure a writable directory.
See the [Kubeadm Troubleshooting guide](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/#usr-mounted-read-only)
to learn how to set up a writable directory.
The kubelet is now restarting every few seconds, as it waits in a crashloop for
kubeadm to tell it what to do.
## Configuring a cgroup driver
Both the container runtime and the kubelet have a property called
["cgroup driver"](/docs/setup/production-environment/container-runtimes/#cgroup-drivers), which is important
for the management of cgroups on Linux machines.
Matching the container runtime and kubelet cgroup drivers is required or otherwise the kubelet process will fail.
See [Configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/) for more details.
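As a minimal sketch, the kubelet side can be set by including a `KubeletConfiguration` document in the kubeadm configuration (shown with the `systemd` driver, which must match what your container runtime is configured to use):
```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd   # must match the container runtime's cgroup driver
```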
## Troubleshooting
If you are running into difficulties with kubeadm, please consult our
[troubleshooting docs](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/).
## What's next
* [Using kubeadm to Create a Cluster](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/)
---
reviewers:
- sig-cluster-lifecycle
title: Customizing components with the kubeadm API
content_type: concept
weight: 40
---
<!-- overview -->
This page covers how to customize the components that kubeadm deploys. For control plane components
you can use flags in the `ClusterConfiguration` structure or patches per-node. For the kubelet
and kube-proxy you can use `KubeletConfiguration` and `KubeProxyConfiguration`, respectively.
All of these options are possible via the kubeadm configuration API.
For more details on each field in the configuration you can navigate to our
[API reference pages](/docs/reference/config-api/kubeadm-config.v1beta4/).
Customizing the CoreDNS deployment of kubeadm is currently not supported. You must manually
patch the `kube-system/coredns` ConfigMap
and recreate the CoreDNS Pods after that. Alternatively,
you can skip the default CoreDNS deployment and deploy your own variant.
For more details on that see [Using init phases with kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-phases).
To reconfigure a cluster that has already been created see
[Reconfiguring a kubeadm cluster](/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure).
<!-- body -->
## Customizing the control plane with flags in `ClusterConfiguration`
The kubeadm `ClusterConfiguration` object exposes a way for users to override the default
flags passed to control plane components such as the APIServer, ControllerManager, Scheduler and Etcd.
The components are defined using the following structures:
- `apiServer`
- `controllerManager`
- `scheduler`
- `etcd`
These structures contain a common `extraArgs` field, which consists of `name` / `value` pairs.
To override a flag for a control plane component:
1. Add the appropriate `extraArgs` to your configuration.
2. Add flags to the `extraArgs` field.
3. Run `kubeadm init` with `--config <YOUR CONFIG YAML>`.
You can generate a `ClusterConfiguration` object with default values by running `kubeadm config print init-defaults`
and saving the output to a file of your choice.
The `ClusterConfiguration` object is currently global in kubeadm clusters. This means that any flags that you add,
will apply to all instances of the same component on different nodes. To apply individual configuration per component
on different nodes you can use [patches](#patches).
Passing duplicate flags (keys), or the same flag `--foo` multiple times, is currently not supported.
To work around that you must use [patches](#patches).
### APIServer flags
For details, see the [reference documentation for kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/).
Example usage:
```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: v1.16.0
apiServer:
extraArgs:
- name: "enable-admission-plugins"
value: "AlwaysPullImages,DefaultStorageClass"
- name: "audit-log-path"
value: "/home/johndoe/audit.log"
```
### ControllerManager flags
For details, see the [reference documentation for kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/).
Example usage:
```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: v1.16.0
controllerManager:
extraArgs:
- name: "cluster-signing-key-file"
value: "/home/johndoe/keys/ca.key"
- name: "deployment-controller-sync-period"
value: "50"
```
### Scheduler flags
For details, see the [reference documentation for kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/).
Example usage:
```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: v1.16.0
scheduler:
extraArgs:
- name: "config"
value: "/etc/kubernetes/scheduler-config.yaml"
extraVolumes:
- name: schedulerconfig
hostPath: /home/johndoe/schedconfig.yaml
mountPath: /etc/kubernetes/scheduler-config.yaml
readOnly: true
pathType: "File"
```
### Etcd flags
For details, see the [etcd server documentation](https://etcd.io/docs/).
Example usage:
```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
etcd:
local:
extraArgs:
- name: "election-timeout"
value: 1000
```
## Customizing with patches {#patches}
Kubeadm allows you to pass a directory with patch files to `InitConfiguration` and `JoinConfiguration`
on individual nodes. These patches can be used as the last customization step before component configuration
is written to disk.
You can pass this file to `kubeadm init` with `--config <YOUR CONFIG YAML>`:
```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
patches:
directory: /home/user/somedir
```
For `kubeadm init` you can pass a file containing both a `ClusterConfiguration` and `InitConfiguration`
separated by `---`.
You can pass this file to `kubeadm join` with `--config <YOUR CONFIG YAML>`:
```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: JoinConfiguration
patches:
directory: /home/user/somedir
```
The directory must contain files named `target[suffix][+patchtype].extension`.
For example, `kube-apiserver0+merge.yaml` or just `etcd.json`.
- `target` can be one of `kube-apiserver`, `kube-controller-manager`, `kube-scheduler`, `etcd`
and `kubeletconfiguration`.
- `suffix` is an optional string that can be used to determine which patches are applied first
alpha-numerically.
- `patchtype` can be one of `strategic`, `merge` or `json` and these must match the patching formats
[supported by kubectl](/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch).
The default `patchtype` is `strategic`.
- `extension` must be either `json` or `yaml`.
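As an illustration, a strategic patch for the `kube-apiserver` target could be a file such as
`/home/user/somedir/kube-apiserver0+strategic.yaml` (the file name and resource values here are only examples):

```yaml
# A minimal sketch of a strategic merge patch for the kube-apiserver static Pod.
# The container in the kubeadm-generated manifest is named "kube-apiserver";
# the resource requests below are placeholder values.
spec:
  containers:
  - name: kube-apiserver
    resources:
      requests:
        cpu: "300m"
        memory: "512Mi"
```

kubeadm applies such a patch on top of the manifest it generates, just before the file is written to disk.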
If you are using `kubeadm upgrade` to upgrade your kubeadm nodes you must again provide the same
patches, so that the customization is preserved after upgrade. To do that you can use the `--patches`
flag, which must point to the same directory. `kubeadm upgrade` currently does not support a configuration
API structure that can be used for the same purpose.
## Customizing the kubelet {#kubelet}
To customize the kubelet you can add a [`KubeletConfiguration`](/docs/reference/config-api/kubelet-config.v1beta1/)
next to the `ClusterConfiguration` or `InitConfiguration` separated by `---` within the same configuration file.
This file can then be passed to `kubeadm init` and kubeadm will apply the same base `KubeletConfiguration`
to all nodes in the cluster.
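For example, a single configuration file combining the two could look like the following sketch
(`serverTLSBootstrap` is only an illustrative kubelet setting):

```yaml
# A minimal sketch of a file passed to "kubeadm init --config"; the
# KubeletConfiguration section becomes the base kubelet configuration
# for the whole cluster.
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: v1.16.0
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
serverTLSBootstrap: true
```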
For applying instance-specific configuration over the base `KubeletConfiguration` you can use the
[`kubeletconfiguration` patch target](#patches).
Alternatively, you can use kubelet flags as overrides by passing them in the
`nodeRegistration.kubeletExtraArgs` field supported by both `InitConfiguration` and `JoinConfiguration`.
Some kubelet flags are deprecated, so check their status in the
[kubelet reference documentation](/docs/reference/command-line-tools-reference/kubelet) before using them.
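For instance, a `JoinConfiguration` overriding a single kubelet flag for the joining node might look like this sketch
(`max-pods` is only an example flag, and the discovery fields required for a real join are omitted):

```yaml
# A minimal sketch; in the v1beta4 API, kubeletExtraArgs is a list of
# name/value pairs rather than a map.
apiVersion: kubeadm.k8s.io/v1beta4
kind: JoinConfiguration
nodeRegistration:
  kubeletExtraArgs:
  - name: "max-pods"
    value: "220"
```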
For additional details see [Configuring each kubelet in your cluster using kubeadm](/docs/setup/production-environment/tools/kubeadm/kubelet-integration).
## Customizing kube-proxy
To customize kube-proxy you can pass a `KubeProxyConfiguration` next to your `ClusterConfiguration` or
`InitConfiguration` to `kubeadm init` separated by `---`.
For more details you can navigate to our [API reference pages](/docs/reference/config-api/kubeadm-config.v1beta4/).
kubeadm deploys kube-proxy as a DaemonSet, which means
that the `KubeProxyConfiguration` would apply to all instances of kube-proxy in the cluster.
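A minimal sketch of such a combined file might look like this (`ipvs` is just one example of a kube-proxy mode):

```yaml
# A minimal sketch; the KubeProxyConfiguration section applies to every
# kube-proxy instance in the cluster.
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: v1.16.0
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
```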
---
reviewers:
- sig-cluster-lifecycle
title: Set up a High Availability etcd Cluster with kubeadm
content_type: task
weight: 70
---
<!-- overview -->
By default, kubeadm runs a local etcd instance on each control plane node.
It is also possible to treat the etcd cluster as external and provision
etcd instances on separate hosts. The differences between the two approaches are covered in the
[Options for Highly Available topology](/docs/setup/production-environment/tools/kubeadm/ha-topology) page.
This task walks through the process of creating a high availability external
etcd cluster of three members that can be used by kubeadm during cluster creation.
## Before you begin
- Three hosts that can talk to each other over TCP ports 2379 and 2380. This
document assumes these default ports. However, they are configurable through
the kubeadm config file.
- Each host must have systemd and a bash compatible shell installed.
- Each host must [have a container runtime, kubelet, and kubeadm installed](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).
- Each host should have access to the Kubernetes container image registry (`registry.k8s.io`) or list/pull the required etcd image using
`kubeadm config images list/pull`. This guide will set up etcd instances as
[static pods](/docs/tasks/configure-pod-container/static-pod/) managed by a kubelet.
- Some infrastructure to copy files between hosts. For example `ssh` and `scp`
can satisfy this requirement.
<!-- steps -->
## Setting up the cluster
The general approach is to generate all certs on one node and only distribute
the _necessary_ files to the other nodes.
kubeadm contains all the necessary cryptographic machinery to generate
the certificates described below; no other cryptographic tooling is required for
this example.
The examples below use IPv4 addresses but you can also configure kubeadm, the kubelet and etcd
to use IPv6 addresses. Dual-stack is supported by some Kubernetes options, but not by etcd. For more details
on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/setup/production-environment/tools/kubeadm/dual-stack-support/).
1. Configure the kubelet to be a service manager for etcd.
You must do this on every host where etcd should be running.
Since etcd was created first, you must override the service priority by creating a new unit file
that has higher precedence than the kubeadm-provided kubelet unit file.
```sh
cat << EOF > /etc/systemd/system/kubelet.service.d/kubelet.conf
# Replace "systemd" with the cgroup driver of your container runtime. The default value in the kubelet is "cgroupfs".
# Replace the value of "containerRuntimeEndpoint" for a different container runtime if needed.
#
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
anonymous:
enabled: false
webhook:
enabled: false
authorization:
mode: AlwaysAllow
cgroupDriver: systemd
address: 127.0.0.1
containerRuntimeEndpoint: unix:///var/run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
EOF
cat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --config=/etc/systemd/system/kubelet.service.d/kubelet.conf
Restart=always
EOF
systemctl daemon-reload
systemctl restart kubelet
```
Check the kubelet status to ensure it is running.
```sh
systemctl status kubelet
```
1. Create configuration files for kubeadm.
Generate one kubeadm configuration file for each host that will have an etcd
member running on it using the following script.
```sh
# Update HOST0, HOST1 and HOST2 with the IPs of your hosts
export HOST0=10.0.0.6
export HOST1=10.0.0.7
export HOST2=10.0.0.8
# Update NAME0, NAME1 and NAME2 with the hostnames of your hosts
export NAME0="infra0"
export NAME1="infra1"
export NAME2="infra2"
# Create temp directories to store files that will end up on other hosts
mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/
HOSTS=(${HOST0} ${HOST1} ${HOST2})
NAMES=(${NAME0} ${NAME1} ${NAME2})
for i in "${!HOSTS[@]}"; do
HOST=${HOSTS[$i]}
NAME=${NAMES[$i]}
cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml
---
apiVersion: "kubeadm.k8s.io/v1beta4"
kind: InitConfiguration
nodeRegistration:
name: ${NAME}
localAPIEndpoint:
advertiseAddress: ${HOST}
---
apiVersion: "kubeadm.k8s.io/v1beta4"
kind: ClusterConfiguration
etcd:
local:
serverCertSANs:
- "${HOST}"
peerCertSANs:
- "${HOST}"
extraArgs:
- name: initial-cluster
value: ${NAMES[0]}=https://${HOSTS[0]}:2380,${NAMES[1]}=https://${HOSTS[1]}:2380,${NAMES[2]}=https://${HOSTS[2]}:2380
- name: initial-cluster-state
value: new
- name: name
value: ${NAME}
- name: listen-peer-urls
value: https://${HOST}:2380
- name: listen-client-urls
value: https://${HOST}:2379
- name: advertise-client-urls
value: https://${HOST}:2379
- name: initial-advertise-peer-urls
value: https://${HOST}:2380
EOF
done
```
1. Generate the certificate authority.
If you already have a CA then the only action needed is copying the CA's `crt` and
`key` files to `/etc/kubernetes/pki/etcd/ca.crt` and
`/etc/kubernetes/pki/etcd/ca.key`. After those files have been copied,
proceed to the next step, "Create certificates for each member".
If you do not already have a CA then run this command on `$HOST0` (where you
generated the configuration files for kubeadm).
```
kubeadm init phase certs etcd-ca
```
This creates two files:
- `/etc/kubernetes/pki/etcd/ca.crt`
- `/etc/kubernetes/pki/etcd/ca.key`
1. Create certificates for each member.
```sh
kubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
cp -R /etc/kubernetes/pki /tmp/${HOST2}/
# cleanup non-reusable certificates
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete
kubeadm init phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
cp -R /etc/kubernetes/pki /tmp/${HOST1}/
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete
kubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
# No need to move the certs because they are for HOST0
# clean up certs that should not be copied off this host
find /tmp/${HOST2} -name ca.key -type f -delete
find /tmp/${HOST1} -name ca.key -type f -delete
```
1. Copy certificates and kubeadm configs.
The certificates have been generated and now they must be moved to their
respective hosts.
```sh
USER=ubuntu
HOST=${HOST1}
scp -r /tmp/${HOST}/* ${USER}@${HOST}:
ssh ${USER}@${HOST}
USER@HOST $ sudo -Es
root@HOST $ chown -R root:root pki
root@HOST $ mv pki /etc/kubernetes/
```
1. Ensure all expected files exist.
The complete list of required files on `$HOST0` is:
```
/tmp/${HOST0}
└── kubeadmcfg.yaml
---
/etc/kubernetes/pki
├── apiserver-etcd-client.crt
├── apiserver-etcd-client.key
└── etcd
├── ca.crt
├── ca.key
├── healthcheck-client.crt
├── healthcheck-client.key
├── peer.crt
├── peer.key
├── server.crt
└── server.key
```
On `$HOST1`:
```
$HOME
└── kubeadmcfg.yaml
---
/etc/kubernetes/pki
├── apiserver-etcd-client.crt
├── apiserver-etcd-client.key
└── etcd
├── ca.crt
├── healthcheck-client.crt
├── healthcheck-client.key
├── peer.crt
├── peer.key
├── server.crt
└── server.key
```
On `$HOST2`:
```
$HOME
└── kubeadmcfg.yaml
---
/etc/kubernetes/pki
├── apiserver-etcd-client.crt
├── apiserver-etcd-client.key
└── etcd
├── ca.crt
├── healthcheck-client.crt
├── healthcheck-client.key
├── peer.crt
├── peer.key
├── server.crt
└── server.key
```
1. Create the static pod manifests.
Now that the certificates and configs are in place it's time to create the
manifests. On each host run the `kubeadm` command to generate a static manifest
for etcd.
```sh
root@HOST0 $ kubeadm init phase etcd local --config=/tmp/${HOST0}/kubeadmcfg.yaml
root@HOST1 $ kubeadm init phase etcd local --config=$HOME/kubeadmcfg.yaml
root@HOST2 $ kubeadm init phase etcd local --config=$HOME/kubeadmcfg.yaml
```
1. Optional: Check the cluster health.
If `etcdctl` isn't available, you can run this tool inside a container image.
You would do that directly with your container runtime using a tool such as
`crictl run`, and not through Kubernetes.
```sh
ETCDCTL_API=3 etcdctl \
--cert /etc/kubernetes/pki/etcd/peer.crt \
--key /etc/kubernetes/pki/etcd/peer.key \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--endpoints https://${HOST0}:2379 endpoint health
...
https://[HOST0 IP]:2379 is healthy: successfully committed proposal: took = 16.283339ms
https://[HOST1 IP]:2379 is healthy: successfully committed proposal: took = 19.44402ms
https://[HOST2 IP]:2379 is healthy: successfully committed proposal: took = 35.926451ms
```
- Set `${HOST0}` to the IP address of the host you are testing.
## What's next
Once you have an etcd cluster with 3 working members, you can continue setting up a
highly available control plane using the
[external etcd method with kubeadm](/docs/setup/production-environment/tools/kubeadm/high-availability/).
---
reviewers:
- sig-cluster-lifecycle
title: Creating a cluster with kubeadm
content_type: task
weight: 30
---
<!-- overview -->
<img src="/images/kubeadm-stacked-color.png" align="right" width="150px"></img>
Using `kubeadm`, you can create a minimum viable Kubernetes cluster that conforms to best practices.
In fact, you can use `kubeadm` to set up a cluster that will pass the
[Kubernetes Conformance tests](/blog/2017/10/software-conformance-certification/).
`kubeadm` also supports other cluster lifecycle functions, such as
[bootstrap tokens](/docs/reference/access-authn-authz/bootstrap-tokens/) and cluster upgrades.
The `kubeadm` tool is good if you need:
- A simple way for you to try out Kubernetes, possibly for the first time.
- A way for existing users to automate setting up a cluster and test their application.
- A building block in other ecosystem and/or installer tools with a larger
scope.
You can install and use `kubeadm` on various machines: your laptop, a set
of cloud servers, a Raspberry Pi, and more. Whether you're deploying into the
cloud or on-premises, you can integrate `kubeadm` into provisioning systems such
as Ansible or Terraform.
## Before you begin
To follow this guide, you need:
- One or more machines running a deb/rpm-compatible Linux OS; for example: Ubuntu or CentOS.
- 2 GiB or more of RAM per machine--any less leaves little room for your apps.
- At least 2 CPUs on the machine that you use as a control-plane node.
- Full network connectivity among all machines in the cluster. You can use either a
public or a private network.
You also need to use a version of `kubeadm` that can deploy the version
of Kubernetes that you want to use in your new cluster.
[Kubernetes' version and version skew support policy](/docs/setup/release/version-skew-policy/#supported-versions)
applies to `kubeadm` as well as to Kubernetes overall.
Check that policy to learn about what versions of Kubernetes and `kubeadm`
are supported. This page is written for the current release of Kubernetes.
The `kubeadm` tool's overall feature state is General Availability (GA). Some sub-features are
still under active development. The implementation of creating the cluster may change
slightly as the tool evolves, but the overall implementation should be pretty stable.
Any commands under `kubeadm alpha` are, by definition, supported on an alpha level.
<!-- steps -->
## Objectives
* Install a single control-plane Kubernetes cluster
* Install a Pod network on the cluster so that your Pods can
talk to each other
## Instructions
### Preparing the hosts
#### Component installation
Install a container runtime
and kubeadm on all the hosts. For detailed instructions and other prerequisites, see
[Installing kubeadm](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).
If you have already installed kubeadm, see the first two steps of the
[Upgrading Linux nodes](/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes)
document for instructions on how to upgrade kubeadm.
When you upgrade, the kubelet restarts every few seconds as it waits in a crashloop for
kubeadm to tell it what to do. This crashloop is expected and normal.
After you initialize your control-plane, the kubelet runs normally.
#### Network setup
kubeadm, similarly to other Kubernetes components, tries to find a usable IP on
the network interfaces associated with a default gateway on a host. Such
an IP is then used for the advertising and/or listening performed by a component.
To find out what this IP is on a Linux host you can use:
```shell
ip route show # Look for a line starting with "default via"
```
If two or more default gateways are present on the host, a Kubernetes component will
try to use the first one it encounters that has a suitable global unicast IP address.
While making this choice, the exact ordering of gateways might vary between different
operating systems and kernel versions.
Kubernetes components do not accept a custom network interface as an option,
therefore a custom IP address must be passed as a flag to all component instances
that need such a custom configuration.
If the host does not have a default gateway and if a custom IP address is not passed
to a Kubernetes component, the component may exit with an error.
To configure the API server advertise address for control plane nodes created with both
`init` and `join`, the flag `--apiserver-advertise-address` can be used.
Preferably, this option can be set in the [kubeadm API](/docs/reference/config-api/kubeadm-config.v1beta4)
as `InitConfiguration.localAPIEndpoint` and `JoinConfiguration.controlPlane.localAPIEndpoint`.
For kubelets on all nodes, the `--node-ip` option can be passed in
`.nodeRegistration.kubeletExtraArgs` inside a kubeadm configuration file
(`InitConfiguration` or `JoinConfiguration`).
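As a sketch, both settings could be combined in an `InitConfiguration` like the following (the IP address is an example value):

```yaml
# A minimal sketch with an example address; replace 10.100.0.11 with the IP
# this node should advertise and use for kubelet traffic.
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "10.100.0.11"
nodeRegistration:
  kubeletExtraArgs:
  - name: "node-ip"
    value: "10.100.0.11"
```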
For dual-stack see
[Dual-stack support with kubeadm](/docs/setup/production-environment/tools/kubeadm/dual-stack-support).
The IP addresses that you assign to control plane components become part of their X.509 certificates'
subject alternative name fields. Changing these IP addresses would require
signing new certificates and restarting the affected components, so that the change in
certificate files is reflected. See
[Manual certificate renewal](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#manual-certificate-renewal)
for more details on this topic.
The Kubernetes project recommends against this approach (configuring all component instances
with custom IP addresses). Instead, the Kubernetes maintainers recommend setting up the host network,
so that the default gateway IP is the one that Kubernetes components auto-detect and use.
On Linux nodes, you can use commands such as `ip route` to configure networking; your operating
system might also provide higher level network management tools. If your node's default gateway
is a public IP address, you should configure packet filtering or other security measures that
protect the nodes and your cluster.
### Preparing the required container images
This step is optional and only applies in case you wish `kubeadm init` and `kubeadm join`
to not download the default container images which are hosted at `registry.k8s.io`.
Kubeadm has commands that can help you pre-pull the required images
when creating a cluster without an internet connection on its nodes.
See [Running kubeadm without an internet connection](/docs/reference/setup-tools/kubeadm/kubeadm-init#without-internet-connection)
for more details.
Kubeadm allows you to use a custom image repository for the required images.
See [Using custom images](/docs/reference/setup-tools/kubeadm/kubeadm-init#custom-images)
for more details.
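For instance, a custom repository can be set in `ClusterConfiguration` as in this sketch
(`registry.example.internal/k8s` is a placeholder for your own mirror):

```yaml
# A minimal sketch; the required control plane images are then pulled from
# the configured repository instead of registry.k8s.io.
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
imageRepository: "registry.example.internal/k8s"
```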
### Initializing your control-plane node
The control-plane node is the machine where the control plane components run, including
etcd (the cluster database) and the
API server (which the kubectl command line tool
communicates with).
1. (Recommended) If you have plans to upgrade this single control-plane `kubeadm` cluster
to [high availability](/docs/setup/production-environment/tools/kubeadm/high-availability/)
you should specify the `--control-plane-endpoint` to set the shared endpoint for all control-plane nodes.
Such an endpoint can be either a DNS name or an IP address of a load-balancer.
1. Choose a Pod network add-on, and verify whether it requires any arguments to
be passed to `kubeadm init`. Depending on which
third-party provider you choose, you might need to set the `--pod-network-cidr` to
a provider-specific value. See [Installing a Pod network add-on](#pod-network).
1. (Optional) `kubeadm` tries to detect the container runtime by using a list of well
known endpoints. To use a different container runtime or if there is more than one installed
on the provisioned node, specify the `--cri-socket` argument to `kubeadm`. See
[Installing a runtime](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime).
To initialize the control-plane node run:
```bash
kubeadm init <args>
```
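Alternatively, the choices from the steps above can be captured in a kubeadm configuration file and passed with
`kubeadm init --config <YOUR CONFIG YAML>`; the endpoint, Pod CIDR and CRI socket below are example values only:

```yaml
# A minimal sketch; adjust these values to match your load balancer or DNS
# entry, your Pod network add-on, and your container runtime.
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
nodeRegistration:
  criSocket: "unix:///var/run/containerd/containerd.sock"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
controlPlaneEndpoint: "cluster-endpoint:6443"
networking:
  podSubnet: "192.168.0.0/16"
```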
### Considerations about apiserver-advertise-address and ControlPlaneEndpoint
While `--apiserver-advertise-address` can be used to set the advertised address for this particular
control-plane node's API server, `--control-plane-endpoint` can be used to set the shared endpoint
for all control-plane nodes.
`--control-plane-endpoint` allows both IP addresses and DNS names that can map to IP addresses.
Please contact your network administrator to evaluate possible solutions with respect to such mapping.
Here is an example mapping:
```
192.168.0.102 cluster-endpoint
```
Where `192.168.0.102` is the IP address of this node and `cluster-endpoint` is a custom DNS name that maps to this IP.
This will allow you to pass `--control-plane-endpoint=cluster-endpoint` to `kubeadm init` and pass the same DNS name to
`kubeadm join`. Later you can modify `cluster-endpoint` to point to the address of your load-balancer in a
high availability scenario.
Turning a single control plane cluster created without `--control-plane-endpoint` into a highly available cluster
is not supported by kubeadm.
### More information
For more information about `kubeadm init` arguments, see the [kubeadm reference guide](/docs/reference/setup-tools/kubeadm/).
To configure `kubeadm init` with a configuration file see
[Using kubeadm init with a configuration file](/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file).
To customize control plane components, including optional IPv6 assignment to liveness probe
for control plane components and etcd server, provide extra arguments to each component as documented in
[custom arguments](/docs/setup/production-environment/tools/kubeadm/control-plane-flags/).
To reconfigure a cluster that has already been created see
[Reconfiguring a kubeadm cluster](/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure).
To run `kubeadm init` again, you must first [tear down the cluster](#tear-down).
If you join a node with a different architecture to your cluster, make sure that your deployed DaemonSets
have container image support for this architecture.
`kubeadm init` first runs a series of prechecks to ensure that the machine
is ready to run Kubernetes. These prechecks expose warnings and exit on errors. `kubeadm init`
then downloads and installs the cluster control plane components. This may take several minutes.
After it finishes you should see:
```none
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a Pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```
To make kubectl work for your non-root user, run these commands, which are
also part of the `kubeadm init` output:
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Alternatively, if you are the `root` user, you can run:
```bash
export KUBECONFIG=/etc/kubernetes/admin.conf
```
The kubeconfig file `admin.conf` that `kubeadm init` generates contains a certificate with
`Subject: O = kubeadm:cluster-admins, CN = kubernetes-admin`. The group `kubeadm:cluster-admins`
is bound to the built-in `cluster-admin` ClusterRole.
Do not share the `admin.conf` file with anyone.
`kubeadm init` generates another kubeconfig file `super-admin.conf` that contains a certificate with
`Subject: O = system:masters, CN = kubernetes-super-admin`.
`system:masters` is a break-glass, super user group that bypasses the authorization layer (for example RBAC).
Do not share the `super-admin.conf` file with anyone. It is recommended to move the file to a safe location.
See
[Generating kubeconfig files for additional users](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#kubeconfig-additional-users)
on how to use `kubeadm kubeconfig user` to generate kubeconfig files for additional users.
Make a record of the `kubeadm join` command that `kubeadm init` outputs. You
need this command to [join nodes to your cluster](#join-nodes).
The token is used for mutual authentication between the control-plane node and the joining
nodes. The token included here is secret. Keep it safe, because anyone with this
token can add authenticated nodes to your cluster. These tokens can be listed,
created, and deleted with the `kubeadm token` command. See the
[kubeadm reference guide](/docs/reference/setup-tools/kubeadm/kubeadm-token/).
### Installing a Pod network add-on {#pod-network}
This section contains important information about networking setup and
deployment order.
Read all of this advice carefully before proceeding.
**You must deploy a Container Network Interface
(CNI) based Pod network add-on so that your Pods can communicate with each other.
Cluster DNS (CoreDNS) will not start up before a network is installed.**
- Take care that your Pod network must not overlap with any of the host
networks: you are likely to see problems if there is any overlap.
(If you find a collision between your network plugin's preferred Pod
network and some of your host networks, you should think of a suitable
CIDR block to use instead, then use that during `kubeadm init` with
`--pod-network-cidr` and as a replacement in your network plugin's YAML).
- By default, `kubeadm` sets up your cluster to use and enforce use of
[RBAC](/docs/reference/access-authn-authz/rbac/) (role based access
control).
Make sure that your Pod network plugin supports RBAC, and so do any manifests
that you use to deploy it.
- If you want to use IPv6--either dual-stack, or single-stack IPv6 only
networking--for your cluster, make sure that your Pod network plugin
supports IPv6.
IPv6 support was added to CNI in [v0.6.0](https://github.com/containernetworking/cni/releases/tag/v0.6.0).
Kubeadm should be CNI agnostic and the validation of CNI providers is out of the scope of our current e2e testing.
If you find an issue related to a CNI plugin you should log a ticket in its respective issue
tracker instead of the kubeadm or kubernetes issue trackers.
Several external projects provide Kubernetes Pod networks using CNI, some of which also
support [Network Policy](/docs/concepts/services-networking/network-policies/).
See a list of add-ons that implement the
[Kubernetes networking model](/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-network-model).
Please refer to the [Installing Addons](/docs/concepts/cluster-administration/addons/#networking-and-network-policy)
page for a non-exhaustive list of networking addons supported by Kubernetes.
You can install a Pod network add-on with the following command on the
control-plane node or a node that has the kubeconfig credentials:
```bash
kubectl apply -f <add-on.yaml>
```
Only a few CNI plugins support Windows. More details and setup instructions can be found
in [Adding Windows worker nodes](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/#network-config).
You can install only one Pod network per cluster.
Once a Pod network has been installed, you can confirm that it is working by
checking that the CoreDNS Pod is `Running` in the output of `kubectl get pods --all-namespaces`.
And once the CoreDNS Pod is up and running, you can continue by joining your nodes.
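For example (the label selector below assumes the default CoreDNS deployment that kubeadm installs):
```bash
# Check all Pods across namespaces; CoreDNS should eventually be Running
kubectl get pods --all-namespaces
# Or watch only the CoreDNS Pods in the kube-system namespace
kubectl get pods -n kube-system -l k8s-app=kube-dns --watch
```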
If your network is not working or CoreDNS is not in the `Running` state, check out the
[troubleshooting guide](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/)
for `kubeadm`.
### Managed node labels
By default, kubeadm enables the [NodeRestriction](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)
admission controller that restricts what labels can be self-applied by kubelets on node registration.
The admission controller documentation covers what labels are permitted to be used with the kubelet `--node-labels` option.
The `node-role.kubernetes.io/control-plane` label is such a restricted label and kubeadm manually applies it using
a privileged client after a node has been created. To apply such a label manually, use `kubectl label`
with a privileged kubeconfig such as the kubeadm managed `/etc/kubernetes/admin.conf`.
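A minimal sketch of doing this by hand, where `<node-name>` is a placeholder for the node you want to label:
```bash
# Apply the restricted label using the privileged kubeadm-managed kubeconfig
kubectl --kubeconfig /etc/kubernetes/admin.conf \
  label nodes <node-name> node-role.kubernetes.io/control-plane=
```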
### Control plane node isolation
By default, your cluster will not schedule Pods on the control plane nodes for security
reasons. If you want to be able to schedule Pods on the control plane nodes,
for example for a single machine Kubernetes cluster, run:
```bash
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
```
The output will look something like:
```
node "test-01" untainted
...
```
This will remove the `node-role.kubernetes.io/control-plane:NoSchedule` taint
from any nodes that have it, including the control plane nodes, meaning that the
scheduler will then be able to schedule Pods everywhere.
Additionally, you can execute the following command to remove the
[`node.kubernetes.io/exclude-from-external-load-balancers`](/docs/reference/labels-annotations-taints/#node-kubernetes-io-exclude-from-external-load-balancers) label
from the control plane node, which excludes it from the list of backend servers:
```bash
kubectl label nodes --all node.kubernetes.io/exclude-from-external-load-balancers-
```
### Adding more control plane nodes
See [Creating Highly Available Clusters with kubeadm](/docs/setup/production-environment/tools/kubeadm/high-availability/)
for steps on creating a high availability kubeadm cluster by adding more control plane nodes.
### Adding worker nodes {#join-nodes}
The worker nodes are where your workloads run.
The following pages show how to add Linux and Windows worker nodes to the cluster by using
the `kubeadm join` command:
* [Adding Linux worker nodes](/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/)
* [Adding Windows worker nodes](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/)
### (Optional) Controlling your cluster from machines other than the control-plane node
In order to get a kubectl on some other computer (e.g. laptop) to talk to your
cluster, you need to copy the administrator kubeconfig file from your control-plane node
to your workstation like this:
```bash
scp root@<control-plane-host>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf get nodes
```
The example above assumes SSH access is enabled for root. If that is not the
case, you can copy the `admin.conf` file to be accessible by some other user
and `scp` using that other user instead.
The `admin.conf` file gives the user _superuser_ privileges over the cluster.
This file should be used sparingly. For normal users, it's recommended to
generate a unique credential to which you grant privileges. You can do
this with the `kubeadm kubeconfig user --client-name <CN>`
command. That command will print out a KubeConfig file to STDOUT which you
should save to a file and distribute to your user. After that, grant
privileges by using `kubectl create (cluster)rolebinding`.
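A sketch of that workflow, assuming a new user named `alice` (the user name, output file, and binding name are placeholders, and `view` is just one possible role):
```bash
# On the control-plane node: generate a kubeconfig for the new user
kubeadm kubeconfig user --client-name alice > alice.conf
# Grant the user read-only access cluster-wide (adjust the role to your needs)
kubectl create clusterrolebinding alice-view --clusterrole=view --user=alice
```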
### (Optional) Proxying API Server to localhost
If you want to connect to the API Server from outside the cluster, you can use
`kubectl proxy`:
```bash
scp root@<control-plane-host>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf proxy
```
You can now access the API Server locally at `http://localhost:8001/api/v1`
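For example, with the proxy running you can query the API from another terminal on the same machine (the path shown is just one illustration):
```bash
curl http://localhost:8001/api/v1/namespaces/kube-system/pods
```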
## Clean up {#tear-down}
If you used disposable servers for your cluster, for testing, you can
switch those off and do no further clean up. You can use
`kubectl config delete-cluster` to delete your local references to the
cluster.
However, if you want to deprovision your cluster more cleanly, you should
first [drain the node](/docs/reference/generated/kubectl/kubectl-commands#drain)
and make sure that the node is empty, then deconfigure the node.
### Remove the node
Talking to the control-plane node with the appropriate credentials, run:
```bash
kubectl drain <node name> --delete-emptydir-data --force --ignore-daemonsets
```
Before removing the node, reset the state installed by `kubeadm`:
```bash
kubeadm reset
```
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually:
```bash
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
```
If you want to reset the IPVS tables, you must run the following command:
```bash
ipvsadm -C
```
Now remove the node:
```bash
kubectl delete node <node name>
```
If you wish to start over, run `kubeadm init` or `kubeadm join` with the
appropriate arguments.
### Clean up the control plane
You can use `kubeadm reset` on the control plane host to trigger a best-effort
clean up.
See the [`kubeadm reset`](/docs/reference/setup-tools/kubeadm/kubeadm-reset/)
reference documentation for more information about this subcommand and its
options.
## Version skew policy {#version-skew-policy}
While kubeadm allows version skew against some components that it manages, it is recommended that you
match the kubeadm version with the versions of the control plane components, kube-proxy and kubelet.
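You can check the versions installed on a node, for example, with:
```bash
# Print the kubeadm and kubelet versions on this host
kubeadm version -o short
kubelet --version
```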
### kubeadm's skew against the Kubernetes version
kubeadm can be used with Kubernetes components that are the same version as kubeadm
or one version older. The Kubernetes version can be specified to kubeadm by using the
`--kubernetes-version` flag of `kubeadm init` or the
[`ClusterConfiguration.kubernetesVersion`](/docs/reference/config-api/kubeadm-config.v1beta4/)
field when using `--config`. This option will control the versions
of kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy.
Example:
* kubeadm is at
* `kubernetesVersion` must be at or
### kubeadm's skew against the kubelet
Similarly to the Kubernetes version, kubeadm can be used with a kubelet version that is
the same version as kubeadm or three versions older.
Example:
* kubeadm is at
* kubelet on the host must be at , ,
or
### kubeadm's skew against kubeadm
There are certain limitations on how kubeadm commands can operate on existing nodes or whole clusters
managed by kubeadm.
If new nodes are joined to the cluster, the kubeadm binary used for `kubeadm join` must match
the last version of kubeadm used to either create the cluster with `kubeadm init` or to upgrade
the same node with `kubeadm upgrade`. Similar rules apply to the rest of the kubeadm commands
with the exception of `kubeadm upgrade`.
Example for `kubeadm join`:
* kubeadm version was used to create a cluster with `kubeadm init`
* Joining nodes must use a kubeadm binary that is at version
Nodes that are being upgraded must use a version of kubeadm that is the same MINOR
version or one MINOR version newer than the version of kubeadm used for managing the
node.
Example for `kubeadm upgrade`:
* kubeadm version was used to create or upgrade the node
* The version of kubeadm used for upgrading the node must be at
or
To learn more about the version skew between the different Kubernetes components, see
the [Version Skew Policy](/releases/version-skew-policy/).
## Limitations {#limitations}
### Cluster resilience {#resilience}
The cluster created here has a single control-plane node, with a single etcd database
running on it. This means that if the control-plane node fails, your cluster may lose
data and may need to be recreated from scratch.
Workarounds:
* Regularly [back up etcd](https://etcd.io/docs/v3.5/op-guide/recovery/); a sample snapshot command is sketched after this list. The
etcd data directory configured by kubeadm is at `/var/lib/etcd` on the control-plane node.
* Use multiple control-plane nodes. You can read
[Options for Highly Available topology](/docs/setup/production-environment/tools/kubeadm/ha-topology/) to pick a cluster
topology that provides [high-availability](/docs/setup/production-environment/tools/kubeadm/high-availability/).
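As a sketch of the first workaround, the following assumes a stacked etcd member created by kubeadm on this control-plane node, that `etcdctl` is installed, and that the snapshot path is only an example destination:
```bash
sudo ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
```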
### Platform compatibility {#multi-platform}
kubeadm deb/rpm packages and binaries are built for amd64, arm (32-bit), arm64, ppc64le, and s390x
following the [multi-platform proposal](https://git.k8s.io/design-proposals-archive/multi-platform.md).
Multiplatform container images for the control plane and addons are also supported since v1.12.
Only some of the network providers offer solutions for all platforms. Please consult the list of
network providers above or the documentation from each provider to figure out whether the provider
supports your chosen platform.
## Troubleshooting {#troubleshooting}
If you are running into difficulties with kubeadm, please consult our
[troubleshooting docs](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/).
<!-- discussion -->
##
* Verify that your cluster is running properly with [Sonobuoy](https://github.com/heptio/sonobuoy)
* <a id="lifecycle" />See [Upgrading kubeadm clusters](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
for details about upgrading your cluster using `kubeadm`.
* Learn about advanced `kubeadm` usage in the [kubeadm reference documentation](/docs/reference/setup-tools/kubeadm/)
* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/reference/kubectl/).
* See the [Cluster Networking](/docs/concepts/cluster-administration/networking/) page for a bigger list
of Pod network add-ons.
* <a id="other-addons" />See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to
explore other add-ons, including tools for logging, monitoring, network policy, visualization &
control of your Kubernetes cluster.
* Configure how your cluster handles logs for cluster events and from
applications running in Pods.
See [Logging Architecture](/docs/concepts/cluster-administration/logging/) for
an overview of what is involved.
### Feedback {#feedback}
* For bugs, visit the [kubeadm GitHub issue tracker](https://github.com/kubernetes/kubeadm/issues)
* For support, visit the
[#kubeadm](https://kubernetes.slack.com/messages/kubeadm/) Slack channel
* General SIG Cluster Lifecycle development Slack channel:
[#sig-cluster-lifecycle](https://kubernetes.slack.com/messages/sig-cluster-lifecycle/)
* SIG Cluster Lifecycle [SIG information](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle#readme)
* SIG Cluster Lifecycle mailing list:
[kubernetes-sig-cluster-lifecycle](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)
---
title: Dual-stack support with kubeadm
content_type: task
weight: 100
min-kubernetes-server-version: 1.21
---
<!-- overview -->
Your Kubernetes cluster includes [dual-stack](/docs/concepts/services-networking/dual-stack/)
networking, which means that cluster networking lets you use either address family.
In a cluster, the control plane can assign both an IPv4 address and an IPv6 address to a single
Pod or a Service.
<!-- body -->
##
You need to have installed the kubeadm tool,
following the steps from [Installing kubeadm](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).
For each server that you want to use as a node,
make sure it allows IPv6 forwarding.
### Enable IPv6 packet forwarding {#prerequisite-ipv6-forwarding}
To check if IPv6 packet forwarding is enabled:
```bash
sysctl net.ipv6.conf.all.forwarding
```
If the output is `net.ipv6.conf.all.forwarding = 1` it is already enabled.
Otherwise it is not enabled yet.
To manually enable IPv6 packet forwarding:
```bash
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee -a /etc/sysctl.d/k8s.conf
net.ipv6.conf.all.forwarding = 1
EOF
# Apply sysctl params without reboot
sudo sysctl --system
```
You need to have an IPv4 and an IPv6 address range to use. Cluster operators typically
use private address ranges for IPv4. For IPv6, a cluster operator typically chooses a global
unicast address block from within `2000::/3`, using a range that is assigned to the operator.
You don't have to route the cluster's IP address ranges to the public internet.
The size of the IP address allocations should be suitable for the number of Pods and
Services that you are planning to run.
If you are upgrading an existing cluster with the `kubeadm upgrade` command,
`kubeadm` does not support making modifications to the pod IP address range
(“cluster CIDR”) nor to the cluster's Service address range (“Service CIDR”).
### Create a dual-stack cluster
To create a dual-stack cluster with `kubeadm init` you can pass command line arguments
similar to the following example:
```shell
# These address ranges are examples
kubeadm init --pod-network-cidr=10.244.0.0/16,2001:db8:42:0::/56 --service-cidr=10.96.0.0/16,2001:db8:42:1::/112
```
To make things clearer, here is an example kubeadm
[configuration file](/docs/reference/config-api/kubeadm-config.v1beta4/)
`kubeadm-config.yaml` for the primary dual-stack control plane node.
```yaml
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
networking:
podSubnet: 10.244.0.0/16,2001:db8:42:0::/56
serviceSubnet: 10.96.0.0/16,2001:db8:42:1::/112
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: "10.100.0.1"
bindPort: 6443
nodeRegistration:
kubeletExtraArgs:
- name: "node-ip"
value: "10.100.0.2,fd00:1:2:3::2"
```
`advertiseAddress` in InitConfiguration specifies the IP address that the API Server
will advertise it is listening on. The value of `advertiseAddress` equals the
`--apiserver-advertise-address` flag of `kubeadm init`.
Run kubeadm to initiate the dual-stack control plane node:
```shell
kubeadm init --config=kubeadm-config.yaml
```
The kube-controller-manager flags `--node-cidr-mask-size-ipv4|--node-cidr-mask-size-ipv6`
are set with default values. See [configure IPv4/IPv6 dual stack](/docs/concepts/services-networking/dual-stack#configure-ipv4-ipv6-dual-stack).
The `--apiserver-advertise-address` flag does not support dual-stack.
### Join a node to dual-stack cluster
Before joining a node, make sure that the node has an IPv6-routable network interface and allows IPv6 forwarding.
Here is an example kubeadm [configuration file](/docs/reference/config-api/kubeadm-config.v1beta4/)
`kubeadm-config.yaml` for joining a worker node to the cluster.
```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: JoinConfiguration
discovery:
bootstrapToken:
apiServerEndpoint: 10.100.0.1:6443
token: "clvldh.vjjwg16ucnhp94qr"
caCertHashes:
- "sha256:a4863cde706cfc580a439f842cc65d5ef112b7b2be31628513a9881cf0d9fe0e"
# change auth info above to match the actual token and CA certificate hash for your cluster
nodeRegistration:
kubeletExtraArgs:
- name: "node-ip"
value: "10.100.0.2,fd00:1:2:3::3"
```
Also, here is an example kubeadm [configuration file](/docs/reference/config-api/kubeadm-config.v1beta4/)
`kubeadm-config.yaml` for joining another control plane node to the cluster.
```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: JoinConfiguration
controlPlane:
localAPIEndpoint:
advertiseAddress: "10.100.0.2"
bindPort: 6443
discovery:
bootstrapToken:
apiServerEndpoint: 10.100.0.1:6443
token: "clvldh.vjjwg16ucnhp94qr"
caCertHashes:
- "sha256:a4863cde706cfc580a439f842cc65d5ef112b7b2be31628513a9881cf0d9fe0e"
# change auth info above to match the actual token and CA certificate hash for your cluster
nodeRegistration:
kubeletExtraArgs:
- name: "node-ip"
value: "10.100.0.2,fd00:1:2:3::4"
```
`advertiseAddress` in JoinConfiguration.controlPlane specifies the IP address that the
API Server will advertise it is listening on. The value of `advertiseAddress` equals
the `--apiserver-advertise-address` flag of `kubeadm join`.
```shell
kubeadm join --config=kubeadm-config.yaml
```
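After the node has joined, you can do a quick check of the Pod CIDRs allocated to it (assuming the default kubeadm setup where the controller manager allocates node CIDRs; `<node-name>` is a placeholder). On a dual-stack cluster you should see both an IPv4 and an IPv6 range:
```shell
kubectl get node <node-name> -o jsonpath='{.spec.podCIDRs}'
```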
### Create a single-stack cluster
Dual-stack support doesn't mean that you need to use dual-stack addressing.
You can deploy a single-stack cluster that has the dual-stack networking feature enabled.
To make things clearer, here is an example kubeadm
[configuration file](/docs/reference/config-api/kubeadm-config.v1beta4/)
`kubeadm-config.yaml` for the single-stack control plane node.
```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
networking:
podSubnet: 10.244.0.0/16
serviceSubnet: 10.96.0.0/16
```
##
* [Validate IPv4/IPv6 dual-stack](/docs/tasks/network/validate-dual-stack) networking
* Read about [Dual-stack](/docs/concepts/services-networking/dual-stack/) cluster networking
* Learn more about the kubeadm [configuration format](/docs/reference/config-api/kubeadm-config.v1beta4/)
---
reviewers:
- sig-cluster-lifecycle
title: Configuring each kubelet in your cluster using kubeadm
content_type: concept
weight: 80
---
<!-- overview -->
The lifecycle of the kubeadm CLI tool is decoupled from the
[kubelet](/docs/reference/command-line-tools-reference/kubelet), which is a daemon that runs
on each node within the Kubernetes cluster. The kubeadm CLI tool is executed by the user when Kubernetes is
initialized or upgraded, whereas the kubelet is always running in the background.
Since the kubelet is a daemon, it needs to be maintained by some kind of an init
system or service manager. When the kubelet is installed using DEBs or RPMs,
systemd is configured to manage the kubelet. You can use a different service
manager instead, but you need to configure it manually.
Some kubelet configuration details need to be the same across all kubelets involved in the cluster, while
other configuration aspects need to be set on a per-kubelet basis to accommodate the different
characteristics of a given machine (such as OS, storage, and networking). You can manage the configuration
of your kubelets manually, but kubeadm now provides a `KubeletConfiguration` API type for
[managing your kubelet configurations centrally](#configure-kubelets-using-kubeadm).
<!-- body -->
## Kubelet configuration patterns
The following sections describe patterns to kubelet configuration that are simplified by
using kubeadm, rather than managing the kubelet configuration for each Node manually.
### Propagating cluster-level configuration to each kubelet
You can provide the kubelet with default values to be used by `kubeadm init` and `kubeadm join`
commands. Interesting examples include using a different container runtime or setting the default subnet
used by services.
If you want your services to use the subnet `10.96.0.0/12` as the default for services, you can pass
the `--service-cidr` parameter to kubeadm:
```bash
kubeadm init --service-cidr 10.96.0.0/12
```
Virtual IPs for services are now allocated from this subnet. You also need to set the DNS address used
by the kubelet, using the `--cluster-dns` flag. This setting needs to be the same for every kubelet
on every manager and Node in the cluster. The kubelet provides a versioned, structured API object
that can configure most parameters in the kubelet and push out this configuration to each running
kubelet in the cluster. This object is called
[`KubeletConfiguration`](/docs/reference/config-api/kubelet-config.v1beta1/).
The `KubeletConfiguration` allows the user to specify flags such as the cluster DNS IP addresses expressed as
a list of values to a camelCased key, illustrated by the following example:
```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
clusterDNS:
- 10.96.0.10
```
For more details on the `KubeletConfiguration` have a look at [this section](#configure-kubelets-using-kubeadm).
### Providing instance-specific configuration details
Some hosts require specific kubelet configurations due to differences in hardware, operating system,
networking, or other host-specific parameters. The following list provides a few examples.
- The path to the DNS resolution file, as specified by the `--resolv-conf` kubelet
configuration flag, may differ among operating systems, or depending on whether you are using
`systemd-resolved`. If this path is wrong, DNS resolution will fail on the Node whose kubelet
is configured incorrectly.
- The Node API object `.metadata.name` is set to the machine's hostname by default,
unless you are using a cloud provider. You can use the `--hostname-override` flag to override the
default behavior if you need to specify a Node name different from the machine's hostname.
- Currently, the kubelet cannot automatically detect the cgroup driver used by the container runtime,
but the value of `--cgroup-driver` must match the cgroup driver used by the container runtime to ensure
the health of the kubelet.
- To specify the container runtime you must set its endpoint with the
`--container-runtime-endpoint=<path>` flag.
The recommended way of applying such instance-specific configuration is by using
[`KubeletConfiguration` patches](/docs/setup/production-environment/tools/kubeadm/control-plane-flags#patches).
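As an illustration only, such a patch could be a file placed in a directory that you pass to `kubeadm init` or `kubeadm join` with `--patches`; the directory path and the `maxPods` value below are hypothetical, and the file name follows the `kubeletconfiguration*.yaml` target convention described on the linked page:
```yaml
# /home/user/kubeadm-patches/kubeletconfiguration.yaml (example path)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 120
```
You would then run, for example, `kubeadm join --patches /home/user/kubeadm-patches ...` on that node.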
## Configure kubelets using kubeadm
It is possible to configure the kubelet that kubeadm will start if a custom
[`KubeletConfiguration`](/docs/reference/config-api/kubelet-config.v1beta1/)
API object is passed with a configuration file like so `kubeadm ... --config some-config-file.yaml`.
By calling `kubeadm config print init-defaults --component-configs KubeletConfiguration` you can
see all the default values for this structure.
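A minimal sketch of that flow (the file name is arbitrary):
```bash
# Print the defaults, including the KubeletConfiguration, into a file,
# edit the file to your needs, then pass it to kubeadm init
kubeadm config print init-defaults --component-configs KubeletConfiguration > kubeadm-config.yaml
kubeadm init --config kubeadm-config.yaml
```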
It is also possible to apply instance-specific patches over the base `KubeletConfiguration`.
Have a look at [Customizing the kubelet](/docs/setup/production-environment/tools/kubeadm/control-plane-flags#customizing-the-kubelet)
for more details.
### Workflow when using `kubeadm init`
When you call `kubeadm init`, the kubelet configuration is marshalled to disk
at `/var/lib/kubelet/config.yaml`, and also uploaded to a `kubelet-config` ConfigMap in the `kube-system`
namespace of the cluster. A kubelet configuration file is also written to `/etc/kubernetes/kubelet.conf`
with the baseline cluster-wide configuration for all kubelets in the cluster. This configuration file
points to the client certificates that allow the kubelet to communicate with the API server. This
addresses the need to
[propagate cluster-level configuration to each kubelet](#propagating-cluster-level-configuration-to-each-kubelet).
To address the second pattern of
[providing instance-specific configuration details](#providing-instance-specific-configuration-details),
kubeadm writes an environment file to `/var/lib/kubelet/kubeadm-flags.env`, which contains a list of
flags to pass to the kubelet when it starts. The flags are presented in the file like this:
```bash
KUBELET_KUBEADM_ARGS="--flag1=value1 --flag2=value2 ..."
```
In addition to the flags used when starting the kubelet, the file also contains dynamic
parameters such as the cgroup driver and whether to use a different container runtime socket
(`--cri-socket`).
After marshalling these two files to disk, kubeadm attempts to run the following two
commands, if you are using systemd:
```bash
systemctl daemon-reload && systemctl restart kubelet
```
If the reload and restart are successful, the normal `kubeadm init` workflow continues.
### Workflow when using `kubeadm join`
When you run `kubeadm join`, kubeadm uses the Bootstrap Token credential to perform
a TLS bootstrap, which fetches the credential needed to download the
`kubelet-config` ConfigMap and writes it to `/var/lib/kubelet/config.yaml`. The dynamic
environment file is generated in exactly the same way as `kubeadm init`.
Next, `kubeadm` runs the following two commands to load the new configuration into the kubelet:
```bash
systemctl daemon-reload && systemctl restart kubelet
```
After the kubelet loads the new configuration, kubeadm writes the
`/etc/kubernetes/bootstrap-kubelet.conf` KubeConfig file, which contains a CA certificate and Bootstrap
Token. These are used by the kubelet to perform the TLS Bootstrap and obtain a unique
credential, which is stored in `/etc/kubernetes/kubelet.conf`.
When the `/etc/kubernetes/kubelet.conf` file is written, the kubelet has finished performing the TLS Bootstrap.
Kubeadm deletes the `/etc/kubernetes/bootstrap-kubelet.conf` file after completing the TLS Bootstrap.
## The kubelet drop-in file for systemd
`kubeadm` ships with configuration for how systemd should run the kubelet.
Note that the kubeadm CLI command never touches this drop-in file.
This configuration file installed by the `kubeadm`
[package](https://github.com/kubernetes/release/blob/cd53840/cmd/krel/templates/latest/kubeadm/10-kubeadm.conf) is written to
`/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf` and is used by systemd.
It augments the basic
[`kubelet.service`](https://github.com/kubernetes/release/blob/cd53840/cmd/krel/templates/latest/kubelet/kubelet.service).
If you want to override that further, you can make a directory `/etc/systemd/system/kubelet.service.d/`
(not `/usr/lib/systemd/system/kubelet.service.d/`) and put your own customizations into a file there.
For example, you might add a new local file `/etc/systemd/system/kubelet.service.d/local-overrides.conf`
to override the unit settings configured by `kubeadm`.
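A minimal sketch of such an override follows; the settings shown are only an illustration.
```bash
sudo mkdir -p /etc/systemd/system/kubelet.service.d
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service.d/local-overrides.conf
[Service]
# Example override: adjust the kubelet restart behaviour on this machine
Restart=always
RestartSec=5
EOF
sudo systemctl daemon-reload && sudo systemctl restart kubelet
```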
Here is what you are likely to find in `/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf`:
The contents below are just an example. If you don't want to use a package manager,
follow the guide outlined in the [Without a package manager](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#k8s-install-2)
section.
```none
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generate at runtime, populating
# the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably,
# the user should use the .NodeRegistration.KubeletExtraArgs object in the configuration files instead.
# KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
```
This file specifies the default locations for all of the files managed by kubeadm for the kubelet.
- The KubeConfig file to use for the TLS Bootstrap is `/etc/kubernetes/bootstrap-kubelet.conf`,
but it is only used if `/etc/kubernetes/kubelet.conf` does not exist.
- The KubeConfig file with the unique kubelet identity is `/etc/kubernetes/kubelet.conf`.
- The file containing the kubelet's ComponentConfig is `/var/lib/kubelet/config.yaml`.
- The dynamic environment file that contains `KUBELET_KUBEADM_ARGS` is sourced from `/var/lib/kubelet/kubeadm-flags.env`.
- The file that can contain user-specified flag overrides with `KUBELET_EXTRA_ARGS` is sourced from
`/etc/default/kubelet` (for DEBs), or `/etc/sysconfig/kubelet` (for RPMs). `KUBELET_EXTRA_ARGS`
is last in the flag chain and has the highest priority in the event of conflicting settings.
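For example, a last-resort override on a DEB-based host might look like the sketch below
(the `--node-ip` value is only an illustration; prefer the kubeadm configuration where possible):
```bash
# Use /etc/sysconfig/kubelet instead on RPM-based hosts; this overwrites any existing contents
echo 'KUBELET_EXTRA_ARGS="--node-ip=10.0.0.5"' | sudo tee /etc/default/kubelet
sudo systemctl restart kubelet
```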
## Kubernetes binaries and package contents
The DEB and RPM packages shipped with the Kubernetes releases are:
| Package name | Description |
|--------------|-------------|
| `kubeadm` | Installs the `/usr/bin/kubeadm` CLI tool and the [kubelet drop-in file](#the-kubelet-drop-in-file-for-systemd) for the kubelet. |
| `kubelet` | Installs the `/usr/bin/kubelet` binary. |
| `kubectl` | Installs the `/usr/bin/kubectl` binary. |
| `cri-tools` | Installs the `/usr/bin/crictl` binary from the [cri-tools git repository](https://github.com/kubernetes-sigs/cri-tools). |
| `kubernetes-cni` | Installs the `/opt/cni/bin` binaries from the [plugins git repository](https://github.com/containernetworking/plugins). | | kubernetes setup | reviewers sig cluster lifecycle title Configuring each kubelet in your cluster using kubeadm content type concept weight 80 overview The lifecycle of the kubeadm CLI tool is decoupled from the kubelet docs reference command line tools reference kubelet which is a daemon that runs on each node within the Kubernetes cluster The kubeadm CLI tool is executed by the user when Kubernetes is initialized or upgraded whereas the kubelet is always running in the background Since the kubelet is a daemon it needs to be maintained by some kind of an init system or service manager When the kubelet is installed using DEBs or RPMs systemd is configured to manage the kubelet You can use a different service manager instead but you need to configure it manually Some kubelet configuration details need to be the same across all kubelets involved in the cluster while other configuration aspects need to be set on a per kubelet basis to accommodate the different characteristics of a given machine such as OS storage and networking You can manage the configuration of your kubelets manually but kubeadm now provides a KubeletConfiguration API type for managing your kubelet configurations centrally configure kubelets using kubeadm body Kubelet configuration patterns The following sections describe patterns to kubelet configuration that are simplified by using kubeadm rather than managing the kubelet configuration for each Node manually Propagating cluster level configuration to each kubelet You can provide the kubelet with default values to be used by kubeadm init and kubeadm join commands Interesting examples include using a different container runtime or setting the default subnet used by services If you want your services to use the subnet 10 96 0 0 12 as the default for services you can pass the service cidr parameter to kubeadm bash kubeadm init service cidr 10 96 0 0 12 Virtual IPs for services are now allocated from this subnet You also need to set the DNS address used by the kubelet using the cluster dns flag This setting needs to be the same for every kubelet on every manager and Node in the cluster The kubelet provides a versioned structured API object that can configure most parameters in the kubelet and push out this configuration to each running kubelet in the cluster This object is called KubeletConfiguration docs reference config api kubelet config v1beta1 The KubeletConfiguration allows the user to specify flags such as the cluster DNS IP addresses expressed as a list of values to a camelCased key illustrated by the following example yaml apiVersion kubelet config k8s io v1beta1 kind KubeletConfiguration clusterDNS 10 96 0 10 For more details on the KubeletConfiguration have a look at this section configure kubelets using kubeadm Providing instance specific configuration details Some hosts require specific kubelet configurations due to differences in hardware operating system networking or other host specific parameters The following list provides a few examples The path to the DNS resolution file as specified by the resolv conf kubelet configuration flag may differ among operating systems or depending on whether you are using systemd resolved If this path is wrong DNS resolution will fail on the Node whose kubelet is configured incorrectly The Node API object metadata name is set to the machine s hostname by default unless you are using a 
---
reviewers:
- sig-cluster-lifecycle
title: Creating Highly Available Clusters with kubeadm
content_type: task
weight: 60
---
<!-- overview -->
This page explains two different approaches to setting up a highly available Kubernetes
cluster using kubeadm:
- With stacked control plane nodes. This approach requires less infrastructure. The etcd members
and control plane nodes are co-located.
- With an external etcd cluster. This approach requires more infrastructure. The
control plane nodes and etcd members are separated.
Before proceeding, you should carefully consider which approach best meets the needs of your applications
and environment. [Options for Highly Available topology](/docs/setup/production-environment/tools/kubeadm/ha-topology/)
outlines the advantages and disadvantages of each.
If you encounter issues with setting up the HA cluster, please report these
in the kubeadm [issue tracker](https://github.com/kubernetes/kubeadm/issues/new).
See also the [upgrade documentation](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/).
This page does not address running your cluster on a cloud provider. In a cloud
environment, neither approach documented here works with Service objects of type
LoadBalancer, or with dynamic PersistentVolumes.
## Before you begin
The prerequisites depend on which topology you have selected for your cluster's
control plane:
<!--
note to reviewers: these prerequisites should match the start of the
external etc tab
-->
You need:
- Three or more machines that meet [kubeadm's minimum requirements](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin) for
the control-plane nodes. Having an odd number of control plane nodes can help
with leader selection in the case of machine or zone failure.
  - including a container runtime, already set up and working
- Three or more machines that meet [kubeadm's minimum
requirements](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin) for the workers
- including a container runtime, already set up and working
- Full network connectivity between all machines in the cluster (public or
private network)
- Superuser privileges on all machines using `sudo`
- You can use a different tool; this guide uses `sudo` in the examples.
- SSH access from one device to all nodes in the system
- `kubeadm` and `kubelet` already installed on all machines.
_See [Stacked etcd topology](/docs/setup/production-environment/tools/kubeadm/ha-topology/#stacked-etcd-topology) for context._
<!--
note to reviewers: these prerequisites should match the start of the
stacked etc tab
-->
You need:
- Three or more machines that meet [kubeadm's minimum requirements](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin) for
the control-plane nodes. Having an odd number of control plane nodes can help
with leader selection in the case of machine or zone failure.
  - including a container runtime, already set up and working
- Three or more machines that meet [kubeadm's minimum
requirements](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin) for the workers
- including a container runtime, already set up and working
- Full network connectivity between all machines in the cluster (public or
private network)
- Superuser privileges on all machines using `sudo`
- You can use a different tool; this guide uses `sudo` in the examples.
- SSH access from one device to all nodes in the system
- `kubeadm` and `kubelet` already installed on all machines.
<!-- end of shared prerequisites -->
And you also need:
- Three or more additional machines that will become etcd cluster members.
Having an odd number of members in the etcd cluster is a requirement for achieving
optimal voting quorum.
- These machines again need to have `kubeadm` and `kubelet` installed.
- These machines also require a container runtime, that is already set up and working.
_See [External etcd topology](/docs/setup/production-environment/tools/kubeadm/ha-topology/#external-etcd-topology) for context._
### Container images
Each host should have access to read and fetch images from the Kubernetes container image registry,
`registry.k8s.io`. If you want to deploy a highly-available cluster where the hosts do not have
access to pull images, this is possible. You must ensure by some other means that the correct
container images are already available on the relevant hosts.
### Command line interface {#kubectl}
To manage Kubernetes once your cluster is set up, you should
[install kubectl](/docs/tasks/tools/#kubectl) on your PC. It is also useful
to install the `kubectl` tool on each control plane node, as this can be
helpful for troubleshooting.
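For example, one way to install `kubectl` on a Linux amd64 machine (see the linked page for
the authoritative, up-to-date steps for your platform):
```bash
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client
```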
<!-- steps -->
## First steps for both methods
### Create load balancer for kube-apiserver
There are many configurations for load balancers. The following example is only one
option. Your cluster requirements may need a different configuration.
1. Create a kube-apiserver load balancer with a name that resolves to DNS.
- In a cloud environment you should place your control plane nodes behind a TCP
forwarding load balancer. This load balancer distributes traffic to all
healthy control plane nodes in its target list. The health check for
an apiserver is a TCP check on the port the kube-apiserver listens on
(default value `:6443`).
- It is not recommended to use an IP address directly in a cloud environment.
- The load balancer must be able to communicate with all control plane nodes
on the apiserver port. It must also allow incoming traffic on its
listening port.
- Make sure the address of the load balancer always matches
the address of kubeadm's `ControlPlaneEndpoint`.
- Read the [Options for Software Load Balancing](https://git.k8s.io/kubeadm/docs/ha-considerations.md#options-for-software-load-balancing)
guide for more details.
1. Add the first control plane node to the load balancer, and test the
connection:
```shell
nc -v <LOAD_BALANCER_IP> <PORT>
```
A connection refused error is expected because the API server is not yet
running. A timeout, however, means the load balancer cannot communicate
with the control plane node. If a timeout occurs, reconfigure the load
balancer to communicate with the control plane node.
1. Add the remaining control plane nodes to the load balancer target group.
## Stacked control plane and etcd nodes
### Steps for the first control plane node
1. Initialize the control plane:
```sh
sudo kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" --upload-certs
```
- You can use the `--kubernetes-version` flag to set the Kubernetes version to use.
It is recommended that the versions of kubeadm, kubelet, kubectl and Kubernetes match.
- The `--control-plane-endpoint` flag should be set to the address or DNS and port of the load balancer.
- The `--upload-certs` flag is used to upload the certificates that should be shared
across all the control-plane instances to the cluster. If instead, you prefer to copy certs across
control-plane nodes manually or using automation tools, please remove this flag and refer to [Manual
certificate distribution](#manual-certs) section below.
The `kubeadm init` flags `--config` and `--certificate-key` cannot be mixed, therefore if you want
to use the [kubeadm configuration](/docs/reference/config-api/kubeadm-config.v1beta4/)
you must add the `certificateKey` field in the appropriate config locations
(under `InitConfiguration` and `JoinConfiguration: controlPlane`).
Some CNI network plugins require additional configuration, for example specifying the pod IP CIDR, while others do not.
See the [CNI network documentation](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network).
To add a pod CIDR, pass the flag `--pod-network-cidr`, or if you are using a kubeadm configuration file
set the `podSubnet` field under the `networking` object of `ClusterConfiguration` (a combined sketch
covering both `certificateKey` and `podSubnet` appears at the end of this section).
The output looks similar to:
```sh
...
You can now join any number of control-plane node by running the following command on each as a root:
kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use kubeadm init phase upload-certs to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866
```
- Copy this output to a text file. You will need it later to join control plane and worker nodes to
the cluster.
- When `--upload-certs` is used with `kubeadm init`, the certificates of the primary control plane
are encrypted and uploaded in the `kubeadm-certs` Secret.
- To re-upload the certificates and generate a new decryption key, use the following command on a
control plane
node that is already joined to the cluster:
```sh
sudo kubeadm init phase upload-certs --upload-certs
```
- You can also specify a custom `--certificate-key` during `init` that can later be used by `join`.
To generate such a key you can use the following command:
```sh
kubeadm certs certificate-key
```
The certificate key is a hex encoded string that is an AES key of size 32 bytes.
The `kubeadm-certs` Secret and the decryption key expire after two hours.
As stated in the command output, the certificate key gives access to cluster sensitive data, keep it secret!
1. Apply the CNI plugin of your choice:
[Follow these instructions](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network)
to install the CNI provider. Make sure the configuration corresponds to the Pod CIDR specified in the
kubeadm configuration file (if applicable).
You must pick a network plugin that suits your use case and deploy it before you move on to the next step.
If you don't do this, you will not be able to launch your cluster properly.
1. Type the following and watch the pods of the control plane components get started:
```sh
kubectl get pod -n kube-system -w
```
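As mentioned in the notes above, if you prefer a kubeadm configuration file over command-line
flags, the following sketch sets both `certificateKey` and `podSubnet`; all values are
placeholders, and the pod subnet must match what your CNI plugin expects.
```bash
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
# For example, a value generated with "kubeadm certs certificate-key"
certificateKey: "f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
networking:
  podSubnet: "10.244.0.0/16"
EOF
sudo kubeadm init --config kubeadm-config.yaml --upload-certs
```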
### Steps for the rest of the control plane nodes
For each additional control plane node you should:
1. Execute the join command that was previously given to you by the `kubeadm init` output on the first node.
It should look something like this:
```sh
sudo kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07
```
- The `--control-plane` flag tells `kubeadm join` to create a new control plane.
- The `--certificate-key ...` will cause the control plane certificates to be downloaded
from the `kubeadm-certs` Secret in the cluster and be decrypted using the given key.
You can join multiple control-plane nodes in parallel.
## External etcd nodes
Setting up a cluster with external etcd nodes is similar to the procedure used for stacked etcd
with the exception that you should set up etcd first, and you should pass the etcd information
in the kubeadm config file.
### Set up the etcd cluster
1. Follow these [instructions](/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/) to set up the etcd cluster.
1. Set up SSH as described [here](#manual-certs).
1. Copy the following files from any etcd node in the cluster to the first control plane node:
```sh
export CONTROL_PLANE="[email protected]"
scp /etc/kubernetes/pki/etcd/ca.crt "${CONTROL_PLANE}":
scp /etc/kubernetes/pki/apiserver-etcd-client.crt "${CONTROL_PLANE}":
scp /etc/kubernetes/pki/apiserver-etcd-client.key "${CONTROL_PLANE}":
```
- Replace the value of `CONTROL_PLANE` with the `user@host` of the first control-plane node.
### Set up the first control plane node
1. Create a file called `kubeadm-config.yaml` with the following contents:
```yaml
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" # change this (see below)
etcd:
external:
endpoints:
- https://ETCD_0_IP:2379 # change ETCD_0_IP appropriately
- https://ETCD_1_IP:2379 # change ETCD_1_IP appropriately
- https://ETCD_2_IP:2379 # change ETCD_2_IP appropriately
caFile: /etc/kubernetes/pki/etcd/ca.crt
certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
```
The difference between stacked etcd and external etcd here is that the external etcd setup requires
a configuration file with the etcd endpoints under the `external` object for `etcd`.
In the case of the stacked etcd topology, this is managed automatically.
- Replace the following variables in the config template with the appropriate values for your cluster:
- `LOAD_BALANCER_DNS`
- `LOAD_BALANCER_PORT`
- `ETCD_0_IP`
- `ETCD_1_IP`
- `ETCD_2_IP`
The following steps are similar to the stacked etcd setup:
1. Run `sudo kubeadm init --config kubeadm-config.yaml --upload-certs` on this node.
1. Write the output join commands that are returned to a text file for later use.
1. Apply the CNI plugin of your choice.
You must pick a network plugin that suits your use case and deploy it before you move on to the next step.
If you don't do this, you will not be able to launch your cluster properly.
### Steps for the rest of the control plane nodes
The steps are the same as for the stacked etcd setup:
- Make sure the first control plane node is fully initialized.
- Join each control plane node with the join command you saved to a text file. It's recommended
to join the control plane nodes one at a time.
- Don't forget that the decryption key from `--certificate-key` expires after two hours, by default.
## Common tasks after bootstrapping control plane
### Install workers
Worker nodes can be joined to the cluster with the command you stored previously
as the output from the `kubeadm init` command:
```sh
sudo kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866
```
## Manual certificate distribution {#manual-certs}
If you choose not to use `kubeadm init` with the `--upload-certs` flag, you must manually
copy the certificates from the primary control plane node to the
joining control plane nodes.
There are many ways to do this. The following example uses `ssh` and `scp`:
SSH is required if you want to control all nodes from a single machine.
1. Enable ssh-agent on your main device that has access to all other nodes in
the system:
```shell
eval $(ssh-agent)
```
1. Add your SSH identity to the session:
```shell
ssh-add ~/.ssh/path_to_private_key
```
1. SSH between nodes to check that the connection is working correctly.
- When you SSH to any node, add the `-A` flag. This flag allows the node that you
have logged into via SSH to access the SSH agent on your PC. Consider alternative
methods if you do not fully trust the security of your user session on the node.
```shell
ssh -A 10.0.0.7
```
- When using sudo on any node, make sure to preserve the environment so SSH
forwarding works:
```shell
sudo -E -s
```
1. After configuring SSH on all the nodes you should run the following script on the first
control plane node after running `kubeadm init`. This script will copy the certificates from
the first control plane node to the other control plane nodes:
In the following example, replace `CONTROL_PLANE_IPS` with the IP addresses of the
other control plane nodes.
```sh
USER=ubuntu # customizable
CONTROL_PLANE_IPS="10.0.0.7 10.0.0.8"
for host in ${CONTROL_PLANE_IPS}; do
scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
# Skip the next line if you are using external etcd
scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
done
```
Copy only the certificates in the above list. kubeadm will take care of generating the rest of the certificates
with the required SANs for the joining control-plane instances. If you copy all the certificates by mistake,
the creation of additional nodes could fail due to a lack of required SANs.
1. Then on each joining control plane node you have to run the following script before running `kubeadm join`.
This script will move the previously copied certificates from the home directory to `/etc/kubernetes/pki`:
```sh
USER=ubuntu # customizable
mkdir -p /etc/kubernetes/pki/etcd
mv /home/${USER}/ca.crt /etc/kubernetes/pki/
mv /home/${USER}/ca.key /etc/kubernetes/pki/
mv /home/${USER}/sa.pub /etc/kubernetes/pki/
mv /home/${USER}/sa.key /etc/kubernetes/pki/
mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
# Skip the next line if you are using external etcd
mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
``` | kubernetes setup | reviewers sig cluster lifecycle title Creating Highly Available Clusters with kubeadm content type task weight 60 overview This page explains two different approaches to setting up a highly available Kubernetes cluster using kubeadm With stacked control plane nodes This approach requires less infrastructure The etcd members and control plane nodes are co located With an external etcd cluster This approach requires more infrastructure The control plane nodes and etcd members are separated Before proceeding you should carefully consider which approach best meets the needs of your applications and environment Options for Highly Available topology docs setup production environment tools kubeadm ha topology outlines the advantages and disadvantages of each If you encounter issues with setting up the HA cluster please report these in the kubeadm issue tracker https github com kubernetes kubeadm issues new See also the upgrade documentation docs tasks administer cluster kubeadm kubeadm upgrade This page does not address running your cluster on a cloud provider In a cloud environment neither approach documented here works with Service objects of type LoadBalancer or with dynamic PersistentVolumes The prerequisites depend on which topology you have selected for your cluster s control plane note to reviewers these prerequisites should match the start of the external etc tab You need Three or more machines that meet kubeadm s minimum requirements docs setup production environment tools kubeadm install kubeadm before you begin for the control plane nodes Having an odd number of control plane nodes can help with leader selection in the case of machine or zone failure including a already set up and working Three or more machines that meet kubeadm s minimum requirements docs setup production environment tools kubeadm install kubeadm before you begin for the workers including a container runtime already set up and working Full network connectivity between all machines in the cluster public or private network Superuser privileges on all machines using sudo You can use a different tool this guide uses sudo in the examples SSH access from one device to all nodes in the system kubeadm and kubelet already installed on all machines See Stacked etcd topology docs setup production environment tools kubeadm ha topology stacked etcd topology for context note to reviewers these prerequisites should match the start of the stacked etc tab You need Three or more machines that meet kubeadm s minimum requirements docs setup production environment tools kubeadm install kubeadm before you begin for the control plane nodes Having an odd number of control plane nodes can help with leader selection in the case of machine or zone failure including a already set up and working Three or more machines that meet kubeadm s minimum requirements docs setup production environment tools kubeadm install kubeadm before you begin for the workers including a container runtime already set up and working Full network connectivity between all machines in the cluster public or private network Superuser privileges on all machines using sudo You can use a different tool this guide uses sudo in the examples SSH access from one device to all nodes in the system kubeadm and kubelet already installed on all machines end of shared prerequisites And you also need Three or more additional machines that will become etcd cluster members Having an odd number of members in the etcd cluster is a requirement for achieving optimal 
---
reviewers:
- jlowdermilk
- justinsb
- quinton-hoole
title: Running in multiple zones
weight: 20
content_type: concept
---
<!-- overview -->
This page describes running Kubernetes across multiple zones.
<!-- body -->
## Background
Kubernetes is designed so that a single Kubernetes cluster can run
across multiple failure zones, typically where these zones fit within
a logical grouping called a _region_. Major cloud providers define a region
as a set of failure zones (also called _availability zones_) that provide
a consistent set of features: within a region, each zone offers the same
APIs and services.
Typical cloud architectures aim to minimize the chance that a failure in
one zone also impairs services in another zone.
## Control plane behavior
All [control plane components](/docs/concepts/architecture/#control-plane-components)
support running as a pool of interchangeable resources, replicated per
component.
When you deploy a cluster control plane, place replicas of
control plane components across multiple failure zones. If availability is
an important concern, select at least three failure zones and replicate
each individual control plane component (API server, scheduler, etcd,
cluster controller manager) across at least three failure zones.
If you are running a cloud controller manager then you should
also replicate this across all the failure zones you selected.
Kubernetes does not provide cross-zone resilience for the API server
endpoints. You can use various techniques to improve availability for
the cluster API server, including DNS round-robin, SRV records, or
a third-party load balancing solution with health checking.
## Node behavior
Kubernetes automatically spreads the Pods for
workload resources (such as Deployment or StatefulSet)
across different nodes in a cluster. This spreading helps
reduce the impact of failures.
When nodes start up, the kubelet on each node automatically adds
labels to the Node object
that represents that specific kubelet in the Kubernetes API.
These labels can include
[zone information](/docs/reference/labels-annotations-taints/#topologykubernetesiozone).
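For example, you can list the zone label that has been set on each node:
```bash
kubectl get nodes -L topology.kubernetes.io/zone
```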
If your cluster spans multiple zones or regions, you can use node labels
in conjunction with
[Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
to control how Pods are spread across your cluster among fault domains:
regions, zones, and even specific nodes.
These hints enable the scheduler to place
Pods for better expected availability, reducing the risk that a correlated
failure affects your whole workload.
For example, you can set a constraint to make sure that the
3 replicas of a StatefulSet are all running in different zones to each
other, whenever that is feasible. You can define this declaratively
without explicitly defining which availability zones are in use for
each workload.
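A minimal sketch of such a constraint is shown below; the names and the container image are
placeholders.
```bash
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo
spec:
  serviceName: demo
  replicas: 3
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        # "ScheduleAnyway" keeps the spread a preference rather than a hard requirement
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app: demo
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9
EOF
```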
### Distributing nodes across zones
Kubernetes' core does not create nodes for you; you need to do that yourself,
or use a tool such as the [Cluster API](https://cluster-api.sigs.k8s.io/) to
manage nodes on your behalf.
Using tools such as the Cluster API you can define sets of machines to run as
worker nodes for your cluster across multiple failure domains, and rules to
automatically heal the cluster in case of whole-zone service disruption.
## Manual zone assignment for Pods
You can apply [node selector constraints](/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector)
to Pods that you create, as well as to Pod templates in workload resources
such as Deployment, StatefulSet, or Job.
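For example, a Pod pinned to a single zone with a node selector (the zone name is a placeholder):
```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: zone-pinned
spec:
  nodeSelector:
    topology.kubernetes.io/zone: europe-west1-b
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
EOF
```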
## Storage access for zones
When persistent volumes are created, Kubernetes automatically adds zone labels
to any PersistentVolumes that are linked to a specific zone.
The scheduler then ensures,
through its `NoVolumeZoneConflict` predicate, that pods which claim a given PersistentVolume
are only placed into the same zone as that volume.
Please note that the method of adding zone labels can depend on your
cloud provider and the storage provisioner you’re using. Always refer to the specific
documentation for your environment to ensure correct configuration.
You can specify a StorageClass
for PersistentVolumeClaims that specifies the failure domains (zones) that the
storage in that class may use.
To learn about configuring a StorageClass that is aware of failure domains or zones,
see [Allowed topologies](/docs/concepts/storage/storage-classes/#allowed-topologies).
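A minimal sketch of such a StorageClass (the provisioner and zone names are placeholders):
```bash
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zoned-standard
provisioner: example.com/hypothetical-provisioner
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: topology.kubernetes.io/zone
    values:
    - zone-a
    - zone-b
EOF
```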
## Networking
By itself, Kubernetes does not include zone-aware networking. You can use a
[network plugin](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)
to configure cluster networking, and that network solution might have zone-specific
elements. For example, if your cloud provider supports Services with
`type=LoadBalancer`, the load balancer might only send traffic to Pods running in the
same zone as the load balancer element processing a given connection.
Check your cloud provider's documentation for details.
For custom or on-premises deployments, similar considerations apply.
Service and
Ingress behavior, including handling
of different failure zones, does vary depending on exactly how your cluster is set up.
## Fault recovery
When you set up your cluster, you might also need to consider whether and how
your setup can restore service if all the failure zones in a region go
off-line at the same time. For example, do you rely on there being at least
one node able to run Pods in a zone?
Make sure that any cluster-critical repair work does not rely
on there being at least one healthy node in your cluster. For example: if all nodes
are unhealthy, you might need to run a repair Job with a special
toleration so that the repair
can complete enough to bring at least one node into service.
Kubernetes doesn't come with an answer for this challenge; however, it's
something to consider.
## What's next
To learn how the scheduler places Pods in a cluster, honoring the configured constraints,
visit [Scheduling and Eviction](/docs/concepts/scheduling-eviction/). | kubernetes setup | reviewers jlowdermilk justinsb quinton hoole title Running in multiple zones weight 20 content type concept overview This page describes running Kubernetes across multiple zones body Background Kubernetes is designed so that a single Kubernetes cluster can run across multiple failure zones typically where these zones fit within a logical grouping called a region Major cloud providers define a region as a set of failure zones also called availability zones that provide a consistent set of features within a region each zone offers the same APIs and services Typical cloud architectures aim to minimize the chance that a failure in one zone also impairs services in another zone Control plane behavior All control plane components docs concepts architecture control plane components support running as a pool of interchangeable resources replicated per component When you deploy a cluster control plane place replicas of control plane components across multiple failure zones If availability is an important concern select at least three failure zones and replicate each individual control plane component API server scheduler etcd cluster controller manager across at least three failure zones If you are running a cloud controller manager then you should also replicate this across all the failure zones you selected Kubernetes does not provide cross zone resilience for the API server endpoints You can use various techniques to improve availability for the cluster API server including DNS round robin SRV records or a third party load balancing solution with health checking Node behavior Kubernetes automatically spreads the Pods for workload resources such as or across different nodes in a cluster This spreading helps reduce the impact of failures When nodes start up the kubelet on each node automatically adds to the Node object that represents that specific kubelet in the Kubernetes API These labels can include zone information docs reference labels annotations taints topologykubernetesiozone If your cluster spans multiple zones or regions you can use node labels in conjunction with Pod topology spread constraints docs concepts scheduling eviction topology spread constraints to control how Pods are spread across your cluster among fault domains regions zones and even specific nodes These hints enable the to place Pods for better expected availability reducing the risk that a correlated failure affects your whole workload For example you can set a constraint to make sure that the 3 replicas of a StatefulSet are all running in different zones to each other whenever that is feasible You can define this declaratively without explicitly defining which availability zones are in use for each workload Distributing nodes across zones Kubernetes core does not create nodes for you you need to do that yourself or use a tool such as the Cluster API https cluster api sigs k8s io to manage nodes on your behalf Using tools such as the Cluster API you can define sets of machines to run as worker nodes for your cluster across multiple failure domains and rules to automatically heal the cluster in case of whole zone service disruption Manual zone assignment for Pods You can apply node selector constraints docs concepts scheduling eviction assign pod node nodeselector to Pods that you create as well as to Pod templates in workload resources such as Deployment StatefulSet or Job Storage access for zones When persistent volumes are 
---
reviewers:
- davidopp
- lavalamp
title: Considerations for large clusters
weight: 10
---
A cluster is a set of nodes (physical
or virtual machines) running Kubernetes agents, managed by the
control plane.
Kubernetes supports clusters with up to 5,000 nodes. More specifically,
Kubernetes is designed to accommodate configurations that meet *all* of the following criteria:
* No more than 110 pods per node
* No more than 5,000 nodes
* No more than 150,000 total pods
* No more than 300,000 total containers
You can scale your cluster by adding or removing nodes. The way you do this depends
on how your cluster is deployed.
## Cloud provider resource quotas {#quota-issues}
To avoid running into cloud provider quota issues, when creating a cluster with many nodes,
consider:
* Requesting a quota increase for cloud resources such as:
  * Compute instances
* CPUs
* Storage volumes
* In-use IP addresses
* Packet filtering rule sets
* Number of load balancers
* Network subnets
* Log streams
* Gating the cluster scaling actions to bring up new nodes in batches, with a pause
between batches, because some cloud providers rate limit the creation of new instances.
## Control plane components
For a large cluster, you need a control plane with sufficient compute and other
resources.
Typically you would run one or two control plane instances per failure zone,
scaling those instances vertically first and then scaling horizontally once vertical
scaling reaches the point of diminishing returns.
You should run at least one instance per failure zone to provide fault-tolerance. Kubernetes
nodes do not automatically steer traffic towards control-plane endpoints that are in the
same failure zone; however, your cloud provider might have its own mechanisms to do this.
For example, using a managed load balancer, you configure the load balancer to send traffic
that originates from the kubelet and Pods in failure zone _A_, and direct that traffic only
to the control plane hosts that are also in zone _A_. If a single control-plane host or
endpoint in failure zone _A_ goes offline, that means that all the control-plane traffic for
nodes in zone _A_ is now being sent between zones. Running multiple control plane hosts in
each zone makes that outcome less likely.
### etcd storage
To improve performance of large clusters, you can store Event objects in a separate
dedicated etcd instance.
When creating a cluster, you can (using custom tooling):
* start and configure an additional etcd instance
* configure the API server to use it for storing events
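One way to wire this up is the kube-apiserver `--etcd-servers-overrides` flag; the sketch
below uses placeholder endpoint URLs, which are assumptions rather than recommended values:

```
# Illustrative kube-apiserver flags: keep most objects in the main etcd cluster,
# but store Event objects in a separate, dedicated etcd instance.
kube-apiserver \
  --etcd-servers=https://etcd-main.example.internal:2379 \
  --etcd-servers-overrides=/events#https://etcd-events.example.internal:2379
```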
See [Operating etcd clusters for Kubernetes](/docs/tasks/administer-cluster/configure-upgrade-etcd/) and
[Set up a High Availability etcd cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/)
for details on configuring and managing etcd for a large cluster.
## Addon resources
Kubernetes [resource limits](/docs/concepts/configuration/manage-resources-containers/)
help to minimize the impact of memory leaks and other ways that pods and containers can
affect other components. These resource limits apply to addon
resources just as they apply to application workloads.
For example, you can set CPU and memory limits for a logging component:
```yaml
...
containers:
- name: fluentd-cloud-logging
image: fluent/fluentd-kubernetes-daemonset:v1
resources:
limits:
cpu: 100m
memory: 200Mi
```
Addons' default limits are typically based on data collected from experience running
each addon on small or medium Kubernetes clusters. When running on large
clusters, addons often consume more of some resources than their default limits.
If a large cluster is deployed without adjusting these values, the addon(s)
may continuously get killed because they keep hitting the memory limit.
Alternatively, the addon may run but with poor performance due to CPU time
slice restrictions.
To avoid running into cluster addon resource issues, when creating a cluster with
many nodes, consider the following:
* Some addons scale vertically - there is one replica of the addon for the cluster
or serving a whole failure zone. For these addons, increase requests and limits
as you scale out your cluster.
* Many addons scale horizontally - you add capacity by running more pods - but with
a very large cluster you may also need to raise CPU or memory limits slightly.
The [Vertical Pod Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#readme) can run in _recommender_ mode to provide suggested
  figures for requests and limits (see the example after this list).
* Some addons run as one copy per node, controlled by a DaemonSet: for example, a node-level log aggregator. Similar to
the case with horizontally-scaled addons, you may also need to raise CPU or memory
limits slightly.
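As referenced above, the Vertical Pod Autoscaler can run without applying any changes.
A minimal sketch of recommender-only mode is shown below; it assumes the VPA custom
resources and controllers are installed, and the DaemonSet name simply reuses the
earlier fluentd example:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: fluentd-recommender      # example name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: DaemonSet              # the addon workload to observe
    name: fluentd-cloud-logging  # example addon from the snippet above
  updatePolicy:
    updateMode: "Off"            # only produce recommendations; never evict or update Pods
```

You can then read the suggested values from the object's `status` and adjust the addon's
requests and limits yourself.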
## What's next
* `VerticalPodAutoscaler` is a custom resource that you can deploy into your cluster
to help you manage resource requests and limits for pods.
Learn more about [Vertical Pod Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#readme)
and how you can use it to scale cluster
components, including cluster-critical addons.
* Read about [cluster autoscaling](/docs/concepts/cluster-administration/cluster-autoscaling/)
* The [addon resizer](https://github.com/kubernetes/autoscaler/tree/master/addon-resizer#readme)
  helps you in resizing the addons automatically as your cluster's scale changes.
---
title: PKI certificates and requirements
reviewers:
- sig-cluster-lifecycle
content_type: concept
weight: 50
---
<!-- overview -->
Kubernetes requires PKI certificates for authentication over TLS.
If you install Kubernetes with [kubeadm](/docs/reference/setup-tools/kubeadm/), the certificates
that your cluster requires are automatically generated.
You can also generate your own certificates -- for example, to keep your private keys more secure
by not storing them on the API server.
This page explains the certificates that your cluster requires.
<!-- body -->
## How certificates are used by your cluster
Kubernetes requires PKI for the following operations:
### Server certificates
* Server certificate for the API server endpoint
* Server certificate for the etcd server
* [Server certificates](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#client-and-serving-certificates)
  for each kubelet (every node runs a kubelet)
* Optional server certificate for the [front-proxy](/docs/tasks/extend-kubernetes/configure-aggregation-layer/)
### Client certificates
* Client certificates for each kubelet, used to authenticate to the API server as a client of
the Kubernetes API
* Client certificate for each API server, used to authenticate to etcd
* Client certificate for the controller manager to securely communicate with the API server
* Client certificate for the scheduler to securely communicate with the API server
* Client certificates, one for each node, for kube-proxy to authenticate to the API server
* Optional client certificates for administrators of the cluster to authenticate to the API server
* Optional client certificate for the [front-proxy](/docs/tasks/extend-kubernetes/configure-aggregation-layer/)
### Kubelet's server and client certificates
To establish a secure connection and authenticate itself to the kubelet, the API Server
requires a client certificate and key pair.
In this scenario, there are two approaches for certificate usage:
* Shared Certificates: The kube-apiserver can utilize the same certificate and key pair it uses
to authenticate its clients. This means that the existing certificates, such as `apiserver.crt`
and `apiserver.key`, can be used for communicating with the kubelet servers.
* Separate Certificates: Alternatively, the kube-apiserver can generate a new client certificate
and key pair to authenticate its communication with the kubelet servers. In this case,
a distinct certificate named `kubelet-client.crt` and its corresponding private key,
`kubelet-client.key` are created.
`front-proxy` certificates are required only if you run kube-proxy to support
[an extension API server](/docs/tasks/extend-kubernetes/setup-extension-api-server/).
etcd also implements mutual TLS to authenticate clients and peers.
## Where certificates are stored
If you install Kubernetes with kubeadm, most certificates are stored in `/etc/kubernetes/pki`.
All paths in this documentation are relative to that directory, with the exception of user account
certificates which kubeadm places in `/etc/kubernetes`.
## Configure certificates manually
If you don't want kubeadm to generate the required certificates, you can create them using a
single root CA or by providing all certificates. See [Certificates](/docs/tasks/administer-cluster/certificates/)
for details on creating your own certificate authority. See
[Certificate Management with kubeadm](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/)
for more on managing certificates.
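If your cluster was created with kubeadm, one quick way to review the certificates that
already exist before managing anything by hand is, for example:

```
# Lists each kubeadm-managed certificate with its expiration date and signing CA
kubeadm certs check-expiration
```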
### Single root CA
You can create a single root CA, controlled by an administrator. This root CA can then create
multiple intermediate CAs, and delegate all further creation to Kubernetes itself.
Required CAs:
| Path | Default CN | Description |
|------------------------|---------------------------|----------------------------------|
| ca.crt,key | kubernetes-ca | Kubernetes general CA |
| etcd/ca.crt,key | etcd-ca | For all etcd-related functions |
| front-proxy-ca.crt,key | kubernetes-front-proxy-ca | For the [front-end proxy](/docs/tasks/extend-kubernetes/configure-aggregation-layer/) |
On top of the above CAs, it is also necessary to get a public/private key pair for service account
management, `sa.key` and `sa.pub`.
The following example illustrates the CA key and certificate files shown in the previous table:
```
/etc/kubernetes/pki/ca.crt
/etc/kubernetes/pki/ca.key
/etc/kubernetes/pki/etcd/ca.crt
/etc/kubernetes/pki/etcd/ca.key
/etc/kubernetes/pki/front-proxy-ca.crt
/etc/kubernetes/pki/front-proxy-ca.key
```
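If you are creating these files yourself, a minimal sketch using OpenSSL is shown below;
the key size, validity period, and subject are illustrative assumptions, not recommendations:

```
# Example CA private key and self-signed CA certificate (kubernetes-ca)
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=kubernetes-ca" -days 3650 -out ca.crt

# Example service account signing key pair (sa.key / sa.pub)
openssl genrsa -out sa.key 2048
openssl rsa -in sa.key -pubout -out sa.pub
```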
### All certificates
If you don't wish to copy the CA private keys to your cluster, you can generate all certificates yourself.
Required certificates:
| Default CN | Parent CA | O (in Subject) | kind | hosts (SAN) |
|-------------------------------|---------------------------|----------------|------------------|-----------------------------------------------------|
| kube-etcd | etcd-ca | | server, client | `<hostname>`, `<Host_IP>`, `localhost`, `127.0.0.1` |
| kube-etcd-peer | etcd-ca | | server, client | `<hostname>`, `<Host_IP>`, `localhost`, `127.0.0.1` |
| kube-etcd-healthcheck-client | etcd-ca | | client | |
| kube-apiserver-etcd-client | etcd-ca | | client | |
| kube-apiserver | kubernetes-ca | | server | `<hostname>`, `<Host_IP>`, `<advertise_IP>`[^1] |
| kube-apiserver-kubelet-client | kubernetes-ca | system:masters | client | |
| front-proxy-client | kubernetes-front-proxy-ca | | client | |
Instead of using the super-user group `system:masters` for `kube-apiserver-kubelet-client`,
a less privileged group can be used. kubeadm uses the `kubeadm:cluster-admins` group for
that purpose.
[^1]: any other IP or DNS name you contact your cluster on (as used by [kubeadm](/docs/reference/setup-tools/kubeadm/):
the load balancer stable IP and/or DNS name, `kubernetes`, `kubernetes.default`, `kubernetes.default.svc`,
`kubernetes.default.svc.cluster`, `kubernetes.default.svc.cluster.local`)
where `kind` maps to one or more of the x509 key usages, which are also documented in the
`.spec.usages` of a [CertificateSigningRequest](/docs/reference/kubernetes-api/authentication-resources/certificate-signing-request-v1#CertificateSigningRequest)
type:
| kind | Key usage |
|--------|---------------------------------------------------------------------------------|
| server | digital signature, key encipherment, server auth |
| client | digital signature, key encipherment, client auth |
Hosts/SAN listed above are the recommended ones for getting a working cluster; if required by a
specific setup, it is possible to add additional SANs on all the server certificates.
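To confirm which SANs actually ended up in a generated certificate, you can, for example,
inspect it with OpenSSL (the path shown assumes a kubeadm-style layout):

```
# Print the Subject Alternative Name extension of the API server's serving certificate
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 "Subject Alternative Name"
```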
For kubeadm users only:
* The scenario where you are copying CA certificates (without their private keys) to your
  cluster is referred to as an external CA in the kubeadm documentation.
* If you are comparing the above list with a kubeadm generated PKI, please be aware that
`kube-etcd`, `kube-etcd-peer` and `kube-etcd-healthcheck-client` certificates are not generated
in case of external etcd.
### Certificate paths
Certificates should be placed in a recommended path (as used by [kubeadm](/docs/reference/setup-tools/kubeadm/)).
Paths should be specified using the given argument regardless of location.
| Default CN | recommended key path | recommended cert path | command | key argument | cert argument |
| --------- | ------------------ | ------------------- | ------- | ----------- | ------------ |
| etcd-ca | etcd/ca.key | etcd/ca.crt | kube-apiserver | | --etcd-cafile |
| kube-apiserver-etcd-client | apiserver-etcd-client.key | apiserver-etcd-client.crt | kube-apiserver | --etcd-keyfile | --etcd-certfile |
| kubernetes-ca | ca.key | ca.crt | kube-apiserver | | --client-ca-file |
| kubernetes-ca | ca.key | ca.crt | kube-controller-manager | --cluster-signing-key-file | --client-ca-file,--root-ca-file,--cluster-signing-cert-file |
| kube-apiserver | apiserver.key | apiserver.crt| kube-apiserver | --tls-private-key-file | --tls-cert-file |
| kube-apiserver-kubelet-client | apiserver-kubelet-client.key | apiserver-kubelet-client.crt | kube-apiserver | --kubelet-client-key | --kubelet-client-certificate |
| front-proxy-ca | front-proxy-ca.key | front-proxy-ca.crt | kube-apiserver | | --requestheader-client-ca-file |
| front-proxy-ca | front-proxy-ca.key | front-proxy-ca.crt | kube-controller-manager | | --requestheader-client-ca-file |
| front-proxy-client | front-proxy-client.key | front-proxy-client.crt | kube-apiserver | --proxy-client-key-file | --proxy-client-cert-file |
| etcd-ca | etcd/ca.key | etcd/ca.crt | etcd | | --trusted-ca-file,--peer-trusted-ca-file |
| kube-etcd | etcd/server.key | etcd/server.crt | etcd | --key-file | --cert-file |
| kube-etcd-peer | etcd/peer.key | etcd/peer.crt | etcd | --peer-key-file | --peer-cert-file |
| etcd-ca| | etcd/ca.crt | etcdctl | | --cacert |
| kube-etcd-healthcheck-client | etcd/healthcheck-client.key | etcd/healthcheck-client.crt | etcdctl | --key | --cert |
Same considerations apply for the service account key pair:
| private key path | public key path | command | argument |
|-------------------|------------------|-------------------------|--------------------------------------|
| sa.key | | kube-controller-manager | --service-account-private-key-file |
| | sa.pub | kube-apiserver | --service-account-key-file |
The following example illustrates the file paths [from the previous tables](#certificate-paths)
you need to provide if you are generating all of your own keys and certificates:
```
/etc/kubernetes/pki/etcd/ca.key
/etc/kubernetes/pki/etcd/ca.crt
/etc/kubernetes/pki/apiserver-etcd-client.key
/etc/kubernetes/pki/apiserver-etcd-client.crt
/etc/kubernetes/pki/ca.key
/etc/kubernetes/pki/ca.crt
/etc/kubernetes/pki/apiserver.key
/etc/kubernetes/pki/apiserver.crt
/etc/kubernetes/pki/apiserver-kubelet-client.key
/etc/kubernetes/pki/apiserver-kubelet-client.crt
/etc/kubernetes/pki/front-proxy-ca.key
/etc/kubernetes/pki/front-proxy-ca.crt
/etc/kubernetes/pki/front-proxy-client.key
/etc/kubernetes/pki/front-proxy-client.crt
/etc/kubernetes/pki/etcd/server.key
/etc/kubernetes/pki/etcd/server.crt
/etc/kubernetes/pki/etcd/peer.key
/etc/kubernetes/pki/etcd/peer.crt
/etc/kubernetes/pki/etcd/healthcheck-client.key
/etc/kubernetes/pki/etcd/healthcheck-client.crt
/etc/kubernetes/pki/sa.key
/etc/kubernetes/pki/sa.pub
```
## Configure certificates for user accounts
You must manually configure these administrator accounts and service accounts:
| Filename | Credential name | Default CN | O (in Subject) |
|-------------------------|----------------------------|-------------------------------------|------------------------|
| admin.conf | default-admin | kubernetes-admin | `<admin-group>` |
| super-admin.conf | default-super-admin | kubernetes-super-admin | system:masters |
| kubelet.conf | default-auth | system:node:`<nodeName>` (see note) | system:nodes |
| controller-manager.conf | default-controller-manager | system:kube-controller-manager | |
| scheduler.conf | default-scheduler | system:kube-scheduler | |
The value of `<nodeName>` for `kubelet.conf` **must** match precisely the value of the node name
provided by the kubelet as it registers with the apiserver. For further details, read the
[Node Authorization](/docs/reference/access-authn-authz/node/).
In the above example `<admin-group>` is implementation specific. Some tools sign the
certificate in the default `admin.conf` to be part of the `system:masters` group.
`system:masters` is a break-glass, super user group that can bypass the authorization
layer of Kubernetes, such as RBAC. Also, some tools do not generate a separate
`super-admin.conf` with a certificate bound to this super user group.
kubeadm generates two separate administrator certificates in kubeconfig files.
One is in `admin.conf` and has `Subject: O = kubeadm:cluster-admins, CN = kubernetes-admin`.
`kubeadm:cluster-admins` is a custom group bound to the `cluster-admin` ClusterRole.
This file is generated on all kubeadm managed control plane machines.
Another is in `super-admin.conf` that has `Subject: O = system:masters, CN = kubernetes-super-admin`.
This file is generated only on the node where `kubeadm init` was called.
1. For each configuration, generate an x509 certificate/key pair with the
given Common Name (CN) and Organization (O).
1. Run `kubectl` as follows for each configuration:
```
KUBECONFIG=<filename> kubectl config set-cluster default-cluster --server=https://<host ip>:6443 --certificate-authority <path-to-kubernetes-ca> --embed-certs
KUBECONFIG=<filename> kubectl config set-credentials <credential-name> --client-key <path-to-key>.pem --client-certificate <path-to-cert>.pem --embed-certs
KUBECONFIG=<filename> kubectl config set-context default-system --cluster default-cluster --user <credential-name>
KUBECONFIG=<filename> kubectl config use-context default-system
```
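As a quick sanity check of a generated file (the path shown assumes the kubeadm-style
layout listed later on this page), you can point kubectl at it directly:

```
KUBECONFIG=/etc/kubernetes/admin.conf kubectl get nodes
```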
These files are used as follows:
| Filename | Command | Comment |
|-------------------------|-------------------------|-----------------------------------------------------------------------|
| admin.conf | kubectl | Configures administrator user for the cluster |
| super-admin.conf | kubectl | Configures super administrator user for the cluster |
| kubelet.conf | kubelet | One required for each node in the cluster. |
| controller-manager.conf | kube-controller-manager | Must be added to manifest in `manifests/kube-controller-manager.yaml` |
| scheduler.conf | kube-scheduler | Must be added to manifest in `manifests/kube-scheduler.yaml` |
The following files illustrate full paths to the files listed in the previous table:
```
/etc/kubernetes/admin.conf
/etc/kubernetes/super-admin.conf
/etc/kubernetes/kubelet.conf
/etc/kubernetes/controller-manager.conf
/etc/kubernetes/scheduler.conf
```
Unicode support
===============
Last update: 2005-01-17, version 1.4
Note: The original version of this document, which was maintained at
lanana.org as part of the Linux Assigned Names And Numbers Authority
(LANANA) project, is no longer existent. So, this version in the
mainline Linux kernel is now the maintained main document.
Introduction
------------
The Linux kernel code has been rewritten to use Unicode to map
characters to fonts. By downloading a single Unicode-to-font table,
both the eight-bit character sets and UTF-8 mode are changed to use
the font as indicated.
This changes the semantics of the eight-bit character tables subtly.
The four character tables are now:
=============== =============================== ================
Map symbol Map name Escape code (G0)
=============== =============================== ================
LAT1_MAP Latin-1 (ISO 8859-1) ESC ( B
GRAF_MAP DEC VT100 pseudographics ESC ( 0
IBMPC_MAP IBM code page 437 ESC ( U
USER_MAP User defined ESC ( K
=============== =============================== ================
In particular, ESC ( U is no longer "straight to font", since the font
might be completely different than the IBM character set. This
permits for example the use of block graphics even with a Latin-1 font
loaded.
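For illustration only (the byte value assumes a CP437-style font layout, and this is
only meaningful on a Linux virtual console), switching the G0 mapping from a shell
might look like::

    printf '\033(U'    # ESC ( U : select the IBM code page 437 mapping
    printf '\304\n'    # byte 0xC4 renders as a horizontal line glyph in CP437
    printf '\033(B'    # ESC ( B : switch back to the Latin-1 mapping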
Note that although these codes are similar to ISO 2022, neither the
codes nor their uses match ISO 2022; Linux has two 8-bit codes (G0 and
G1), whereas ISO 2022 has four 7-bit codes (G0-G3).
In accordance with the Unicode standard/ISO 10646 the range U+F000 to
U+F8FF has been reserved for OS-wide allocation (the Unicode Standard
refers to this as a "Corporate Zone", since this is inaccurate for
Linux we call it the "Linux Zone"). U+F000 was picked as the starting
point since it lets the direct-mapping area start on a large power of
two (in case 1024- or 2048-character fonts ever become necessary).
This leaves U+E000 to U+EFFF as End User Zone.
[v1.2]: The Unicodes range from U+F000 and up to U+F7FF have been
hard-coded to map directly to the loaded font, bypassing the
translation table. The user-defined map now defaults to U+F000 to
U+F0FF, emulating the previous behaviour. In practice, this range
might be shorter; for example, vgacon can only handle 256-character
(U+F000..U+F0FF) or 512-character (U+F000..U+F1FF) fonts.
Actual characters assigned in the Linux Zone
--------------------------------------------
In addition, the following characters not present in Unicode 1.1.4
have been defined; these are used by the DEC VT graphics map. [v1.2]
THIS USE IS OBSOLETE AND SHOULD NO LONGER BE USED; PLEASE SEE BELOW.
====== ======================================
U+F800 DEC VT GRAPHICS HORIZONTAL LINE SCAN 1
U+F801 DEC VT GRAPHICS HORIZONTAL LINE SCAN 3
U+F803 DEC VT GRAPHICS HORIZONTAL LINE SCAN 7
U+F804 DEC VT GRAPHICS HORIZONTAL LINE SCAN 9
====== ======================================
The DEC VT220 uses a 6x10 character matrix, and these characters form
a smooth progression in the DEC VT graphics character set. I have
omitted the scan 5 line, since it is also used as a block-graphics
character, and hence has been coded as U+2500 FORMS LIGHT HORIZONTAL.
[v1.3]: These characters have been officially added to Unicode 3.2.0;
they are added at U+23BA, U+23BB, U+23BC, U+23BD. Linux now uses the
new values.
[v1.2]: The following characters have been added to represent common
keyboard symbols that are unlikely to ever be added to Unicode proper
since they are horribly vendor-specific. This, of course, is an
excellent example of horrible design.
====== ======================================
U+F810 KEYBOARD SYMBOL FLYING FLAG
U+F811 KEYBOARD SYMBOL PULLDOWN MENU
U+F812 KEYBOARD SYMBOL OPEN APPLE
U+F813 KEYBOARD SYMBOL SOLID APPLE
====== ======================================
Klingon language support
------------------------
In 1996, Linux was the first operating system in the world to add
support for the artificial language Klingon, created by Marc Okrand
for the "Star Trek" television series. This encoding was later
adopted by the ConScript Unicode Registry and proposed (but ultimately
rejected) for inclusion in Unicode Plane 1. Thus, it remains as a
Linux/CSUR private assignment in the Linux Zone.
This encoding has been endorsed by the Klingon Language Institute.
For more information, contact them at:
http://www.kli.org/
Since the characters in the beginning of the Linux CZ have been more
of the dingbats/symbols/forms type and this is a language, I have
located it at the end, on a 16-cell boundary in keeping with standard
Unicode practice.
.. note::
This range is now officially managed by the ConScript Unicode
Registry. The normative reference is at:
https://www.evertype.com/standards/csur/klingon.html
Klingon has an alphabet of 26 characters, a positional numeric writing
system with 10 digits, and is written left-to-right, top-to-bottom.
Several glyph forms for the Klingon alphabet have been proposed.
However, since the set of symbols appear to be consistent throughout,
with only the actual shapes being different, in keeping with standard
Unicode practice these differences are considered font variants.
====== =======================================================
U+F8D0 KLINGON LETTER A
U+F8D1 KLINGON LETTER B
U+F8D2 KLINGON LETTER CH
U+F8D3 KLINGON LETTER D
U+F8D4 KLINGON LETTER E
U+F8D5 KLINGON LETTER GH
U+F8D6 KLINGON LETTER H
U+F8D7 KLINGON LETTER I
U+F8D8 KLINGON LETTER J
U+F8D9 KLINGON LETTER L
U+F8DA KLINGON LETTER M
U+F8DB KLINGON LETTER N
U+F8DC KLINGON LETTER NG
U+F8DD KLINGON LETTER O
U+F8DE KLINGON LETTER P
U+F8DF KLINGON LETTER Q
- Written <q> in standard Okrand Latin transliteration
U+F8E0 KLINGON LETTER QH
- Written <Q> in standard Okrand Latin transliteration
U+F8E1 KLINGON LETTER R
U+F8E2 KLINGON LETTER S
U+F8E3 KLINGON LETTER T
U+F8E4 KLINGON LETTER TLH
U+F8E5 KLINGON LETTER U
U+F8E6 KLINGON LETTER V
U+F8E7 KLINGON LETTER W
U+F8E8 KLINGON LETTER Y
U+F8E9 KLINGON LETTER GLOTTAL STOP
U+F8F0 KLINGON DIGIT ZERO
U+F8F1 KLINGON DIGIT ONE
U+F8F2 KLINGON DIGIT TWO
U+F8F3 KLINGON DIGIT THREE
U+F8F4 KLINGON DIGIT FOUR
U+F8F5 KLINGON DIGIT FIVE
U+F8F6 KLINGON DIGIT SIX
U+F8F7 KLINGON DIGIT SEVEN
U+F8F8 KLINGON DIGIT EIGHT
U+F8F9 KLINGON DIGIT NINE
U+F8FD KLINGON COMMA
U+F8FE KLINGON FULL STOP
U+F8FF KLINGON SYMBOL FOR EMPIRE
====== =======================================================
Other Fictional and Artificial Scripts
--------------------------------------
Since the assignment of the Klingon Linux Unicode block, a registry of
fictional and artificial scripts has been established by John Cowan
<[email protected]> and Michael Everson <[email protected]>.
The ConScript Unicode Registry is accessible at:
https://www.evertype.com/standards/csur/
The ranges used fall at the low end of the End User Zone and can hence
not be normatively assigned, but it is recommended that people who
wish to encode fictional scripts use these codes, in the interest of
interoperability. For Klingon, CSUR has adopted the Linux encoding.
The CSUR people are driving adding Tengwar and Cirth into Unicode
Plane 1; the addition of Klingon to Unicode Plane 1 has been rejected
and so the above encoding remains official.
=================================================
The Linux kernel user's and administrator's guide
=================================================
The following is a collection of user-oriented documents that have been
added to the kernel over time. There is, as yet, little overall order or
organization here — this material was not written to be a single, coherent
document! With luck things will improve quickly over time.
This initial section contains overall information, including the README
file describing the kernel as a whole, documentation on kernel parameters,
etc.
.. toctree::
:maxdepth: 1
README
kernel-parameters
devices
sysctl/index
abi
features
This section describes CPU vulnerabilities and their mitigations.
.. toctree::
:maxdepth: 1
hw-vuln/index
Here is a set of documents aimed at users who are trying to track down
problems and bugs in particular.
.. toctree::
:maxdepth: 1
reporting-issues
reporting-regressions
quickly-build-trimmed-linux
verify-bugs-and-bisect-regressions
bug-hunting
bug-bisect
tainted-kernels
ramoops
dynamic-debug-howto
init
kdump/index
perf/index
pstore-blk
This is the beginning of a section with information of interest to
application developers. Documents covering various aspects of the kernel
ABI will be found here.
.. toctree::
:maxdepth: 1
sysfs-rules
This is the beginning of a section with information of interest to
application developers and system integrators doing analysis of the
Linux kernel for safety critical applications. Documents supporting
analysis of kernel interactions with applications, and key kernel
subsystems expectations will be found here.
.. toctree::
:maxdepth: 1
workload-tracing
The rest of this manual consists of various unordered guides on how to
configure specific aspects of kernel behavior to your liking.
.. toctree::
:maxdepth: 1
acpi/index
aoe/index
auxdisplay/index
bcache
binderfs
binfmt-misc
blockdev/index
bootconfig
braille-console
btmrvl
cgroup-v1/index
cgroup-v2
cifs/index
clearing-warn-once
cpu-load
cputopology
dell_rbu
device-mapper/index
edid
efi-stub
ext4
filesystem-monitoring
nfs/index
gpio/index
highuid
hw_random
initrd
iostats
java
jfs
kernel-per-CPU-kthreads
laptops/index
lcd-panel-cgram
ldm
lockup-watchdogs
LSM/index
md
media/index
mm/index
module-signing
mono
namespaces/index
numastat
parport
perf-security
pm/index
pnp
rapidio
RAS/index
rtc
serial-console
svga
syscall-user-dispatch
sysrq
thermal/index
thunderbolt
ufs
unicode
vga-softcursor
video-output
xfs
.. only:: subproject and html
Indices
=======
* :ref:`genindex`
Rules on how to access information in sysfs
===========================================
The kernel-exported sysfs exports internal kernel implementation details
and depends on internal kernel structures and layout. It is agreed upon
by the kernel developers that the Linux kernel does not provide a stable
internal API. Therefore, there are aspects of the sysfs interface that
may not be stable across kernel releases.
To minimize the risk of breaking users of sysfs, which are in most cases
low-level userspace applications, with a new kernel release, the users
of sysfs must follow some rules to use an as-abstract-as-possible way to
access this filesystem. The current udev and HAL programs already
implement this and users are encouraged to plug, if possible, into the
abstractions these programs provide instead of accessing sysfs directly.
But if you really do want or need to access sysfs directly, please follow
the following rules and then your programs should work with future
versions of the sysfs interface.
- Do not use libsysfs
It makes assumptions about sysfs which are not true. Its API does not
offer any abstraction; it exposes all the kernel driver-core
implementation details in its own API. Therefore it is not better than
reading directories and opening the files yourself.
Also, it is not actively maintained, in the sense of reflecting the
current kernel development. The goal of providing a stable interface
to sysfs has failed; it causes more problems than it solves. It
violates many of the rules in this document.
- sysfs is always at ``/sys``
Parsing ``/proc/mounts`` is a waste of time. Other mount points are a
system configuration bug you should not try to solve. For test cases,
possibly support a ``SYSFS_PATH`` environment variable to overwrite the
application's behavior, but never try to search for sysfs. Never try
to mount it, if you are not an early boot script.
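
  As an illustration only (the ``SYSFS_PATH`` variable and the ``eth0``
  device below are assumptions for the sketch, not part of any standard
  interface), a test script might do::

      sysfs="${SYSFS_PATH:-/sys}"      # honour an override for test cases, default to /sys
      cat "$sysfs/class/net/eth0/address"
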
- devices are only "devices"
There is no such thing as class, bus, or physical devices,
interfaces, and such that you can rely on in userspace. Everything is
just simply a "device". Class, bus, physical, ... types are just
kernel implementation details which should not be expected by
applications that look for devices in sysfs.
The properties of a device are:
- devpath (``/devices/pci0000:00/0000:00:1d.1/usb2/2-2/2-2:1.0``)
- identical to the DEVPATH value in the event sent from the kernel
at device creation and removal
- the unique key to the device at that point in time
- the kernel's path to the device directory without the leading
``/sys``, and always starting with a slash
- all elements of a devpath must be real directories. Symlinks
pointing to /sys/devices must always be resolved to their real
target and the target path must be used to access the device.
That way the devpath to the device matches the devpath of the
kernel used at event time.
- using or exposing symlink values as elements in a devpath string
is a bug in the application
- kernel name (``sda``, ``tty``, ``0000:00:1f.2``, ...)
- a directory name, identical to the last element of the devpath
- applications need to handle spaces and characters like ``!`` in
the name
- subsystem (``block``, ``tty``, ``pci``, ...)
- simple string, never a path or a link
- retrieved by reading the "subsystem"-link and using only the
last element of the target path
- driver (``tg3``, ``ata_piix``, ``uhci_hcd``)
- a simple string, which may contain spaces, never a path or a
link
- it is retrieved by reading the "driver"-link and using only the
last element of the target path
- devices which do not have "driver"-link just do not have a
driver; copying the driver value in a child device context is a
bug in the application
- attributes
- the files in the device directory or files below subdirectories
of the same device directory
- accessing attributes reached by a symlink pointing to another device,
like the "device"-link, is a bug in the application
Everything else is just a kernel driver-core implementation detail
that should not be assumed to be stable across kernel releases.
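
A rough sketch of deriving the properties listed above with plain shell
tools (the ``sda`` disk is only an example device)::

    dev=$(readlink -f /sys/class/block/sda)   # resolve symlinks to the real /sys/devices path
    devpath=${dev#/sys}                        # devpath: strip the leading /sys
    name=$(basename "$devpath")                # kernel name: last element of the devpath
    subsystem=$(basename "$(readlink "$dev/subsystem")")
    # a missing "driver" link simply means the device has no driver
    driver=$(basename "$(readlink "$dev/driver" 2>/dev/null)")
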
- Properties of parent devices never belong into a child device.
Always look at the parent devices themselves for determining device
context properties. If the device ``eth0`` or ``sda`` does not have a
"driver"-link, then this device does not have a driver. Its value is empty.
Never copy any property of the parent-device into a child-device. Parent
device properties may change dynamically without any notice to the
child device.
- Hierarchy in a single device tree
There is only one valid place in sysfs where hierarchy can be examined
and this is below ``/sys/devices``.
It is planned that all device directories will end up in the tree
below this directory.
- Classification by subsystem
There are currently three places for classification of devices:
``/sys/block``, ``/sys/class`` and ``/sys/bus``. It is planned that these will
not contain any device directories themselves, but only flat lists of
symlinks pointing to the unified ``/sys/devices`` tree.
All three places have completely different rules on how to access
device information. It is planned to merge all three
classification directories into one place at ``/sys/subsystem``,
following the layout of the bus directories. All buses and
classes, including the converted block subsystem, will show up
there.
The devices belonging to a subsystem will create a symlink in the
"devices" directory at ``/sys/subsystem/<name>/devices``,
If ``/sys/subsystem`` exists, ``/sys/bus``, ``/sys/class`` and ``/sys/block``
can be ignored. If it does not exist, you always have to scan all three
places, as the kernel is free to move a subsystem from one place to
the other, as long as the devices are still reachable by the same
subsystem name.
Assuming ``/sys/class/<subsystem>`` and ``/sys/bus/<subsystem>``, or
``/sys/block`` and ``/sys/class/block`` are not interchangeable is a bug in
the application.
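
  A hedged sketch of such a lookup, using the block subsystem as an
  example and assuming nothing about which of the directories actually
  exists on a given kernel::

      if [ -d /sys/subsystem/block/devices ]; then
          ls /sys/subsystem/block/devices
      elif [ -d /sys/class/block ]; then
          ls /sys/class/block
      else
          ls /sys/block
      fi
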
- Block
The converted block subsystem at ``/sys/class/block`` or
``/sys/subsystem/block`` will contain the links for disks and partitions
at the same level, never in a hierarchy. Assuming the block subsystem to
contain only disks and not partition devices in the same flat list is
a bug in the application.
- "device"-link and <subsystem>:<kernel name>-links
Never depend on the "device"-link. The "device"-link is a workaround
for the old layout, where class devices are not created in
``/sys/devices/`` like the bus devices. If the link-resolving of a
device directory does not end in ``/sys/devices/``, you can use the
"device"-link to find the parent devices in ``/sys/devices/``, That is the
single valid use of the "device"-link; it must never appear in any
path as an element. Assuming the existence of the "device"-link for
a device in ``/sys/devices/`` is a bug in the application.
Accessing ``/sys/class/net/eth0/device`` is a bug in the application.
Never depend on the class-specific links back to the ``/sys/class``
directory. These links are also a workaround for the design mistake
that class devices are not created in ``/sys/devices``. If a device
directory does not contain directories for child devices, these links
may be used to find the child devices in ``/sys/class``. That is the single
valid use of these links; they must never appear in any path as an
element. Assuming the existence of these links for devices which are
real child device directories in the ``/sys/devices`` tree is a bug in
the application.
It is planned to remove all these links when all class device
directories live in ``/sys/devices``.
- Position of devices along device chain can change.
Never depend on a specific parent device position in the devpath,
or the chain of parent devices. The kernel is free to insert devices into
the chain. You must always request the parent device you are looking for
by its subsystem value. You need to walk up the chain until you find
the device that matches the expected subsystem. Depending on a specific
position of a parent device or exposing relative paths using ``../`` to
access the chain of parents is a bug in the application.
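
  For example, to find the PCI ancestor of a network device one might
  walk up the resolved devpath and test the ``subsystem`` link at each
  level (``eth0`` and ``pci`` are illustrative choices, not required
  names)::

      dev=$(readlink -f /sys/class/net/eth0)
      while [ "$dev" != "/sys/devices" ] && [ "$dev" != "/" ]; do
          # intermediate directories without a subsystem link are simply skipped
          if [ "$(basename "$(readlink "$dev/subsystem" 2>/dev/null)")" = "pci" ]; then
              echo "PCI parent: $dev"
              break
          fi
          dev=$(dirname "$dev")
      done
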
- When reading and writing sysfs device attribute files, avoid dependency
on specific error codes wherever possible. This minimizes coupling to
the error handling implementation within the kernel.
In general, failures to read or write sysfs device attributes shall
propagate errors wherever possible. Common errors include, but are not
limited to:
``-EIO``: The read or store operation is not supported, typically
returned by the sysfs system itself if the read or store pointer
is ``NULL``.
``-ENXIO``: The read or store operation failed.
Error codes will not be changed without good reason, and should a change
to error codes result in user-space breakage, it will be fixed, or the
offending change will be reverted.
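
  A sketch of such defensive handling in a script, where any failure is
  treated the same way rather than being matched against a particular
  error code (the attribute path is only an example)::

      if ! value=$(cat /sys/class/net/eth0/carrier 2>/dev/null); then
          echo "attribute not readable, falling back to a default" >&2
          value=0
      fi
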
Userspace applications can, however, expect the format and contents of
the attribute files to remain consistent in the absence of a version
attribute change in the context of a given attribute.
.. SPDX-License-Identifier: GPL-2.0
========================
ext4 General Information
========================
Ext4 is an advanced level of the ext3 filesystem which incorporates
scalability and reliability enhancements for supporting large filesystems
(64 bit) in keeping with increasing disk capacities and state-of-the-art
feature requirements.
Mailing list: linux-ext4@vger.kernel.org
Web site: http://ext4.wiki.kernel.org
Quick usage instructions
========================
Note: More extensive information for getting started with ext4 can be
found at the ext4 wiki site at the URL:
http://ext4.wiki.kernel.org/index.php/Ext4_Howto
- The latest version of e2fsprogs can be found at:
https://www.kernel.org/pub/linux/kernel/people/tytso/e2fsprogs/
or
http://sourceforge.net/project/showfiles.php?group_id=2406
or grab the latest git repository from:
https://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git
- Create a new filesystem using the ext4 filesystem type:
# mke2fs -t ext4 /dev/hda1
Or to configure an existing ext3 filesystem to support extents:
# tune2fs -O extents /dev/hda1
If the filesystem was created with 128 byte inodes, it can be
converted to use 256 byte for greater efficiency via:
# tune2fs -I 256 /dev/hda1
- Mounting:
# mount -t ext4 /dev/hda1 /wherever
- When comparing performance with other filesystems, it's always
important to try multiple workloads; very often a subtle change in a
workload parameter can completely change the ranking of which
filesystems do well compared to others. When comparing versus ext3,
note that ext4 enables write barriers by default, while ext3 does
not enable write barriers by default. So it is useful to explicitly
specify whether barriers are enabled or not via the '-o barrier=[0|1]'
mount option for both ext3 and ext4 filesystems
for a fair comparison. When tuning ext3 for best benchmark numbers,
it is often worthwhile to try changing the data journaling mode; '-o
data=writeback' can be faster for some workloads. (Note however that
running mounted with data=writeback can potentially leave stale data
exposed in recently written files in case of an unclean shutdown,
which could be a security exposure in some situations.) Configuring
the filesystem with a large journal can also be helpful for
metadata-intensive workloads.
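
  For instance, a comparison run might pin both the barrier setting and
  the data journaling mode explicitly on every filesystem under test
  (the device and mount point below are placeholders)::

    # mount -t ext3 -o barrier=1,data=ordered /dev/sdb1 /mnt/test
    # mount -t ext4 -o barrier=1,data=ordered /dev/sdb1 /mnt/test
    # mount -t ext3 -o barrier=1,data=writeback /dev/sdb1 /mnt/test
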
Features
========
Currently Available
-------------------
* ability to use filesystems > 16TB (e2fsprogs support not available yet)
* extent format reduces metadata overhead (RAM, IO for access, transactions)
* extent format more robust in face of on-disk corruption due to magics,
* internal redundancy in tree
* improved file allocation (multi-block alloc)
* lift 32000 subdirectory limit imposed by i_links_count[1]
* nsec timestamps for mtime, atime, ctime, create time
* inode version field on disk (NFSv4, Lustre)
* reduced e2fsck time via uninit_bg feature
* journal checksumming for robustness, performance
* persistent file preallocation (e.g for streaming media, databases)
* ability to pack bitmaps and inode tables into larger virtual groups via the
flex_bg feature
* large file support
* inode allocation using large virtual block groups via flex_bg
* delayed allocation
* large block (up to pagesize) support
* efficient new ordered mode in JBD2 and ext4 (avoid using buffer head to force
the ordering)
* Case-insensitive file name lookups
* file-based encryption support (fscrypt)
* file-based verity support (fsverity)
[1] Filesystems with a block size of 1k may see a limit imposed by the
directory hash tree having a maximum depth of two.
case-insensitive file name lookups
======================================================
The case-insensitive file name lookup feature is supported on a
per-directory basis, allowing the user to mix case-insensitive and
case-sensitive directories in the same filesystem. It is enabled by
flipping the +F inode attribute of an empty directory. The
case-insensitive string match operation is only defined when we know how
text is encoded in a byte sequence. For that reason, in order to enable
case-insensitive directories, the filesystem must have the
casefold feature, which stores the filesystem-wide encoding
model used. By default, the charset adopted is the latest version of
Unicode (12.1.0, by the time of this writing), encoded in the UTF-8
form. The comparison algorithm is implemented by normalizing the
strings to the Canonical decomposition form, as defined by Unicode,
followed by a byte per byte comparison.
The case-awareness is name-preserving on the disk, meaning that the file
name provided by userspace is a byte-per-byte match to what is actually
written in the disk. The Unicode normalization format used by the
kernel is thus an internal representation, and not exposed to the
userspace nor to the disk, with the important exception of disk hashes,
used on large case-insensitive directories with DX feature. On DX
directories, the hash must be calculated using the casefolded version of
the filename, meaning that the normalization format used actually has an
impact on where the directory entry is stored.
When we change from viewing filenames as opaque byte sequences to seeing
them as encoded strings we need to address what happens when a program
tries to create a file with an invalid name. The Unicode subsystem
within the kernel leaves the decision of what to do in this case to the
filesystem, which selects its preferred behavior by enabling/disabling
the strict mode. When Ext4 encounters one of those strings and the
filesystem did not require strict mode, it falls back to considering the
entire string as an opaque byte sequence, which still allows the user to
operate on that file, but the case-insensitive lookups won't work.
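
A minimal sketch of enabling this, assuming an e2fsprogs release that
understands the casefold feature and the +F attribute (the device name
is a placeholder)::

    # mkfs.ext4 -O casefold /dev/sdc1
    # mount /dev/sdc1 /mnt
    # mkdir /mnt/insensitive
    # chattr +F /mnt/insensitive
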
Options
=======
When mounting an ext4 filesystem, the following options are accepted:
(*) == default
ro
Mount filesystem read only. Note that ext4 will replay the journal (and
thus write to the partition) even when mounted "read only". The mount
options "ro,noload" can be used to prevent writes to the filesystem.
journal_checksum
Enable checksumming of the journal transactions. This will allow the
recovery code in e2fsck and the kernel to detect corruption in the
journal. It is a compatible change and will be ignored by older
kernels.
journal_async_commit
Commit block can be written to disk without waiting for descriptor
blocks. If enabled, older kernels cannot mount the device. This will
enable 'journal_checksum' internally.
journal_path=path, journal_dev=devnum
When the external journal device's major/minor numbers have changed,
these options allow the user to specify the new journal location. The
journal device is identified through either its new major/minor numbers
encoded in devnum, or via a path to the device.
norecovery, noload
Don't load the journal on mounting. Note that if the filesystem was
not unmounted cleanly, skipping the journal replay will lead to the
filesystem containing inconsistencies that can lead to any number of
problems.
data=journal
All data are committed into the journal prior to being written into the
main file system. Enabling this mode will disable delayed allocation
and O_DIRECT support.
data=ordered (*)
All data are forced directly out to the main file system prior to its
metadata being committed to the journal.
data=writeback
Data ordering is not preserved, data may be written into the main file
system after its metadata has been committed to the journal.
commit=nrsec (*)
This setting limits the maximum age of the running transaction to
'nrsec' seconds. The default value is 5 seconds. This means that if
you lose your power, you will lose as much as the latest 5 seconds of
metadata changes (your filesystem will not be damaged though, thanks
to the journaling). This default value (or any low value) will hurt
performance, but it's good for data-safety. Setting it to 0 will have
the same effect as leaving it at the default (5 seconds). Setting it
to very large values will improve performance. Note that due to
delayed allocation even older data can be lost on power failure since
writeback of those data begins only after time set in
/proc/sys/vm/dirty_expire_centisecs.
barrier=<0|1(*)>, barrier(*), nobarrier
This enables/disables the use of write barriers in the jbd code.
barrier=0 disables, barrier=1 enables. This also requires an IO stack
which can support barriers, and if jbd gets an error on a barrier
write, it will disable again with a warning. Write barriers enforce
proper on-disk ordering of journal commits, making volatile disk write
caches safe to use, at some performance penalty. If your disks are
battery-backed in one way or another, disabling barriers may safely
improve performance. The mount options "barrier" and "nobarrier" can
also be used to enable or disable barriers, for consistency with other
ext4 mount options.
inode_readahead_blks=n
This tuning parameter controls the maximum number of inode table blocks
that ext4's inode table readahead algorithm will pre-read into the
buffer cache. The default value is 32 blocks.
bsddf (*)
Make 'df' act like BSD.
minixdf
Make 'df' act like Minix.
debug
Extra debugging information is sent to syslog.
abort
Simulate the effects of calling ext4_abort() for debugging purposes.
This is normally used while remounting a filesystem which is already
mounted.
errors=remount-ro
Remount the filesystem read-only on an error.
errors=continue
Keep going on a filesystem error.
errors=panic
Panic and halt the machine if an error occurs. (These mount options
override the errors behavior specified in the superblock, which can be
configured using tune2fs)
data_err=ignore(*)
Just print an error message if an error occurs in a file data buffer in
ordered mode.
data_err=abort
Abort the journal if an error occurs in a file data buffer in ordered
mode.
grpid | bsdgroups
New objects have the group ID of their parent.
nogrpid (*) | sysvgroups
New objects have the group ID of their creator.
resgid=n
The group ID which may use the reserved blocks.
resuid=n
The user ID which may use the reserved blocks.
sb=
Use alternate superblock at this location.
quota, noquota, grpquota, usrquota
These options are ignored by the filesystem. They are used only by
quota tools to recognize volumes where quota should be turned on. See
documentation in the quota-tools package for more details
(http://sourceforge.net/projects/linuxquota).
jqfmt=<quota type>, usrjquota=<file>, grpjquota=<file>
These options tell filesystem details about quota so that quota
information can be properly updated during journal replay. They replace
the above quota options. See documentation in the quota-tools package
for more details (http://sourceforge.net/projects/linuxquota).
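
  A possible mount line using journaled quota (the quota file names
  follow the usual quota-tools conventions but are otherwise just an
  example, as is the device)::

    # mount -t ext4 -o usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv1 /dev/sdb1 /srv
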
stripe=n
Number of filesystem blocks that mballoc will try to use for allocation
size and alignment. For RAID5/6 systems this should be the number of
data disks * RAID chunk size in file system blocks.
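
  As a worked example (the geometry is assumed, not prescriptive): a
  RAID5 array with 4 data disks, a 64 KiB chunk size and a 4 KiB
  filesystem block size gives 4 * (64 / 4) = 64 blocks, so::

    # mount -t ext4 -o stripe=64 /dev/md0 /mnt
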
delalloc (*)
Defer block allocation until just before ext4 writes out the block(s)
in question. This allows ext4 to make better allocation decisions more
efficiently.
nodelalloc
Disable delayed allocation. Blocks are allocated when the data is
copied from userspace to the page cache, either via the write(2) system
call or when an mmap'ed page which was previously unallocated is
written for the first time.
max_batch_time=usec
Maximum amount of time ext4 should wait for additional filesystem
operations to be batched together with a synchronous write operation.
Since a synchronous write operation is going to force a commit and then
wait for the I/O to complete, it doesn't cost much, and can be a huge
throughput win, to wait for a small amount of time to see if any other
transactions can piggyback on the synchronous write. The algorithm
used is designed to automatically tune for the speed of the disk, by
measuring the amount of time (on average) that it takes to finish
committing a transaction. Call this time the "commit time". If the
time that the transaction has been running is less than the commit
time, ext4 will try sleeping for the commit time to see if other
operations will join the transaction. The commit time is capped by
the max_batch_time, which defaults to 15000us (15ms). This
optimization can be turned off entirely by setting max_batch_time to 0.
min_batch_time=usec
This parameter sets the commit time (as described above) to be at least
min_batch_time. It defaults to zero microseconds. Increasing this
parameter may improve the throughput of multi-threaded, synchronous
workloads on very fast disks, at the cost of increasing latency.
journal_ioprio=prio
The I/O priority (from 0 to 7, where 0 is the highest priority) which
should be used for I/O operations submitted by kjournald2 during a
commit operation. This defaults to 3, which is a slightly higher
priority than the default I/O priority.
auto_da_alloc(*), noauto_da_alloc
Many broken applications don't use fsync() when replacing existing
files via patterns such as fd = open("foo.new")/write(fd,..)/close(fd)/
rename("foo.new", "foo"), or worse yet, fd = open("foo",
O_TRUNC)/write(fd,..)/close(fd). If auto_da_alloc is enabled, ext4
will detect the replace-via-rename and replace-via-truncate patterns
and force that any delayed allocation blocks are allocated such that at
the next journal commit, in the default data=ordered mode, the data
blocks of the new file are forced to disk before the rename() operation
is committed. This provides roughly the same level of guarantees as
ext3, and avoids the "zero-length" problem that can happen when a
system crashes before the delayed allocation blocks are forced to disk.
noinit_itable
Do not initialize any uninitialized inode table blocks in the
background. This feature may be used by installation CD's so that the
install process can complete as quickly as possible; the inode table
initialization process would then be deferred until the next time the
file system is unmounted.
init_itable=n
The lazy itable init code will wait n times the number of milliseconds
it took to zero out the previous block group's inode table. This
minimizes the impact on the system performance while the file system's
inode table is being initialized.
discard, nodiscard(*)
Controls whether ext4 should issue discard/TRIM commands to the
underlying block device when blocks are freed. This is useful for SSD
devices and sparse/thinly-provisioned LUNs, but it is off by default
until sufficient testing has been done.
nouid32
Disables 32-bit UIDs and GIDs. This is for interoperability with
older kernels which only store and expect 16-bit values.
block_validity(*), noblock_validity
These options enable or disable the in-kernel facility for tracking
filesystem metadata blocks within internal data structures. This
allows the multi-block allocator and other routines to notice bugs or
corrupted allocation bitmaps which cause blocks to be allocated which
overlap with filesystem metadata blocks.
dioread_lock, dioread_nolock
Controls whether or not ext4 should use the DIO read locking. If the
dioread_nolock option is specified ext4 will allocate uninitialized
extent before buffer write and convert the extent to initialized after
IO completes. This approach allows ext4 code to avoid using inode
mutex, which improves scalability on high speed storages. However, this
does not work with data journaling and the dioread_nolock option will be
ignored with a kernel warning. Note that the dioread_nolock code path is
only used for extent-based files. Because of the restrictions this option
comprises, it is off by default (e.g. dioread_lock).
max_dir_size_kb=n
This limits the size of directories so that any attempt to expand them
beyond the specified limit in kilobytes will cause an ENOSPC error.
This is useful in memory constrained environments, where a very large
directory can cause severe performance problems or even provoke the Out
Of Memory killer. (For example, if there is only 512mb memory
available, a 176mb directory may seriously cramp the system's style.)
i_version
Enable 64-bit inode version support. This option is off by default.
dax
Use direct access (no page cache). See
Documentation/filesystems/dax.rst. Note that this option is
incompatible with data=journal.
inlinecrypt
When possible, encrypt/decrypt the contents of encrypted files using the
blk-crypto framework rather than filesystem-layer encryption. This
allows the use of inline encryption hardware. The on-disk format is
unaffected. For more details, see
Documentation/block/inline-encryption.rst.
Data Mode
=========
There are 3 different data modes:
* writeback mode
In data=writeback mode, ext4 does not journal data at all. This mode provides
a similar level of journaling as that of XFS, JFS, and ReiserFS in its default
mode - metadata journaling. A crash+recovery can cause incorrect data to
appear in files which were written shortly before the crash. This mode will
typically provide the best ext4 performance.
* ordered mode
In data=ordered mode, ext4 only officially journals metadata, but it logically
groups metadata information related to data changes with the data blocks into
a single unit called a transaction. When it's time to write the new metadata
out to disk, the associated data blocks are written first. In general, this
mode performs slightly slower than writeback but significantly faster than
journal mode.
* journal mode
data=journal mode provides full data and metadata journaling. All new data is
written to the journal first, and then to its final location. In the event of
a crash, the journal can be replayed, bringing both data and metadata into a
consistent state. This mode is the slowest except when data needs to be read
from and written to disk at the same time, where it outperforms all other
modes. Enabling this mode will disable delayed allocation and O_DIRECT
support.
/proc entries
=============
Information about mounted ext4 file systems can be found in
/proc/fs/ext4. Each mounted filesystem will have a directory in
/proc/fs/ext4 based on its device name (i.e., /proc/fs/ext4/hdc or
/proc/fs/ext4/dm-0). The files in each per-device directory are shown
in table below.
Files in /proc/fs/ext4/<devname>
mb_groups
details of multiblock allocator buddy cache of free blocks
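
For example, on a filesystem mounted from ``/dev/sda1`` (the device
name is only illustrative)::

    # cat /proc/fs/ext4/sda1/mb_groups
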
/sys entries
============
Information about mounted ext4 file systems can be found in
/sys/fs/ext4. Each mounted filesystem will have a directory in
/sys/fs/ext4 based on its device name (i.e., /sys/fs/ext4/hdc or
/sys/fs/ext4/dm-0). The files in each per-device directory are shown
in table below.
Files in /sys/fs/ext4/<devname>:
(see also Documentation/ABI/testing/sysfs-fs-ext4)
delayed_allocation_blocks
This file is read-only and shows the number of blocks that are dirty in
the page cache, but which do not have their location in the filesystem
allocated yet.
inode_goal
Tuning parameter which (if non-zero) controls the goal inode used by
the inode allocator in preference to all other allocation heuristics.
This is intended for debugging use only, and should be 0 on production
systems.
inode_readahead_blks
Tuning parameter which controls the maximum number of inode table
blocks that ext4's inode table readahead algorithm will pre-read into
the buffer cache.
lifetime_write_kbytes
This file is read-only and shows the number of kilobytes of data that
have been written to this filesystem since it was created.
max_writeback_mb_bump
The maximum number of megabytes the writeback code will try to write
out before moving on to another inode.
mb_group_prealloc
The multiblock allocator will round up allocation requests to a
multiple of this tuning parameter if the stripe size is not set in the
ext4 superblock.
mb_max_to_scan
The maximum number of extents the multiblock allocator will search to
find the best extent.
mb_min_to_scan
The minimum number of extents the multiblock allocator will search to
find the best extent.
mb_order2_req
Tuning parameter which controls the minimum size for requests (as a
power of 2) where the buddy cache is used.
mb_stats
Controls whether the multiblock allocator should collect statistics,
which are shown during the unmount. 1 means to collect statistics, 0
means not to collect statistics.
mb_stream_req
Files which have fewer blocks than this tunable parameter will have
their blocks allocated out of a block group specific preallocation
pool, so that small files are packed closely together. Each large file
will have its blocks allocated out of its own unique preallocation
pool.
session_write_kbytes
This file is read-only and shows the number of kilobytes of data that
have been written to this filesystem since it was mounted.
reserved_clusters
This is a read-write file and contains the number of reserved clusters
in the file system which will be used in specific situations to avoid
costly zeroout, unexpected ENOSPC, or possible data loss. The default is
2% or 4096 clusters, whichever is smaller, and this can be changed;
however, it can never exceed the number of clusters in the file system.
If there is not enough space for the reserved space when mounting, the
mount will _not_ fail.
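
As an illustration, reading one of the read-only counters and adjusting
two of the tunables might look like this (the device name is a
placeholder, and the chosen values are examples rather than
recommendations)::

    # cat /sys/fs/ext4/sda1/lifetime_write_kbytes
    # echo 64 > /sys/fs/ext4/sda1/inode_readahead_blks
    # echo 16 > /sys/fs/ext4/sda1/mb_group_prealloc
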
Ioctls
======
Ext4 implements various ioctls which can be used by applications to access
ext4-specific functionality. An incomplete list of these ioctls is shown in the
table below. This list includes truly ext4-specific ioctls (``EXT4_IOC_*``) as
well as ioctls that may have been ext4-specific originally but are now supported
by some other filesystem(s) too (``FS_IOC_*``).
Table of Ext4 ioctls
FS_IOC_GETFLAGS
Get additional attributes associated with inode. The ioctl argument is
an integer bitfield, with bit values described in ext4.h.
FS_IOC_SETFLAGS
Set additional attributes associated with inode. The ioctl argument is
an integer bitfield, with bit values described in ext4.h.
EXT4_IOC_GETVERSION, EXT4_IOC_GETVERSION_OLD
Get the inode i_generation number stored for each inode. The
i_generation number is normally changed only when a new inode is created
and it is particularly useful for network filesystems. The '_OLD'
version of this ioctl is an alias for FS_IOC_GETVERSION.
EXT4_IOC_SETVERSION, EXT4_IOC_SETVERSION_OLD
Set the inode i_generation number stored for each inode. The '_OLD'
version of this ioctl is an alias for FS_IOC_SETVERSION.
EXT4_IOC_GROUP_EXTEND
This ioctl has the same purpose as the resize mount option. It allows
resizing the filesystem to the end of the last existing block group;
further resizing has to be done with resize2fs, either online or
offline. The argument points to the unsigned long number representing
the filesystem's new block count.
EXT4_IOC_MOVE_EXT
Move the block extents from orig_fd (the one this ioctl is pointing to)
to the donor_fd (the one specified in move_extent structure passed as
an argument to this ioctl). Then, exchange inode metadata between
orig_fd and donor_fd. This is especially useful for online
defragmentation, because the allocator has the opportunity to allocate
moved blocks better, ideally into one contiguous extent.
EXT4_IOC_GROUP_ADD
Add a new group descriptor to an existing or new group descriptor
block. The new group descriptor is described by ext4_new_group_input
structure, which is passed as an argument to this ioctl. This is
especially useful in conjunction with EXT4_IOC_GROUP_EXTEND, which
allows online resize of the filesystem to the end of the last existing
block group. Those two ioctls combined are used in the userspace online
resize tool (e.g. resize2fs).
EXT4_IOC_MIGRATE
This ioctl operates on the filesystem itself. It converts (migrates)
ext3 indirect block mapped inode to ext4 extent mapped inode by walking
through indirect block mapping of the original inode and converting
contiguous block ranges into ext4 extents of the temporary inode. Then,
inodes are swapped. This ioctl might help when migrating from an ext3
to an ext4 filesystem; however, the suggestion is to create a fresh ext4
filesystem and copy the data from a backup. Note that the filesystem has to support
extents for this ioctl to work.
EXT4_IOC_ALLOC_DA_BLKS
Force all of the delay allocated blocks to be allocated to preserve
application-expected ext3 behaviour. Note that this will also start
triggering a write of the data blocks, but this behaviour may change in
the future as it is not necessary and has been done this way only for
sake of simplicity.
EXT4_IOC_RESIZE_FS
Resize the filesystem to a new size. The number of blocks of resized
filesystem is passed in via 64 bit integer argument. The kernel
allocates bitmaps and inode table, the userspace tool thus just passes
the new number of blocks.
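
  In practice this ioctl is normally issued by resize2fs when growing a
  mounted filesystem; a typical invocation (the device name is a
  placeholder, and the size defaults to the size of the partition) looks
  like::

    # resize2fs /dev/vdb1
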
EXT4_IOC_SWAP_BOOT
Swap i_blocks and associated attributes (like i_blocks, i_size,
i_flags, ...) from the specified inode with inode EXT4_BOOT_LOADER_INO
(#5). This is typically used to store a boot loader in a secure part of
the filesystem, where it can't be changed by a normal user by accident.
The data blocks of the previous boot loader will be associated with the
given inode.
References
==========
kernel source: <file:fs/ext4/>
<file:fs/jbd2/>
programs: http://e2fsprogs.sourceforge.net/
useful links: https://fedoraproject.org/wiki/ext3-devel
http://www.bullopensource.org/ext4/
http://ext4.wiki.kernel.org/index.php/Main_Page
https://fedoraproject.org/wiki/Features/Ext4
in each per device directory are shown in table below Files in sys fs ext4 devname see also Documentation ABI testing sysfs fs ext4 delayed allocation blocks This file is read only and shows the number of blocks that are dirty in the page cache but which do not have their location in the filesystem allocated yet inode goal Tuning parameter which if non zero controls the goal inode used by the inode allocator in preference to all other allocation heuristics This is intended for debugging use only and should be 0 on production systems inode readahead blks Tuning parameter which controls the maximum number of inode table blocks that ext4 s inode table readahead algorithm will pre read into the buffer cache lifetime write kbytes This file is read only and shows the number of kilobytes of data that have been written to this filesystem since it was created max writeback mb bump The maximum number of megabytes the writeback code will try to write out before move on to another inode mb group prealloc The multiblock allocator will round up allocation requests to a multiple of this tuning parameter if the stripe size is not set in the ext4 superblock mb max to scan The maximum number of extents the multiblock allocator will search to find the best extent mb min to scan The minimum number of extents the multiblock allocator will search to find the best extent mb order2 req Tuning parameter which controls the minimum size for requests as a power of 2 where the buddy cache is used mb stats Controls whether the multiblock allocator should collect statistics which are shown during the unmount 1 means to collect statistics 0 means not to collect statistics mb stream req Files which have fewer blocks than this tunable parameter will have their blocks allocated out of a block group specific preallocation pool so that small files are packed closely together Each large file will have its blocks allocated out of its own unique preallocation pool session write kbytes This file is read only and shows the number of kilobytes of data that have been written to this filesystem since it was mounted reserved clusters This is RW file and contains number of reserved clusters in the file system which will be used in the specific situations to avoid costly zeroout unexpected ENOSPC or possible data loss The default is 2 or 4096 clusters whichever is smaller and this can be changed however it can never exceed number of clusters in the file system If there is not enough space for the reserved space when mounting the file mount will not fail Ioctls Ext4 implements various ioctls which can be used by applications to access ext4 specific functionality An incomplete list of these ioctls is shown in the table below This list includes truly ext4 specific ioctls EXT4 IOC as well as ioctls that may have been ext4 specific originally but are now supported by some other filesystem s too FS IOC Table of Ext4 ioctls FS IOC GETFLAGS Get additional attributes associated with inode The ioctl argument is an integer bitfield with bit values described in ext4 h FS IOC SETFLAGS Set additional attributes associated with inode The ioctl argument is an integer bitfield with bit values described in ext4 h EXT4 IOC GETVERSION EXT4 IOC GETVERSION OLD Get the inode i generation number stored for each inode The i generation number is normally changed only when new inode is created and it is particularly useful for network filesystems The OLD version of this ioctl is an alias for FS IOC GETVERSION EXT4 IOC SETVERSION EXT4 IOC SETVERSION OLD Set the 
inode i generation number stored for each inode The OLD version of this ioctl is an alias for FS IOC SETVERSION EXT4 IOC GROUP EXTEND This ioctl has the same purpose as the resize mount option It allows to resize filesystem to the end of the last existing block group further resize has to be done with resize2fs either online or offline The argument points to the unsigned logn number representing the filesystem new block count EXT4 IOC MOVE EXT Move the block extents from orig fd the one this ioctl is pointing to to the donor fd the one specified in move extent structure passed as an argument to this ioctl Then exchange inode metadata between orig fd and donor fd This is especially useful for online defragmentation because the allocator has the opportunity to allocate moved blocks better ideally into one contiguous extent EXT4 IOC GROUP ADD Add a new group descriptor to an existing or new group descriptor block The new group descriptor is described by ext4 new group input structure which is passed as an argument to this ioctl This is especially useful in conjunction with EXT4 IOC GROUP EXTEND which allows online resize of the filesystem to the end of the last existing block group Those two ioctls combined is used in userspace online resize tool e g resize2fs EXT4 IOC MIGRATE This ioctl operates on the filesystem itself It converts migrates ext3 indirect block mapped inode to ext4 extent mapped inode by walking through indirect block mapping of the original inode and converting contiguous block ranges into ext4 extents of the temporary inode Then inodes are swapped This ioctl might help when migrating from ext3 to ext4 filesystem however suggestion is to create fresh ext4 filesystem and copy data from the backup Note that filesystem has to support extents for this ioctl to work EXT4 IOC ALLOC DA BLKS Force all of the delay allocated blocks to be allocated to preserve application expected ext3 behaviour Note that this will also start triggering a write of the data blocks but this behaviour may change in the future as it is not necessary and has been done this way only for sake of simplicity EXT4 IOC RESIZE FS Resize the filesystem to a new size The number of blocks of resized filesystem is passed in via 64 bit integer argument The kernel allocates bitmaps and inode table the userspace tool thus just passes the new number of blocks EXT4 IOC SWAP BOOT Swap i blocks and associated attributes like i blocks i size i flags from the specified inode with inode EXT4 BOOT LOADER INO 5 This is typically used to store a boot loader in a secure part of the filesystem where it can t be changed by a normal user by accident The data blocks of the previous boot loader will be associated with the given inode References kernel source file fs ext4 file fs jbd2 programs http e2fsprogs sourceforge net useful links https fedoraproject org wiki ext3 devel http www bullopensource org ext4 http ext4 wiki kernel org index php Main Page https fedoraproject org wiki Features Ext4 |
.. _admin_devices:
Linux allocated devices (4.x+ version)
======================================
This list is the Linux Device List, the official registry of allocated
device numbers and ``/dev`` directory nodes for the Linux operating
system.
The version of this document at lanana.org is no longer maintained. This
version in the mainline Linux kernel is the master document. Updates
shall be sent as patches to the kernel maintainers (see the
:ref:`Documentation/process/submitting-patches.rst <submittingpatches>` document).
Specifically explore the sections titled "CHAR and MISC DRIVERS", and
"BLOCK LAYER" in the MAINTAINERS file to find the right maintainers
to involve for character and block devices.
This document is included by reference into the Filesystem Hierarchy
Standard (FHS). The FHS is available from https://www.pathname.com/fhs/.
Allocations marked (68k/Amiga) apply to Linux/68k on the Amiga
platform only. Allocations marked (68k/Atari) apply to Linux/68k on
the Atari platform only.
This document is in the public domain. The authors request, however,
that semantically altered versions are not distributed without
permission of the authors, assuming the authors can be contacted without
an unreasonable effort.
.. attention::
DEVICE DRIVERS AUTHORS PLEASE READ THIS
Linux now has extensive support for dynamic allocation of device numbering
and can use ``sysfs`` and ``udev`` (``systemd``) to handle the naming needs.
There are still some exceptions in the serial and boot device area. Before
asking for a device number make sure you actually need one.
To have a major number allocated, or a minor number in situations
where that applies (e.g. busmice), please submit a patch and send it to
the authors as indicated above.
Keep the description of the device *in the same format
as this list*. The reason for this is that it is the only way we have
found to ensure we have all the requisite information to publish your
device and avoid conflicts.
Finally, sometimes we have to play "namespace police." Please don't be
offended. We often get submissions for ``/dev`` names that would be bound
to cause conflicts down the road. We are trying to avoid getting in a
situation where we would have to suffer an incompatible forward
change. Therefore, please consult with us **before** you make your
device names and numbers in any way public, at least to the point
where it would be at all difficult to get them changed.
Your cooperation is appreciated.
.. include:: devices.txt
:literal:
Additional ``/dev/`` directory entries
--------------------------------------
This section details additional entries that should or may exist in
the /dev directory. It is preferred that symbolic links use the same
form (absolute or relative) as is indicated here. Links are
classified as "hard" or "symbolic" depending on the preferred type of
link; if possible, the indicated type of link should be used.
Compulsory links
++++++++++++++++
These links should exist on all systems:
=============== =============== =============== ===============================
/dev/fd /proc/self/fd symbolic File descriptors
/dev/stdin fd/0 symbolic stdin file descriptor
/dev/stdout fd/1 symbolic stdout file descriptor
/dev/stderr fd/2 symbolic stderr file descriptor
/dev/nfsd socksys symbolic Required by iBCS-2
/dev/X0R null symbolic Required by iBCS-2
=============== =============== =============== ===============================
Note: ``/dev/X0R`` is <letter X>-<digit 0>-<letter R>.
Recommended links
+++++++++++++++++
It is recommended that these links exist on all systems:
=============== =============== =============== ===============================
/dev/core /proc/kcore symbolic Backward compatibility
/dev/ramdisk ram0 symbolic Backward compatibility
/dev/ftape qft0 symbolic Backward compatibility
/dev/bttv0 video0 symbolic Backward compatibility
/dev/radio radio0 symbolic Backward compatibility
/dev/i2o* /dev/i2o/* symbolic Backward compatibility
/dev/scd? sr? hard Alternate SCSI CD-ROM name
=============== =============== =============== ===============================
Locally defined links
+++++++++++++++++++++
The following links may be established locally to conform to the
configuration of the system. This is merely a tabulation of existing
practice, and does not constitute a recommendation. However, if they
exist, they should have the following uses.
=============== =============== =============== ===============================
/dev/mouse mouse port symbolic Current mouse device
/dev/tape tape device symbolic Current tape device
/dev/cdrom CD-ROM device symbolic Current CD-ROM device
/dev/cdwriter CD-writer symbolic Current CD-writer device
/dev/scanner scanner symbolic Current scanner device
/dev/modem modem port symbolic Current dialout device
/dev/root root device symbolic Current root filesystem
/dev/swap swap device symbolic Current swap device
=============== =============== =============== ===============================
``/dev/modem`` should not be used for a modem which supports dialin as
well as dialout, as it tends to cause lock file problems. If it
exists, ``/dev/modem`` should point to the appropriate primary TTY device
(the use of the alternate callout devices is deprecated).
For SCSI devices, ``/dev/tape`` and ``/dev/cdrom`` should point to the
*cooked* devices (``/dev/st*`` and ``/dev/sr*``, respectively), whereas
``/dev/cdwriter`` and ``/dev/scanner`` should point to the appropriate generic
SCSI devices (``/dev/sg*``).
``/dev/mouse`` may point to a primary serial TTY device, a hardware mouse
device, or a socket for a mouse driver program (e.g. ``/dev/gpmdata``).
Sockets and pipes
+++++++++++++++++
Non-transient sockets and named pipes may exist in /dev. Common entries are:
=============== =============== ===============================================
/dev/printer socket lpd local socket
/dev/log socket syslog local socket
/dev/gpmdata socket gpm mouse multiplexer
=============== =============== ===============================================
Mount points
++++++++++++
The following names are reserved for mounting special filesystems
under /dev. These special filesystems provide kernel interfaces that
cannot be provided with standard device nodes.
=============== =============== ===============================================
/dev/pts devpts PTY slave filesystem
/dev/shm tmpfs POSIX shared memory maintenance access
=============== =============== ===============================================
Terminal devices
----------------
Terminal, or TTY devices are a special class of character devices. A
terminal device is any device that could act as a controlling terminal
for a session; this includes virtual consoles, serial ports, and
pseudoterminals (PTYs).
All terminal devices share a common set of capabilities known as line
disciplines; these include the common terminal line discipline as well
as SLIP and PPP modes.
All terminal devices are named similarly; this section explains the
naming and use of the various types of TTYs. Note that the naming
conventions include several historical warts; some of these are
Linux-specific, some were inherited from other systems, and some
reflect Linux outgrowing a borrowed convention.
A hash mark (``#``) in a device name is used here to indicate a decimal
number without leading zeroes.
Virtual consoles and the console device
+++++++++++++++++++++++++++++++++++++++
Virtual consoles are full-screen terminal displays on the system video
monitor. Virtual consoles are named ``/dev/tty#``, with numbering
starting at ``/dev/tty1``; ``/dev/tty0`` is the current virtual console.
``/dev/tty0`` is the device that should be used to access the system video
card on those architectures for which the frame buffer devices
(``/dev/fb*``) are not applicable. Do not use ``/dev/console``
for this purpose.
The console device, ``/dev/console``, is the device to which system
messages should be sent, and on which logins should be permitted in
single-user mode. Starting with Linux 2.1.71, ``/dev/console`` is managed
by the kernel; for previous versions it should be a symbolic link to
either ``/dev/tty0``, a specific virtual console such as ``/dev/tty1``, or to
a serial port primary (``tty*``, not ``cu*``) device, depending on the
configuration of the system.
Serial ports
++++++++++++
Serial ports are RS-232 serial ports and any device which simulates
one, either in hardware (such as internal modems) or in software (such
as the ISDN driver). Under Linux, each serial port has two device
names, the primary or callin device and the alternate or callout one.
Each kind of device is indicated by a different letter. For any
letter X, the names of the devices are ``/dev/ttyX#`` and ``/dev/cux#``,
respectively; for historical reasons, ``/dev/ttyS#`` and ``/dev/ttyC#``
correspond to ``/dev/cua#`` and ``/dev/cub#``. In the future, it should be
expected that multiple letters will be used; all letters will be upper
case for the "tty" device (e.g. ``/dev/ttyDP#``) and lower case for the
"cu" device (e.g. ``/dev/cudp#``).
The names ``/dev/ttyQ#`` and ``/dev/cuq#`` are reserved for local use.
The alternate devices provide for kernel-based exclusion and somewhat
different defaults than the primary devices. Their main purpose is to
allow the use of serial ports with programs with no inherent or broken
support for serial ports. Their use is deprecated, and they may be
removed from a future version of Linux.
Arbitration of serial ports is provided by the use of lock files with
the names ``/var/lock/LCK..ttyX#``. The contents of the lock file should
be the PID of the locking process as an ASCII number.
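As an illustration only (this sketch is not part of the registry, and the
``ttyS2`` name is just an example), taking such a lock from C amounts to
creating the file exclusively and writing the PID into it::

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Try to lock ttyS2; returns 0 on success, -1 if someone else holds it. */
    static int lock_ttyS2(void)
    {
            char buf[16];
            /* O_EXCL makes creation fail if the lock file already exists. */
            int fd = open("/var/lock/LCK..ttyS2",
                          O_WRONLY | O_CREAT | O_EXCL, 0644);

            if (fd < 0)
                    return -1;
            /* The contents are the PID of the locking process as ASCII. */
            snprintf(buf, sizeof(buf), "%d\n", (int)getpid());
            write(fd, buf, strlen(buf));
            close(fd);
            return 0;
    }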
It is common practice to install links such as ``/dev/modem``
which point to serial ports. In order to ensure proper locking in the
presence of these links, it is recommended that software chase
symlinks and lock all possible names; additionally, it is recommended
that a lock file be installed with the corresponding alternate
device. In order to avoid deadlocks, it is recommended that the locks
are acquired in the following order, and released in the reverse:
1. The symbolic link name, if any (``/var/lock/LCK..modem``)
2. The "tty" name (``/var/lock/LCK..ttyS2``)
3. The alternate device name (``/var/lock/LCK..cua2``)
In the case of nested symbolic links, the lock files should be
installed in the order the symlinks are resolved.
Under no circumstances should an application hold a lock while waiting
for another to be released. In addition, applications which attempt
to create lock files for the corresponding alternate device names
should take into account the possibility of being used on a non-serial
port TTY, for which no alternate device would exist.
Pseudoterminals (PTYs)
++++++++++++++++++++++
Pseudoterminals, or PTYs, are used to create login sessions or provide
other capabilities requiring a TTY line discipline (including SLIP or
PPP capability) to arbitrary data-generation processes. Each PTY has
a master side, named ``/dev/pty[p-za-e][0-9a-f]``, and a slave side, named
``/dev/tty[p-za-e][0-9a-f]``. The kernel arbitrates the use of PTYs by
allowing each master side to be opened only once.
Once the master side has been opened, the corresponding slave device
can be used in the same manner as any TTY device. The master and
slave devices are connected by the kernel, generating the equivalent
of a bidirectional pipe with TTY capabilities.
Recent versions of the Linux kernels and GNU libc contain support for
the System V/Unix98 naming scheme for PTYs, which assigns a common
device, ``/dev/ptmx``, to all the masters (opening it will automatically
give you a previously unassigned PTY) and a subdirectory, ``/dev/pts``,
for the slaves; the slaves are named with decimal integers (``/dev/pts/#``
in our notation). This removes the problem of exhausting the
namespace and enables the kernel to automatically create the device
nodes for the slaves on demand using the "devpts" filesystem.
=======================================
Real Time Clock (RTC) Drivers for Linux
=======================================
When Linux developers talk about a "Real Time Clock", they usually mean
something that tracks wall clock time and is battery backed so that it
works even with system power off. Such clocks will normally not track
the local time zone or daylight savings time -- unless they dual boot
with MS-Windows -- but will instead be set to Coordinated Universal Time
(UTC, formerly "Greenwich Mean Time").
The newest non-PC hardware tends to just count seconds, like the time(2)
system call reports, but RTCs also very commonly represent time using
the Gregorian calendar and 24 hour time, as reported by gmtime(3).
Linux has two largely-compatible userspace RTC API families you may
need to know about:
* /dev/rtc ... is the RTC provided by PC compatible systems,
so it's not very portable to non-x86 systems.
* /dev/rtc0, /dev/rtc1 ... are part of a framework that's
supported by a wide variety of RTC chips on all systems.
Programmers need to understand that the PC/AT functionality is not
always available, and some systems can do much more. That is, the
RTCs use the same API to make requests in both RTC frameworks (using
different filenames of course), but the hardware may not offer the
same functionality. For example, not every RTC is hooked up to an
IRQ, so they can't all issue alarms; and where standard PC RTCs can
only issue an alarm up to 24 hours in the future, other hardware may
be able to schedule one any time in the upcoming century.
Old PC/AT-Compatible driver: /dev/rtc
--------------------------------------
All PCs (even Alpha machines) have a Real Time Clock built into them.
Usually they are built into the chipset of the computer, but some may
actually have a Motorola MC146818 (or clone) on the board. This is the
clock that keeps the date and time while your computer is turned off.
ACPI has standardized that MC146818 functionality, and extended it in
a few ways (enabling longer alarm periods, and wake-from-hibernate).
That functionality is NOT exposed in the old driver.
However it can also be used to generate signals from a slow 2Hz to a
relatively fast 8192Hz, in increments of powers of two. These signals
are reported by interrupt number 8. (Oh! So *that* is what IRQ 8 is
for...) It can also function as a 24hr alarm, raising IRQ 8 when the
alarm goes off. The alarm can also be programmed to only check any
subset of the three programmable values, meaning that it could be set to
ring on the 30th second of the 30th minute of every hour, for example.
The clock can also be set to generate an interrupt upon every clock
update, thus generating a 1Hz signal.
The interrupts are reported via /dev/rtc (major 10, minor 135, read only
character device) in the form of an unsigned long. The low byte contains
the type of interrupt (update-done, alarm-rang, or periodic) that was
raised, and the remaining bytes contain the number of interrupts since
the last read. Status information is reported through the pseudo-file
/proc/driver/rtc if the /proc filesystem was enabled. The driver has
built in locking so that only one process is allowed to have the /dev/rtc
interface open at a time.
A user process can monitor these interrupts by doing a read(2) or a
select(2) on /dev/rtc -- either will block/stop the user process until
the next interrupt is received. This is useful for things like
reasonably high frequency data acquisition where one doesn't want to
burn up 100% CPU by polling gettimeofday etc. etc.
At high frequencies, or under high loads, the user process should check
the number of interrupts received since the last read to determine if
there has been any interrupt "pileup" so to speak. Just for reference, a
typical 486-33 running a tight read loop on /dev/rtc will start to suffer
occasional interrupt pileup (i.e. > 1 IRQ event since last read) for
frequencies above 1024Hz. So you really should check the high bytes
of the value you read, especially at frequencies above that of the
normal timer interrupt, which is 100Hz.
Programming and/or enabling interrupt frequencies greater than 64Hz is
only allowed by root. This is perhaps a bit conservative, but we don't want
an evil user generating lots of IRQs on a slow 386sx-16, where it might have
a negative impact on performance. This 64Hz limit can be changed by writing
a different value to /proc/sys/dev/rtc/max-user-freq. Note that the
interrupt handler is only a few lines of code to minimize any possibility
of this effect.
Also, if the kernel time is synchronized with an external source, the
kernel will write the time back to the CMOS clock every 11 minutes. In
the process of doing this, the kernel briefly turns off RTC periodic
interrupts, so be aware of this if you are doing serious work. If you
don't synchronize the kernel time with an external source (via ntp or
whatever) then the kernel will keep its hands off the RTC, allowing you
exclusive access to the device for your applications.
The alarm and/or interrupt frequency are programmed into the RTC via
various ioctl(2) calls as listed in ./include/linux/rtc.h
Rather than write 50 pages describing the ioctl() and so on, it is
perhaps more useful to include a small test program that demonstrates
how to use them, and demonstrates the features of the driver. This is
probably a lot more useful to people interested in writing applications
that will be using this driver. See tools/testing/selftests/rtc/rtctest.c, referenced at the end of this document, for such a program.
(The original /dev/rtc driver was written by Paul Gortmaker.)
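As a quick orientation (a minimal sketch only, not the full test program),
reading the current time and waiting for one update interrupt with the
standard ``RTC_RD_TIME``, ``RTC_UIE_ON`` and ``RTC_UIE_OFF`` ioctls from
``<linux/rtc.h>`` looks roughly like this (error handling abbreviated)::

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/rtc.h>

    int main(void)
    {
            struct rtc_time tm;
            unsigned long data;
            int fd = open("/dev/rtc", O_RDONLY);    /* or /dev/rtc0 */

            if (fd < 0) {
                    perror("/dev/rtc");
                    return 1;
            }
            /* Read the current hardware clock time. */
            ioctl(fd, RTC_RD_TIME, &tm);
            printf("RTC time: %02d:%02d:%02d\n", tm.tm_hour, tm.tm_min, tm.tm_sec);

            /* Enable update interrupts (1Hz) and block until the next one. */
            ioctl(fd, RTC_UIE_ON, 0);
            read(fd, &data, sizeof(data));
            /* Low byte: interrupt type; remaining bytes: count since last read. */
            printf("interrupt type 0x%02lx, count %lu\n", data & 0xff, data >> 8);
            ioctl(fd, RTC_UIE_OFF, 0);

            close(fd);
            return 0;
    }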
New portable "RTC Class" drivers: /dev/rtcN
--------------------------------------------
Because Linux supports many non-ACPI and non-PC platforms, some of which
have more than one RTC style clock, it needed a more portable solution
than expecting a single battery-backed MC146818 clone on every system.
Accordingly, a new "RTC Class" framework has been defined. It offers
three different userspace interfaces:
* /dev/rtcN ... much the same as the older /dev/rtc interface
* /sys/class/rtc/rtcN ... sysfs attributes support readonly
access to some RTC attributes.
* /proc/driver/rtc ... the system clock RTC may expose itself
using a procfs interface. If there is no RTC for the system clock,
rtc0 is used by default. More information is (currently) shown
here than through sysfs.
The RTC Class framework supports a wide variety of RTCs, ranging from those
integrated into embeddable system-on-chip (SOC) processors to discrete chips
using I2C, SPI, or some other bus to communicate with the host CPU. There's
even support for PC-style RTCs ... including the features exposed on newer PCs
through ACPI.
The new framework also removes the "one RTC per system" restriction. For
example, maybe the low-power battery-backed RTC is a discrete I2C chip, but
a high functionality RTC is integrated into the SOC. That system might read
the system clock from the discrete RTC, but use the integrated one for all
other tasks, because of its greater functionality.
Check out tools/testing/selftests/rtc/rtctest.c for an example usage of the
ioctl interface.
=======================
RapidIO Subsystem Guide
=======================
:Author: Matt Porter
Introduction
============
RapidIO is a high speed switched fabric interconnect with features aimed
at the embedded market. RapidIO provides support for memory-mapped I/O
as well as message-based transactions over the switched fabric network.
RapidIO has a standardized discovery mechanism not unlike the PCI bus
standard that allows simple detection of devices in a network.
This documentation is provided for developers intending to support
RapidIO on new architectures, write new drivers, or to understand the
subsystem internals.
Known Bugs and Limitations
==========================
Bugs
----
None. ;)
Limitations
-----------
1. Access/management of RapidIO memory regions is not supported
2. Multiple host enumeration is not supported
RapidIO driver interface
========================
Drivers are provided a set of calls in order to interface with the
subsystem to gather info on devices, request/map memory region
resources, and manage mailboxes/doorbells.
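The authoritative prototypes are in the kernel-doc sections below (and in
``include/linux/rio_drv.h``). Purely as a hedged sketch of the overall shape,
a minimal driver registration looks roughly like this; names such as
``example_probe`` are illustrative, and the exact structure fields should be
checked against the headers::

    #include <linux/module.h>
    #include <linux/rio.h>
    #include <linux/rio_drv.h>

    /* Match any RapidIO device; a real driver would list specific IDs. */
    static struct rio_device_id example_id_table[] = {
            { RIO_DEVICE(RIO_ANY_ID, RIO_ANY_ID) },
            { 0, }
    };

    static int example_probe(struct rio_dev *rdev, const struct rio_device_id *id)
    {
            dev_info(&rdev->dev, "example RapidIO device found\n");
            return 0;
    }

    static void example_remove(struct rio_dev *rdev)
    {
    }

    static struct rio_driver example_driver = {
            .name     = "rio_example",
            .id_table = example_id_table,
            .probe    = example_probe,
            .remove   = example_remove,
    };

    static int __init example_init(void)
    {
            return rio_register_driver(&example_driver);
    }

    static void __exit example_exit(void)
    {
            rio_unregister_driver(&example_driver);
    }

    module_init(example_init);
    module_exit(example_exit);
    MODULE_LICENSE("GPL");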
Functions
---------
.. kernel-doc:: include/linux/rio_drv.h
:internal:
.. kernel-doc:: drivers/rapidio/rio-driver.c
:export:
.. kernel-doc:: drivers/rapidio/rio.c
:export:
Internals
=========
This chapter contains the autogenerated documentation of the RapidIO
subsystem.
Structures
----------
.. kernel-doc:: include/linux/rio.h
:internal:
Enumeration and Discovery
-------------------------
.. kernel-doc:: drivers/rapidio/rio-scan.c
:internal:
Driver functionality
--------------------
.. kernel-doc:: drivers/rapidio/rio.c
:internal:
.. kernel-doc:: drivers/rapidio/rio-access.c
:internal:
Device model support
--------------------
.. kernel-doc:: drivers/rapidio/rio-driver.c
:internal:
PPC32 support
-------------
.. kernel-doc:: arch/powerpc/sysdev/fsl_rio.c
:internal:
Credits
=======
The following people have contributed to the RapidIO subsystem directly
or indirectly:
1. Matt Porter\ [email protected]
2. Randy Vinson\ [email protected]
3. Dan Malek\ [email protected]
The following people have contributed to this document:
1. Matt Porter\ [email protected]
.. _perf_security:
Perf events and tool security
=============================
Overview
--------
Usage of Performance Counters for Linux (perf_events) [1]_ , [2]_ , [3]_
can impose a considerable risk of leaking sensitive data accessed by
monitored processes. The data leakage is possible both in scenarios of
direct usage of perf_events system call API [2]_ and over data files
generated by Perf tool user mode utility (Perf) [3]_ , [4]_ . The risk
depends on the nature of data that perf_events performance monitoring
units (PMU) [2]_ and Perf collect and expose for performance analysis.
Collected system and performance data may be split into several
categories:
1. System hardware and software configuration data, for example: a CPU
model and its cache configuration, an amount of available memory and
its topology, used kernel and Perf versions, performance monitoring
setup including experiment time, events configuration, Perf command
line parameters, etc.
2. User and kernel module paths and their load addresses with sizes,
process and thread names with their PIDs and TIDs, timestamps for
captured hardware and software events.
3. Content of kernel software counters (e.g., for context switches, page
faults, CPU migrations), architectural hardware performance counters
(PMC) [8]_ and machine specific registers (MSR) [9]_ that provide
execution metrics for various monitored parts of the system (e.g.,
memory controller (IMC), interconnect (QPI/UPI) or peripheral (PCIe)
uncore counters) without direct attribution to any execution context
state.
4. Content of architectural execution context registers (e.g., RIP, RSP,
RBP on x86_64), process user and kernel space memory addresses and
data, content of various architectural MSRs that capture data from
this category.
Data that belong to the fourth category can potentially contain
sensitive process data. If PMUs in some monitoring modes capture values
of execution context registers or data from process memory then access
to such monitoring modes needs to be ordered and secured properly.
So, perf_events performance monitoring and observability operations are
the subject for security access control management [5]_ .
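For orientation only (this sketch is not part of the original document),
direct usage of the perf_events system call API mentioned above is typically
a thin wrapper around ``syscall(2)``, for example counting CPU cycles of the
calling thread::

    #include <linux/perf_event.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
            struct perf_event_attr attr;
            long long count;
            int fd;

            memset(&attr, 0, sizeof(attr));
            attr.type = PERF_TYPE_HARDWARE;
            attr.size = sizeof(attr);
            attr.config = PERF_COUNT_HW_CPU_CYCLES;
            attr.disabled = 1;
            attr.exclude_kernel = 1;        /* restrict to user space of self */

            /* There is no glibc wrapper; perf_event_open() is reached via syscall(2). */
            fd = syscall(SYS_perf_event_open, &attr, 0 /* this thread */,
                         -1 /* any CPU */, -1 /* no group */, 0 /* flags */);
            if (fd < 0) {
                    perror("perf_event_open");
                    return 1;
            }

            ioctl(fd, PERF_EVENT_IOC_RESET, 0);
            ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
            /* ... code being measured ... */
            ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

            read(fd, &count, sizeof(count));
            printf("cycles: %lld\n", count);
            close(fd);
            return 0;
    }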
perf_events access control
-------------------------------
To perform security checks, the Linux implementation splits processes
into two categories [6]_ : a) privileged processes (whose effective user
ID is 0, referred to as superuser or root), and b) unprivileged
processes (whose effective UID is nonzero). Privileged processes bypass
all kernel security permission checks so perf_events performance
monitoring is fully available to privileged processes without access,
scope and resource restrictions.
Unprivileged processes are subject to a full security permission check
based on the process's credentials [5]_ (usually: effective UID,
effective GID, and supplementary group list).
Linux divides the privileges traditionally associated with superuser
into distinct units, known as capabilities [6]_ , which can be
independently enabled and disabled on per-thread basis for processes and
files of unprivileged users.
Unprivileged processes with enabled CAP_PERFMON capability are treated
as privileged processes with respect to perf_events performance
monitoring and observability operations, thus, bypass *scope* permissions
checks in the kernel. CAP_PERFMON implements the principle of least
privilege [13]_ (POSIX 1003.1e: 2.2.2.39) for performance monitoring and
observability operations in the kernel and provides a secure approach to
performance monitoring and observability in the system.
For backward compatibility reasons the access to perf_events monitoring and
observability operations is also open for CAP_SYS_ADMIN privileged
processes but CAP_SYS_ADMIN usage for secure monitoring and observability
use cases is discouraged with respect to the CAP_PERFMON capability.
If system audit records [14]_ for a process using perf_events system call
API contain denial records of acquiring both CAP_PERFMON and CAP_SYS_ADMIN
capabilities then providing the process with CAP_PERFMON capability singly
is recommended as the preferred secure approach to resolve double access
denial logging related to usage of performance monitoring and observability.
Prior to Linux v5.9, unprivileged processes using the perf_events system call
are also subject to a PTRACE_MODE_READ_REALCREDS ptrace access mode check
[7]_ , whose outcome determines whether monitoring is permitted.
So unprivileged processes provided with the CAP_SYS_PTRACE capability are
effectively permitted to pass the check. Starting from Linux v5.9 the
CAP_SYS_PTRACE capability is no longer required; providing a process with
CAP_PERFMON alone is enough for it to perform performance monitoring and
observability operations.
Other capabilities being granted to unprivileged processes can
effectively enable capturing of additional data required for later
performance analysis of monitored processes or a system. For example,
CAP_SYSLOG capability permits reading kernel space memory addresses from
/proc/kallsyms file.
Privileged Perf users groups
---------------------------------
Mechanisms of capabilities, privileged capability-dumb files [6]_,
file system ACLs [10]_ and sudo [15]_ utility can be used to create
dedicated groups of privileged Perf users who are permitted to execute
performance monitoring and observability without limits. The following
steps can be taken to create such groups of privileged Perf users.
1. Create perf_users group of privileged Perf users, assign perf_users
group to Perf tool executable and limit access to the executable for
other users in the system who are not in the perf_users group:
::
# groupadd perf_users
# ls -alhF
-rwxr-xr-x 2 root root 11M Oct 19 15:12 perf
# chgrp perf_users perf
# ls -alhF
-rwxr-xr-x 2 root perf_users 11M Oct 19 15:12 perf
# chmod o-rwx perf
# ls -alhF
-rwxr-x--- 2 root perf_users 11M Oct 19 15:12 perf
2. Assign the required capabilities to the Perf tool executable file and
enable members of perf_users group with monitoring and observability
privileges [6]_ :
::
# setcap "cap_perfmon,cap_sys_ptrace,cap_syslog=ep" perf
# setcap -v "cap_perfmon,cap_sys_ptrace,cap_syslog=ep" perf
perf: OK
# getcap perf
perf = cap_sys_ptrace,cap_syslog,cap_perfmon+ep
If the installed libcap [16]_ doesn't yet support "cap_perfmon", use "38" instead,
i.e.:
::
# setcap "38,cap_ipc_lock,cap_sys_ptrace,cap_syslog=ep" perf
Note that you may need to have 'cap_ipc_lock' in the mix for tools such as
'perf top', alternatively use 'perf top -m N', to reduce the memory that
it uses for the perf ring buffer, see the memory allocation section below.
Using a libcap without support for CAP_PERFMON will make cap_get_flag(caps, 38,
CAP_EFFECTIVE, &val) fail, which will lead to the default event being 'cycles:u',
so as a workaround explicitly ask for the 'cycles' event, i.e.:
::
# perf top -e cycles
This gets kernel and user samples with a perf binary that has just CAP_PERFMON.
As a result, members of perf_users group are capable of conducting
performance monitoring and observability by using functionality of the
configured Perf tool executable that, when executed, passes perf_events
subsystem scope checks.
In case the Perf tool executable can't be assigned the required capabilities
(e.g. the file system is mounted with the nosuid option or extended attributes
are not supported by the file system), then a capabilities-privileged
environment, naturally a shell, can be created instead. The shell provides
the processes spawned inside it with CAP_PERFMON and other required
capabilities so that performance monitoring and observability operations are
available in the environment without limits. Access to the environment can be
opened via the sudo utility for members of the perf_users group only. In
order to create such an environment:
1. Create shell script that uses capsh utility [16]_ to assign CAP_PERFMON
and other required capabilities into ambient capability set of the shell
process, lock the process security bits after enabling SECBIT_NO_SETUID_FIXUP,
SECBIT_NOROOT and SECBIT_NO_CAP_AMBIENT_RAISE bits and then change
the process identity to sudo caller of the script who should essentially
be a member of perf_users group:
::
# ls -alh /usr/local/bin/perf.shell
-rwxr-xr-x. 1 root root 83 Oct 13 23:57 /usr/local/bin/perf.shell
# cat /usr/local/bin/perf.shell
exec /usr/sbin/capsh --iab=^cap_perfmon --secbits=239 --user=$SUDO_USER -- -l
2. Extend sudo policy at /etc/sudoers file with a rule for perf_users group:
::
# grep perf_users /etc/sudoers
%perf_users ALL=/usr/local/bin/perf.shell
3. Check that members of perf_users group have access to the privileged
shell and have CAP_PERFMON and other required capabilities enabled
in permitted, effective and ambient capability sets of an inherent process:
::
$ id
uid=1003(capsh_test) gid=1004(capsh_test) groups=1004(capsh_test),1000(perf_users) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
$ sudo perf.shell
[sudo] password for capsh_test:
$ grep Cap /proc/self/status
CapInh: 0000004000000000
CapPrm: 0000004000000000
CapEff: 0000004000000000
CapBnd: 000000ffffffffff
CapAmb: 0000004000000000
$ capsh --decode=0000004000000000
0x0000004000000000=cap_perfmon
As a result, members of perf_users group have access to the privileged
environment where they can use tools employing performance monitoring APIs
governed by CAP_PERFMON Linux capability.
This specific access control management is only available to superuser
or root running processes with CAP_SETPCAP, CAP_SETFCAP [6]_
capabilities.
Unprivileged users
-----------------------------------
perf_events *scope* and *access* control for unprivileged processes
is governed by perf_event_paranoid [2]_ setting:
-1:
Impose no *scope* and *access* restrictions on using perf_events
performance monitoring. Per-user per-cpu perf_event_mlock_kb [2]_
locking limit is ignored when allocating memory buffers for storing
performance data. This is the least secure mode since allowed
monitored *scope* is maximized and no perf_events specific limits
are imposed on *resources* allocated for performance monitoring.
>=0:
*scope* includes per-process and system wide performance monitoring
but excludes raw tracepoints and ftrace function tracepoints
monitoring. CPU and system events that happen when executing either in
user or in kernel space can be monitored and captured for later
analysis. Per-user per-cpu perf_event_mlock_kb locking limit is
imposed but ignored for unprivileged processes with CAP_IPC_LOCK
[6]_ capability.
>=1:
*scope* includes per-process performance monitoring only and
excludes system wide performance monitoring. CPU and system events
that happen when executing either in user or in kernel space can be
monitored and captured for later analysis. Per-user per-cpu
perf_event_mlock_kb locking limit is imposed but ignored for
unprivileged processes with CAP_IPC_LOCK capability.
>=2:
*scope* includes per-process performance monitoring only. CPU and
system events that happen when executing in user space only can be
monitored and captured for later analysis. Per-user per-cpu
perf_event_mlock_kb locking limit is imposed but ignored for
unprivileged processes with CAP_IPC_LOCK capability.
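The active level can be inspected and changed through the usual sysctl
interfaces, for example (the values shown are illustrative)::
# cat /proc/sys/kernel/perf_event_paranoid
2
# sysctl kernel.perf_event_paranoid=1
kernel.perf_event_paranoid = 1
A corresponding entry in /etc/sysctl.d/ makes the chosen level persistent
across reboots.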
Resource control
---------------------------------
Open file descriptors
+++++++++++++++++++++
The perf_events system call API [2]_ allocates file descriptors for
every configured PMU event. Open file descriptors are a per-process
accountable resource governed by the RLIMIT_NOFILE [11]_ limit
(ulimit -n), which is usually derived from the login shell process. When
configuring Perf collection for a long list of events on a large server
system, this limit can be easily hit preventing required monitoring
configuration. RLIMIT_NOFILE limit can be increased on per-user basis
modifying content of the limits.conf file [12]_ . Ordinarily, a Perf
sampling session (perf record) requires an amount of open perf_event
file descriptors that is not less than the number of monitored events
multiplied by the number of monitored CPUs.
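As a sketch, the effective limit can be checked with ulimit and raised for a
hypothetical monitoring user through limits.conf [12]_ (the user name and
values are illustrative)::
$ ulimit -n
1024
# grep perfmon_user /etc/security/limits.conf
perfmon_user soft nofile 65536
perfmon_user hard nofile 65536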
Memory allocation
+++++++++++++++++
The amount of memory available to user processes for capturing
performance monitoring data is governed by the perf_event_mlock_kb [2]_
setting. This perf_event specific resource setting defines overall
per-cpu limits of memory allowed for mapping by the user processes to
execute performance monitoring. The setting essentially extends the
RLIMIT_MEMLOCK [11]_ limit, but only for memory regions mapped
specifically for capturing monitored performance events and related data.
For example, if a machine has eight cores and perf_event_mlock_kb limit
is set to 516 KiB, then a user process is provided with 516 KiB * 8 =
4128 KiB of memory above the RLIMIT_MEMLOCK limit (ulimit -l) for
perf_event mmap buffers. In particular, this means that, if the user
wants to start two or more performance monitoring processes, the user is
required to manually distribute the available 4128 KiB between the
monitoring processes, for example, using the --mmap-pages Perf record
mode option. Otherwise, the first started performance monitoring process
allocates all available 4128 KiB and the other processes will fail to
proceed due to the lack of memory.
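As a sketch, assuming the eight-CPU machine and 4 KiB pages from the example
above, two concurrent sessions could each be limited to 64 data pages (256 KiB)
per CPU, i.e. 2048 KiB per session, so that both fit within the 4128 KiB
allowance (the PIDs and output file names are illustrative)::
# perf record --mmap-pages=64 -p 1234 -o first.data &
# perf record --mmap-pages=64 -p 5678 -o second.data &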
RLIMIT_MEMLOCK and perf_event_mlock_kb resource constraints are ignored
for processes with the CAP_IPC_LOCK capability. Thus, perf_events/Perf
privileged users can be provided with memory above the constraints for
perf_events/Perf performance monitoring purpose by providing the Perf
executable with CAP_IPC_LOCK capability.
Bibliography
------------
.. [1] `<https://lwn.net/Articles/337493/>`_
.. [2] `<http://man7.org/linux/man-pages/man2/perf_event_open.2.html>`_
.. [3] `<http://web.eece.maine.edu/~vweaver/projects/perf_events/>`_
.. [4] `<https://perf.wiki.kernel.org/index.php/Main_Page>`_
.. [5] `<https://www.kernel.org/doc/html/latest/security/credentials.html>`_
.. [6] `<http://man7.org/linux/man-pages/man7/capabilities.7.html>`_
.. [7] `<http://man7.org/linux/man-pages/man2/ptrace.2.html>`_
.. [8] `<https://en.wikipedia.org/wiki/Hardware_performance_counter>`_
.. [9] `<https://en.wikipedia.org/wiki/Model-specific_register>`_
.. [10] `<http://man7.org/linux/man-pages/man5/acl.5.html>`_
.. [11] `<http://man7.org/linux/man-pages/man2/getrlimit.2.html>`_
.. [12] `<http://man7.org/linux/man-pages/man5/limits.conf.5.html>`_
.. [13] `<https://sites.google.com/site/fullycapable>`_
.. [14] `<http://man7.org/linux/man-pages/man8/auditd.8.html>`_
.. [15] `<https://man7.org/linux/man-pages/man8/sudo.8.html>`_
.. [16] `<https://git.kernel.org/pub/scm/libs/libcap/libcap.git/>`_
=================================
Hardware random number generators
=================================
Introduction
============
The hw_random framework is software that makes use of a
special hardware feature on your CPU or motherboard,
a Random Number Generator (RNG). The software has two parts:
a core providing the /dev/hwrng character device and its
sysfs support, plus a hardware-specific driver that plugs
into that core.
To make the most effective use of these mechanisms, you
should download the support software as well. Download the
latest version of the "rng-tools" package from:
https://github.com/nhorman/rng-tools
Those tools use /dev/hwrng to fill the kernel entropy pool,
which is used internally and exported by the /dev/urandom and
/dev/random special files.
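For example, the rngd daemon from rng-tools can be pointed explicitly at the
hardware device (this is typically its default source and is shown only for
clarity)::
# rngd -r /dev/hwrng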
Theory of operation
===================
CHARACTER DEVICE. Using the standard open()
and read() system calls, you can read random data from
the hardware RNG device. This data is NOT CHECKED by any
fitness tests, and could potentially be bogus (if the
hardware is faulty or has been tampered with). Data is only
output if the hardware "has-data" flag is set, but nevertheless
a security-conscious person would run fitness tests on the
data before assuming it is truly random.
The rng-tools package uses such tests in "rngd", and lets you
run them by hand with a "rngtest" utility.
/dev/hwrng is char device major 10, minor 183.
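As a quick sanity check, raw device output can be piped through rngtest, which
applies FIPS 140-2 fitness tests to 20000-bit blocks read from stdin (the
block count below is arbitrary)::
# dd if=/dev/hwrng bs=2500 count=16 2>/dev/null | rngtest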
CLASS DEVICE. There is a /sys/class/misc/hw_random node with
two unique attributes, "rng_available" and "rng_current". The
"rng_available" attribute lists the hardware-specific drivers
available, while "rng_current" lists the one which is currently
connected to /dev/hwrng. If your system has more than one
RNG available, you may change the one used by writing a name from
the list in "rng_available" into "rng_current".
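For example, on a system exposing more than one RNG (the device names below
are illustrative)::
# cat /sys/class/misc/hw_random/rng_available
virtio_rng.0 tpm-rng-0
# echo tpm-rng-0 > /sys/class/misc/hw_random/rng_current
# cat /sys/class/misc/hw_random/rng_current
tpm-rng-0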
==========================================================================
Hardware driver for Intel/AMD/VIA Random Number Generators (RNG)
- Copyright 2000,2001 Jeff Garzik <jgarzik@pobox.com>
- Copyright 2000,2001 Philipp Rumpf <prumpf@mandrakesoft.com>
About the Intel RNG hardware, from the firmware hub datasheet
=============================================================
The Firmware Hub integrates a Random Number Generator (RNG)
using thermal noise generated from inherently random quantum
mechanical properties of silicon. When not generating new random
bits the RNG circuitry will enter a low power state. Intel will
provide a binary software driver to give third party software
access to our RNG for use as a security feature. At this time,
the RNG is only to be used with a system in an OS-present state.
Intel RNG Driver notes
======================
FIXME: support poll(2)
.. note::
request_mem_region was removed, for three reasons:
1) Only one RNG is supported by this driver;
2) The location used by the RNG is a fixed location in
MMIO-addressable memory;
3) users with properly working BIOS e820 handling will always
have the region in which the RNG is located reserved, so
request_mem_region calls always fail for proper setups.
However, for people who use mem=XX, BIOS e820 information is
**not** in /proc/iomem, and request_mem_region(RNG_ADDR) can
succeed.
Driver details
==============
Based on:
Intel 82802AB/82802AC Firmware Hub (FWH) Datasheet
May 1999 Order Number: 290658-002 R
Intel 82802 Firmware Hub:
Random Number Generator
Programmer's Reference Manual
December 1999 Order Number: 298029-001 R
Intel 82802 Firmware HUB Random Number Generator Driver
Copyright (c) 2000 Matt Sottek <msottek@quiknet.com>
Special thanks to Matt Sottek. I did the "guts", he
did the "brains" and all the testing.
.. _cgroup-v2:
================
Control Group v2
================
:Date: October, 2015
:Author: Tejun Heo <tj@kernel.org>
This is the authoritative documentation on the design, interface and
conventions of cgroup v2. It describes all userland-visible aspects
of cgroup including core and specific controller behaviors. All
future changes must be reflected in this document. Documentation for
v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.
.. CONTENTS
1. Introduction
1-1. Terminology
1-2. What is cgroup?
2. Basic Operations
2-1. Mounting
2-2. Organizing Processes and Threads
2-2-1. Processes
2-2-2. Threads
2-3. [Un]populated Notification
2-4. Controlling Controllers
2-4-1. Enabling and Disabling
2-4-2. Top-down Constraint
2-4-3. No Internal Process Constraint
2-5. Delegation
2-5-1. Model of Delegation
2-5-2. Delegation Containment
2-6. Guidelines
2-6-1. Organize Once and Control
2-6-2. Avoid Name Collisions
3. Resource Distribution Models
3-1. Weights
3-2. Limits
3-3. Protections
3-4. Allocations
4. Interface Files
4-1. Format
4-2. Conventions
4-3. Core Interface Files
5. Controllers
5-1. CPU
5-1-1. CPU Interface Files
5-2. Memory
5-2-1. Memory Interface Files
5-2-2. Usage Guidelines
5-2-3. Memory Ownership
5-3. IO
5-3-1. IO Interface Files
5-3-2. Writeback
5-3-3. IO Latency
5-3-3-1. How IO Latency Throttling Works
5-3-3-2. IO Latency Interface Files
5-3-4. IO Priority
5-4. PID
5-4-1. PID Interface Files
5-5. Cpuset
5-5-1. Cpuset Interface Files
5-6. Device
5-7. RDMA
5-7-1. RDMA Interface Files
5-8. HugeTLB
5-8-1. HugeTLB Interface Files
5-9. Misc
5-9-1. Miscellaneous cgroup Interface Files
5-9-2. Migration and Ownership
5-10. Others
5-10-1. perf_event
5-N. Non-normative information
5-N-1. CPU controller root cgroup process behaviour
5-N-2. IO controller root cgroup process behaviour
6. Namespace
6-1. Basics
6-2. The Root and Views
6-3. Migration and setns(2)
6-4. Interaction with Other Namespaces
P. Information on Kernel Programming
P-1. Filesystem Support for Writeback
D. Deprecated v1 Core Features
R. Issues with v1 and Rationales for v2
R-1. Multiple Hierarchies
R-2. Thread Granularity
R-3. Competition Between Inner Nodes and Threads
R-4. Other Interface Issues
R-5. Controller Issues and Remedies
R-5-1. Memory
Introduction
============
Terminology
-----------
"cgroup" stands for "control group" and is never capitalized. The
singular form is used to designate the whole feature and also as a
qualifier as in "cgroup controllers". When explicitly referring to
multiple individual control groups, the plural form "cgroups" is used.
What is cgroup?
---------------
cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.
cgroup is largely composed of two parts - the core and controllers.
cgroup core is primarily responsible for hierarchically organizing
processes. A cgroup controller is usually responsible for
distributing a specific type of system resource along the hierarchy
although there are utility controllers which serve purposes other than
resource distribution.
cgroups form a tree structure and every process in the system belongs
to one and only one cgroup. All threads of a process belong to the
same cgroup. On creation, all processes are put in the cgroup that
the parent process belongs to at the time. A process can be migrated
to another cgroup. Migration of a process doesn't affect already
existing descendant processes.
Following certain structural constraints, controllers may be enabled or
disabled selectively on a cgroup. All controller behaviors are
hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups making up the inclusive
sub-hierarchy of the cgroup. When a controller is enabled on a nested
cgroup, it always restricts the resource distribution further. The
restrictions set closer to the root in the hierarchy can not be
overridden from further away.
Basic Operations
================
Mounting
--------
Unlike v1, cgroup v2 has only a single hierarchy. The cgroup v2
hierarchy can be mounted with the following mount command::
# mount -t cgroup2 none $MOUNT_POINT
cgroup2 filesystem has the magic number 0x63677270 ("cgrp"). All
controllers which support v2 and are not bound to a v1 hierarchy are
automatically bound to the v2 hierarchy and show up at the root.
Controllers which are not in active use in the v2 hierarchy can be
bound to other hierarchies. This allows mixing v2 hierarchy with the
legacy v1 multiple hierarchies in a fully backward compatible way.
A controller can be moved across hierarchies only after the controller
is no longer referenced in its current hierarchy. Because per-cgroup
controller states are destroyed asynchronously and controllers may
have lingering references, a controller may not show up immediately on
the v2 hierarchy after the final umount of the previous hierarchy.
Similarly, a controller should be fully disabled to be moved out of
the unified hierarchy and it may take some time for the disabled
controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled too.
While useful for development and manual configurations, moving
controllers dynamically between the v2 and other hierarchies is
strongly discouraged for production use. It is recommended to decide
the hierarchies and controller associations before starting to use the
controllers after system boot.
During transition to v2, system management software might still
automount the v1 cgroup filesystem and so hijack all controllers
during boot, before manual intervention is possible. To make testing
and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and making them always available in v2.
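For example, booting with the following command line parameter keeps every
controller away from v1 hierarchies so that all of them remain available to
v2 (a comma-separated list of controller names may be given instead of
"all")::
cgroup_no_v1=all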
cgroup v2 currently supports the following mount options.
nsdelegate
Consider cgroup namespaces as delegation boundaries. This
option is system wide and can only be set on mount or modified
through remount from the init namespace. The mount option is
ignored on non-init namespace mounts. Please refer to the
Delegation section for details.
favordynmods
Reduce the latencies of dynamic cgroup modifications such as
task migrations and controller on/offs at the cost of making
hot path operations such as forks and exits more expensive.
The static usage pattern of creating a cgroup, enabling
controllers, and then seeding it with CLONE_INTO_CGROUP is
not affected by this option.
memory_localevents
Only populate memory.events with data for the current cgroup,
and not any subtrees. This is legacy behaviour, the default
behaviour without this option is to include subtree counts.
This option is system wide and can only be set on mount or
modified through remount from the init namespace. The mount
option is ignored on non-init namespace mounts.
memory_recursiveprot
Recursively apply memory.min and memory.low protection to
entire subtrees, without requiring explicit downward
propagation into leaf cgroups. This allows protecting entire
subtrees from one another, while retaining free competition
within those subtrees. This should have been the default
behavior but is a mount-option to avoid regressing setups
relying on the original semantics (e.g. specifying bogusly
high 'bypass' protection values at higher tree levels).
memory_hugetlb_accounting
Count HugeTLB memory usage towards the cgroup's overall
memory usage for the memory controller (for the purpose of
statistics reporting and memory protection). This is a new
behavior that could regress existing setups, so it must be
explicitly opted in with this mount option.
A few caveats to keep in mind:
* There is no HugeTLB pool management involved in the memory
controller. The pre-allocated pool does not belong to anyone.
Specifically, when a new HugeTLB folio is allocated to
the pool, it is not accounted for from the perspective of the
memory controller. It is only charged to a cgroup when it is
actually used (e.g. at page fault time). Host memory
overcommit management has to consider this when configuring
hard limits. In general, HugeTLB pool management should be
done via other mechanisms (such as the HugeTLB controller).
* Failure to charge a HugeTLB folio to the memory controller
results in SIGBUS. This could happen even if the HugeTLB pool
still has pages available (but the cgroup limit is hit and
reclaim attempt fails).
* Charging HugeTLB memory towards the memory controller affects
memory protection and reclaim dynamics. Any userspace tuning
(e.g. of low, min limits) needs to take this into account.
* HugeTLB pages utilized while this option is not selected
will not be tracked by the memory controller (even if cgroup
v2 is remounted later on).
pids_localevents
The option restores v1-like behavior of pids.events:max, that is, only
local (inside cgroup proper) fork failures are counted. Without this
option pids.events:max represents any pids.max enforcement across
cgroup's subtree.
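Multiple mount options can be combined in a single invocation, for example::
# mount -t cgroup2 -o nsdelegate,memory_recursiveprot none $MOUNT_POINT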
Organizing Processes and Threads
--------------------------------
Processes
~~~~~~~~~
Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::
# mkdir $CGROUP_NAME
A given cgroup may have multiple child cgroups forming a tree
structure. Each cgroup has a read-writable interface file
"cgroup.procs". When read, it lists the PIDs of all processes which
belong to the cgroup one-per-line. The PIDs are not ordered and the
same PID may show up more than once if the process got moved to
another cgroup and then back or the PID got recycled while reading.
A process can be migrated into a cgroup by writing its PID to the
target cgroup's "cgroup.procs" file. Only one process can be migrated
on a single write(2) call. If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.
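For example, with the v2 hierarchy mounted at /sys/fs/cgroup (the PID and
cgroup name are illustrative)::
# echo 842 > /sys/fs/cgroup/test-cgroup/cgroup.procs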
When a process forks a child process, the new process is born into the
cgroup that the forking process belongs to at the time of the
operation. After exit, a process stays associated with the cgroup
that it belonged to at the time of exit until it's reaped; however, a
zombie process does not appear in "cgroup.procs" and thus can't be
moved to another cgroup.
A cgroup which doesn't have any children or live processes can be
destroyed by removing the directory. Note that a cgroup which doesn't
have any children and is associated only with zombie processes is
considered empty and can be removed::
# rmdir $CGROUP_NAME
"/proc/$PID/cgroup" lists a process's cgroup membership. If legacy
cgroup is in use in the system, this file may contain multiple lines,
one for each hierarchy. The entry for cgroup v2 is always in the
format "0::$PATH"::
# cat /proc/842/cgroup
...
0::/test-cgroup/test-cgroup-nested
If the process becomes a zombie and the cgroup it was associated with
is removed subsequently, " (deleted)" is appended to the path::
# cat /proc/842/cgroup
...
0::/test-cgroup/test-cgroup-nested (deleted)
Threads
~~~~~~~
cgroup v2 supports thread granularity for a subset of controllers to
support use cases requiring hierarchical resource distribution across
the threads of a group of processes. By default, all threads of a
process belong to the same cgroup, which also serves as the resource
domain to host resource consumptions which are not specific to a
process or thread. The thread mode allows threads to be spread across
a subtree while still maintaining the common resource domain for them.
Controllers which support thread mode are called threaded controllers.
The ones which don't are called domain controllers.
Marking a cgroup threaded makes it join the resource domain of its
parent as a threaded cgroup. The parent may be another threaded
cgroup whose resource domain is further up in the hierarchy. The root
of a threaded subtree, that is, the nearest ancestor which is not
threaded, is called threaded domain or thread root interchangeably and
serves as the resource domain for the entire subtree.
Inside a threaded subtree, threads of a process can be put in
different cgroups and are not subject to the no internal process
constraint - threaded controllers can be enabled on non-leaf cgroups
whether they have threads in them or not.
As the threaded domain cgroup hosts all the domain resource
consumptions of the subtree, it is considered to have internal
resource consumptions whether there are processes in it or not and
can't have populated child cgroups which aren't threaded. Because the
root cgroup is not subject to no internal process constraint, it can
serve both as a threaded domain and a parent to domain cgroups.
The current operation mode or type of the cgroup is shown in the
"cgroup.type" file which indicates whether the cgroup is a normal
domain, a domain which is serving as the domain of a threaded subtree,
or a threaded cgroup.
On creation, a cgroup is always a domain cgroup and can be made
threaded by writing "threaded" to the "cgroup.type" file. The
operation is single direction::
# echo threaded > cgroup.type
Once threaded, the cgroup can't be made a domain again. To enable the
thread mode, the following conditions must be met.
- As the cgroup will join the parent's resource domain, the parent
must either be a valid (threaded) domain or a threaded cgroup.
- When the parent is an unthreaded domain, it must not have any domain
controllers enabled or populated domain children. The root is
exempt from this requirement.
Topology-wise, a cgroup can be in an invalid state. Please consider
the following topology::
A (threaded domain) - B (threaded) - C (domain, just created)
C is created as a domain but isn't connected to a parent which can
host child domains. C can't be used until it is turned into a
threaded cgroup. "cgroup.type" file will report "domain (invalid)" in
these cases. Operations which fail due to invalid topology use
EOPNOTSUPP as the errno.
A domain cgroup is turned into a threaded domain when one of its child
cgroup becomes threaded or threaded controllers are enabled in the
"cgroup.subtree_control" file while there are processes in the cgroup.
A threaded domain reverts to a normal domain when the conditions
clear.
When read, "cgroup.threads" contains the list of the thread IDs of all
threads in the cgroup. Except that the operations are per-thread
instead of per-process, "cgroup.threads" has the same format and
behaves the same way as "cgroup.procs". While "cgroup.threads" can be
written to in any cgroup, as it can only move threads inside the same
threaded domain, its operations are confined inside each threaded
subtree.
The threaded domain cgroup serves as the resource domain for the whole
subtree, and, while the threads can be scattered across the subtree,
all the processes are considered to be in the threaded domain cgroup.
"cgroup.procs" in a threaded domain cgroup contains the PIDs of all
processes in the subtree and is not readable in the subtree proper.
However, "cgroup.procs" can be written to from anywhere in the subtree
to migrate all threads of the matching process to the cgroup.
Only threaded controllers can be enabled in a threaded subtree. When
a threaded controller is enabled inside a threaded subtree, it only
accounts for and controls resource consumptions associated with the
threads in the cgroup and its descendants. All consumptions which
aren't tied to a specific thread belong to the threaded domain cgroup.
Because a threaded subtree is exempt from no internal process
constraint, a threaded controller must be able to handle competition
between threads in a non-leaf cgroup and its child cgroups. Each
threaded controller defines how such competitions are handled.
Currently, the following controllers are threaded and can be enabled
in a threaded cgroup::
- cpu
- cpuset
- perf_event
- pids
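As a sketch, a threaded subtree can be set up from a cgroup that already
contains the target process, assuming no domain controllers are enabled in its
"cgroup.subtree_control" (the cgroup name and thread ID are illustrative)::
# mkdir workers
# echo threaded > workers/cgroup.type
# echo "+cpu" > cgroup.subtree_control
# echo 1001 > workers/cgroup.threads
Making "workers" threaded turns the current cgroup into the threaded domain,
after which threaded controllers such as "cpu" can be enabled and individual
threads moved into the child.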
[Un]populated Notification
--------------------------
Each non-root cgroup has a "cgroup.events" file which contains
"populated" field indicating whether the cgroup's sub-hierarchy has
live processes in it. Its value is 0 if there is no live process in
the cgroup and its descendants; otherwise, 1. poll and [id]notify
events are triggered when the value changes. This can be used, for
example, to start a clean-up operation after all processes of a given
sub-hierarchy have exited. The populated state updates and
notifications are recursive. Consider the following sub-hierarchy
where the numbers in the parentheses represent the numbers of processes
in each cgroup::
A(4) - B(0) - C(1)
\ D(0)
A, B and C's "populated" fields would be 1 while D's 0. After the one
process in C exits, B and C's "populated" fields would flip to "0" and
file modified events will be generated on the "cgroup.events" files of
both cgroups.
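For example, a supervisor interested in B's subtree could read the field
directly and wait for change notifications; inotifywait from inotify-tools is
just one possible consumer, and the output is trimmed to the relevant field::
# cat B/cgroup.events
populated 1
# inotifywait -e modify B/cgroup.events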
Controlling Controllers
-----------------------
Enabling and Disabling
~~~~~~~~~~~~~~~~~~~~~~
Each cgroup has a "cgroup.controllers" file which lists all
controllers available for the cgroup to enable::
# cat cgroup.controllers
cpu io memory
No controller is enabled by default. Controllers can be enabled and
disabled by writing to the "cgroup.subtree_control" file::
# echo "+cpu +memory -io" > cgroup.subtree_control
Only controllers which are listed in "cgroup.controllers" can be
enabled. When multiple operations are specified as above, either they
all succeed or fail. If multiple operations on the same controller
are specified, the last one is effective.
Enabling a controller in a cgroup indicates that the distribution of
the target resource across its immediate children will be controlled.
Consider the following sub-hierarchy. The enabled controllers are
listed in parentheses::
A(cpu,memory) - B(memory) - C()
\ D()
As A has "cpu" and "memory" enabled, A will control the distribution
of CPU cycles and memory to its children, in this case, B. As B has
"memory" enabled but not "CPU", C and D will compete freely on CPU
cycles but their division of memory available to B will be controlled.
As a controller regulates the distribution of the target resource to
the cgroup's children, enabling it creates the controller's interface
files in the child cgroups. In the above example, enabling "cpu" on B
would create the "cpu." prefixed controller interface files in C and
D. Likewise, disabling "memory" from B would remove the "memory."
prefixed controller interface files from C and D. This means that the
controller interface files - anything which doesn't start with
"cgroup." are owned by the parent rather than the cgroup itself.
Top-down Constraint
~~~~~~~~~~~~~~~~~~~
Resources are distributed top-down and a cgroup can further distribute
a resource only if the resource has been distributed to it from the
parent. This means that all non-root "cgroup.subtree_control" files
can only contain controllers which are enabled in the parent's
"cgroup.subtree_control" file. A controller can be enabled only if
the parent has the controller enabled and a controller can't be
disabled if one or more children have it enabled.
No Internal Process Constraint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Non-root cgroups can distribute domain resources to their children
only when they don't have any processes of their own. In other words,
only domain cgroups which don't contain any processes can have domain
controllers enabled in their "cgroup.subtree_control" files.
This guarantees that, when a domain controller is looking at the part
of the hierarchy which has it enabled, processes are always only on
the leaves. This rules out situations where child cgroups compete
against internal processes of the parent.
The root cgroup is exempt from this restriction. Root contains
processes and anonymous resource consumption which can't be associated
with any other cgroups and requires special treatment from most
controllers. How resource consumption in the root cgroup is governed
is up to each controller (for more information on this topic please
refer to the Non-normative information section in the Controllers
chapter).
Note that the restriction doesn't get in the way if there is no
enabled controller in the cgroup's "cgroup.subtree_control". This is
important as otherwise it wouldn't be possible to create children of a
populated cgroup. To control resource distribution of a cgroup, the
cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.
Delegation
----------
Model of Delegation
~~~~~~~~~~~~~~~~~~~
A cgroup can be delegated in two ways. First, to a less privileged
user by granting write access of the directory and its "cgroup.procs",
"cgroup.threads" and "cgroup.subtree_control" files to the user.
Second, if the "nsdelegate" mount option is set, automatically to a
cgroup namespace on namespace creation.
Because the resource control interface files in a given directory
control the distribution of the parent's resources, the delegatee
shouldn't be allowed to write to them. For the first method, this is
achieved by not granting access to these files. For the second, files
outside the namespace should be hidden from the delegatee by the means
of at least mount namespacing, and the kernel rejects writes to all
files on a namespace root from inside the cgroup namespace, except for
those files listed in "/sys/kernel/cgroup/delegate" (including
"cgroup.procs", "cgroup.threads", "cgroup.subtree_control", etc.).
The end results are equivalent for both delegation types. Once
delegated, the user can build sub-hierarchy under the directory,
organize processes inside it as it sees fit and further distribute the
resources it received from the parent. The limits and other settings
of all resource controllers are hierarchical and regardless of what
happens in the delegated sub-hierarchy, nothing can escape the
resource restrictions imposed by the parent.
Currently, cgroup doesn't impose any restrictions on the number of
cgroups in or nesting depth of a delegated sub-hierarchy; however,
this may be limited explicitly in the future.
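A minimal sketch of the first method, delegating a sub-hierarchy to a
hypothetical unprivileged user "app_user"::
# mkdir /sys/fs/cgroup/delegated
# chown app_user /sys/fs/cgroup/delegated
# chown app_user /sys/fs/cgroup/delegated/cgroup.procs \
/sys/fs/cgroup/delegated/cgroup.threads \
/sys/fs/cgroup/delegated/cgroup.subtree_control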
Delegation Containment
~~~~~~~~~~~~~~~~~~~~~~
A delegated sub-hierarchy is contained in the sense that processes
can't be moved into or out of the sub-hierarchy by the delegatee.
For delegations to a less privileged user, this is achieved by
requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.
- The writer must have write access to the "cgroup.procs" file.
- The writer must have write access to the "cgroup.procs" file of the
common ancestor of the source and destination cgroups.
The above two constraints ensure that while a delegatee may migrate
processes around freely in the delegated sub-hierarchy it can't pull
in from or push out to outside the sub-hierarchy.
For an example, let's assume cgroups C0 and C1 have been delegated to
user U0 who created C00, C01 under C0 and C10 under C1 as follows and
all processes under C0 and C1 belong to U0::
~~~~~~~~~~~~~ - C0 - C00
~ cgroup ~ \ C01
~ hierarchy ~
~~~~~~~~~~~~~ - C1 - C10
Let's also say U0 wants to write the PID of a process which is
currently in C10 into "C00/cgroup.procs". U0 has write access to the
file; however, the common ancestor of the source cgroup C10 and the
destination cgroup C00 is above the points of delegation and U0 would
not have write access to its "cgroup.procs" files and thus the write
will be denied with -EACCES.
For delegations to namespaces, containment is achieved by requiring
that both the source and destination cgroups are reachable from the
namespace of the process which is attempting the migration. If either
is not reachable, the migration is rejected with -ENOENT.
Guidelines
----------
Organize Once and Control
~~~~~~~~~~~~~~~~~~~~~~~~~
Migrating a process across cgroups is a relatively expensive operation
and stateful resources such as memory are not moved together with the
process. This is an explicit design decision as there often exist
inherent trade-offs between migration and various hot paths in terms
of synchronization cost.
As such, migrating processes across cgroups frequently as a means to
apply different resource restrictions is discouraged. A workload
should be assigned to a cgroup according to the system's logical and
resource structure once on start-up. Dynamic adjustments to resource
distribution can be made by changing controller configuration through
the interface files.
Avoid Name Collisions
~~~~~~~~~~~~~~~~~~~~~
Interface files for a cgroup and its children cgroups occupy the same
directory and it is possible to create children cgroups which collide
with interface files.
All cgroup core interface files are prefixed with "cgroup." and each
controller's interface files are prefixed with the controller name and
a dot. A controller's name is composed of lower case alphabets and
'_'s but never begins with an '_' so it can be used as the prefix
character for collision avoidance. Also, interface file names won't
start or end with terms which are often used in categorizing workloads
such as job, service, slice, unit or workload.
cgroup doesn't do anything to prevent name collisions and it's the
user's responsibility to avoid them.
Resource Distribution Models
============================
cgroup controllers implement several resource distribution schemes
depending on the resource type and expected use cases. This section
describes major schemes in use along with their expected behaviors.
Weights
-------
A parent's resource is distributed by adding up the weights of all
active children and giving each the fraction matching the ratio of its
weight against the sum. As only children which can make use of the
resource at the moment participate in the distribution, this is
work-conserving. Due to the dynamic nature, this model is usually
used for stateless resources.
All weights are in the range [1, 10000] with the default at 100. This
allows symmetric multiplicative biases in both directions at fine
enough granularity while staying in the intuitive range.
As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.
"cpu.weight" proportionally distributes CPU cycles to active children
and is an example of this type.
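For example, giving sibling cgroups A and B weights of 300 and 100 means that,
while both are runnable, A receives roughly three quarters of the contended
CPU time and B one quarter (the paths are illustrative)::
# echo 300 > /sys/fs/cgroup/A/cpu.weight
# echo 100 > /sys/fs/cgroup/B/cpu.weight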
.. _cgroupv2-limits-distributor:
Limits
------
A child can only consume up to the configured amount of the resource.
Limits can be over-committed - the sum of the limits of children can
exceed the amount of resource available to the parent.
Limits are in the range [0, max] and default to "max", which is noop.
As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.
"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
on an IO device and is an example of this type.
.. _cgroupv2-protections-distributor:
Protections
-----------
A cgroup is protected up to the configured amount of the resource
as long as the usages of all its ancestors are under their
protected levels. Protections can be hard guarantees or best effort
soft boundaries. Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.
Protections are in the range [0, max] and default to 0, which is a noop.
As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.
"memory.low" implements best-effort memory protection and is an
example of this type.
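A minimal sketch, assuming a hypothetical cgroup "/sys/fs/cgroup/workload"
with the memory controller enabled::
# echo 1G > /sys/fs/cgroup/workload/memory.low
As long as the cgroup's usage stays within its effective low boundary and
its ancestors stay within theirs, its memory is skipped by reclaim on a
best-effort basis.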
Allocations
-----------
A cgroup is exclusively allocated a certain amount of a finite
resource. Allocations can't be over-committed - the sum of the
allocations of children can not exceed the amount of resource
available to the parent.
Allocations are in the range [0, max] and default to 0, which means no
resource.
As allocations can't be over-committed, some configuration
combinations are invalid and should be rejected. Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.
"cpu.rt.max" hard-allocates realtime slices and is an example of this
type.
Interface Files
===============
Format
------
All interface files should be in one of the following formats whenever
possible::
New-line separated values
(when only one value can be written at once)
VAL0\n
VAL1\n
...
Space separated values
(when read-only or multiple values can be written at once)
VAL0 VAL1 ...\n
Flat keyed
KEY0 VAL0\n
KEY1 VAL1\n
...
Nested keyed
KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
...
For a writable file, the format for writing should generally match
reading; however, controllers may allow omitting later fields or
implement restricted shortcuts for most common use cases.
For both flat and nested keyed files, only the values for a single key
can be written at a time. For nested keyed files, the sub key pairs
may be specified in any order and not all pairs have to be specified.
Conventions
-----------
- Settings for a single feature should be contained in a single file.
- The root cgroup should be exempt from resource control and thus
shouldn't have resource control interface files.
- The default time unit is microseconds. If a different unit is ever
used, an explicit unit suffix must be present.
- A parts-per quantity should use a percentage decimal with at least
two digit fractional part - e.g. 13.40.
- If a controller implements weight based resource distribution, its
interface file should be named "weight" and have the range [1,
10000] with 100 as the default. The values are chosen to allow
enough and symmetric bias in both directions while keeping it
intuitive (the default is 100%).
- If a controller implements an absolute resource guarantee and/or
limit, the interface files should be named "min" and "max"
respectively. If a controller implements best effort resource
guarantee and/or limit, the interface files should be named "low"
and "high" respectively.
In the above four control files, the special token "max" should be
used to represent upward infinity for both reading and writing.
- If a setting has a configurable default value and keyed specific
overrides, the default entry should be keyed with "default" and
appear as the first entry in the file.
The default value can be updated by writing either "default $VAL" or
"$VAL".
When writing to update a specific override, "default" can be used as
the value to indicate removal of the override. Override entries
with "default" as the value must not appear when read.
For example, a setting which is keyed by major:minor device numbers
with integer values may look like the following::
# cat cgroup-example-interface-file
default 150
8:0 300
The default value can be updated by::
# echo 125 > cgroup-example-interface-file
or::
# echo "default 125" > cgroup-example-interface-file
An override can be set by::
# echo "8:16 170" > cgroup-example-interface-file
and cleared by::
# echo "8:0 default" > cgroup-example-interface-file
# cat cgroup-example-interface-file
default 125
8:16 170
- For events which are not very high frequency, an interface file
"events" should be created which lists event key value pairs.
Whenever a notifiable event happens, file modified event should be
generated on the file.
Core Interface Files
--------------------
All cgroup core files are prefixed with "cgroup."
cgroup.type
A read-write single value file which exists on non-root
cgroups.
When read, it indicates the current type of the cgroup, which
can be one of the following values.
- "domain" : A normal valid domain cgroup.
- "domain threaded" : A threaded domain cgroup which is
serving as the root of a threaded subtree.
- "domain invalid" : A cgroup which is in an invalid state.
It can't be populated or have controllers enabled. It may
be allowed to become a threaded cgroup.
- "threaded" : A threaded cgroup which is a member of a
threaded subtree.
A cgroup can be turned into a threaded cgroup by writing
"threaded" to this file.
cgroup.procs
A read-write new-line separated values file which exists on
all cgroups.
When read, it lists the PIDs of all processes which belong to
the cgroup one-per-line. The PIDs are not ordered and the
same PID may show up more than once if the process got moved
to another cgroup and then back or the PID got recycled while
reading.
A PID can be written to migrate the process associated with
the PID to the cgroup. The writer should match all of the
following conditions.
- It must have write access to the "cgroup.procs" file.
- It must have write access to the "cgroup.procs" file of the
common ancestor of the source and destination cgroups.
When delegating a sub-hierarchy, write access to this file
should be granted along with the containing directory.
In a threaded cgroup, reading this file fails with EOPNOTSUPP
as all the processes belong to the thread root. Writing is
supported and moves every thread of the process to the cgroup.
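For example, a process with a hypothetical PID of 1234 can be migrated by
a writer satisfying the conditions above::
# echo 1234 > /sys/fs/cgroup/app/cgroup.procs
# grep -x 1234 /sys/fs/cgroup/app/cgroup.procs
1234
The path "/sys/fs/cgroup/app" is illustrative; as this is a new-line
separated values file, only one PID can be written per write.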
cgroup.threads
A read-write new-line separated values file which exists on
all cgroups.
When read, it lists the TIDs of all threads which belong to
the cgroup one-per-line. The TIDs are not ordered and the
same TID may show up more than once if the thread got moved to
another cgroup and then back or the TID got recycled while
reading.
A TID can be written to migrate the thread associated with the
TID to the cgroup. The writer should match all of the
following conditions.
- It must have write access to the "cgroup.threads" file.
- The cgroup that the thread is currently in must be in the
same resource domain as the destination cgroup.
- It must have write access to the "cgroup.procs" file of the
common ancestor of the source and destination cgroups.
When delegating a sub-hierarchy, write access to this file
should be granted along with the containing directory.
cgroup.controllers
A read-only space separated values file which exists on all
cgroups.
It shows space separated list of all controllers available to
the cgroup. The controllers are not ordered.
cgroup.subtree_control
A read-write space separated values file which exists on all
cgroups. Starts out empty.
When read, it shows space separated list of the controllers
which are enabled to control resource distribution from the
cgroup to its children.
Space separated list of controllers prefixed with '+' or '-'
can be written to enable or disable controllers. A controller
name prefixed with '+' enables the controller and '-'
disables. If a controller appears more than once on the list,
the last one is effective. When multiple enable and disable
operations are specified, either all succeed or all fail.
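For example, assuming "memory", "io" and "pids" all appear in this cgroup's
"cgroup.controllers" and the constraints on enabling controllers described
earlier are met, resource control for the children can be adjusted in a
single write::
# echo "+memory +io -pids" > cgroup.subtree_control
A subsequent read of the file lists the controllers which are now enabled
for the children.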
cgroup.events
A read-only flat-keyed file which exists on non-root cgroups.
The following entries are defined. Unless specified
otherwise, a value change in this file generates a file
modified event.
populated
1 if the cgroup or its descendants contains any live
processes; otherwise, 0.
frozen
1 if the cgroup is frozen; otherwise, 0.
cgroup.max.descendants
A read-write single value file. The default is "max".
Maximum allowed number of descendant cgroups.
If the actual number of descendants is equal to or larger,
an attempt to create a new cgroup in the hierarchy will fail.
cgroup.max.depth
A read-write single value file. The default is "max".
Maximum allowed descent depth below the current cgroup.
If the actual descent depth is equal to or larger,
an attempt to create a new child cgroup will fail.
cgroup.stat
A read-only flat-keyed file with the following entries:
nr_descendants
Total number of visible descendant cgroups.
nr_dying_descendants
Total number of dying descendant cgroups. A cgroup becomes
dying after being deleted by a user. The cgroup will remain
in a dying state for some undefined time (which can depend
on system load) before being completely destroyed.
A process can't enter a dying cgroup under any circumstances,
and a dying cgroup can't revive.
A dying cgroup can consume system resources not exceeding the
limits which were active at the moment of cgroup deletion.
nr_subsys_<cgroup_subsys>
Total number of live cgroup subsystems (e.g. memory
cgroup) at and beneath the current cgroup.
nr_dying_subsys_<cgroup_subsys>
Total number of dying cgroup subsystems (e.g. memory
cgroup) at and beneath the current cgroup.
cgroup.freeze
A read-write single value file which exists on non-root cgroups.
Allowed values are "0" and "1". The default is "0".
Writing "1" to the file causes freezing of the cgroup and all
descendant cgroups. This means that all belonging processes will
be stopped and will not run until the cgroup is explicitly
unfrozen. Freezing of the cgroup may take some time; when this action
is completed, the "frozen" value in the cgroup.events control file
will be updated to "1" and the corresponding notification will be
issued.
A cgroup can be frozen either by its own settings, or by settings
of any ancestor cgroups. If any of ancestor cgroups is frozen, the
cgroup will remain frozen.
Processes in the frozen cgroup can be killed by a fatal signal.
They also can enter and leave a frozen cgroup: either by an explicit
move by a user, or if freezing of the cgroup races with fork().
If a process is moved to a frozen cgroup, it stops. If a process is
moved out of a frozen cgroup, it becomes running.
Frozen status of a cgroup doesn't affect any cgroup tree operations:
it's possible to delete a frozen (and empty) cgroup, as well as
create new sub-cgroups.
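A short sketch, assuming a hypothetical cgroup "/sys/fs/cgroup/batch"::
# echo 1 > /sys/fs/cgroup/batch/cgroup.freeze
# grep frozen /sys/fs/cgroup/batch/cgroup.events
frozen 1
# echo 0 > /sys/fs/cgroup/batch/cgroup.freeze
Because freezing may take some time, the "frozen" entry flips to 1 only
once the operation has completed, so the grep above may transiently still
report "frozen 0".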
cgroup.kill
A write-only single value file which exists in non-root cgroups.
The only allowed value is "1".
Writing "1" to the file causes the cgroup and all descendant cgroups to
be killed. This means that all processes located in the affected cgroup
tree will be killed via SIGKILL.
Killing a cgroup tree will deal with concurrent forks appropriately and
is protected against migrations.
In a threaded cgroup, writing this file fails with EOPNOTSUPP as
killing cgroups is a process directed operation, i.e. it affects
the whole thread-group.
cgroup.pressure
A read-write single value file whose allowed values are "0" and "1".
The default is "1".
Writing "0" to the file will disable the cgroup PSI accounting.
Writing "1" to the file will re-enable the cgroup PSI accounting.
This control attribute is not hierarchical, so disabling or enabling PSI
accounting in a cgroup does not affect PSI accounting in its descendants,
and enablement does not need to be passed down from the root via ancestors.
The reason this control attribute exists is that PSI accounts stalls for
each cgroup separately and aggregates it at each level of the hierarchy.
This may cause non-negligible overhead for some workloads at a deep
level of the hierarchy, in which case this control attribute can
be used to disable PSI accounting in the non-leaf cgroups.
irq.pressure
A read-write nested-keyed file.
Shows pressure stall information for IRQ/SOFTIRQ. See
:ref:`Documentation/accounting/psi.rst <psi>` for details.
Controllers
===========
.. _cgroup-v2-cpu:
CPU
---
The "cpu" controller regulates the distribution of CPU cycles. This
controller implements weight and absolute bandwidth limit models for
normal scheduling policy and absolute bandwidth allocation model for
realtime scheduling policy.
In all the above models, cycles distribution is defined only on a temporal
basis and does not account for the frequency at which tasks are executed.
The (optional) utilization clamping support allows hinting the schedutil
cpufreq governor about the minimum desired frequency which should always be
provided by a CPU, as well as the maximum desired frequency, which should not
be exceeded by a CPU.
WARNING: cgroup2 doesn't yet support control of realtime processes. For
a kernel built with the CONFIG_RT_GROUP_SCHED option enabled for group
scheduling of realtime processes, the cpu controller can only be enabled
when all RT processes are in the root cgroup. This limitation does
not apply if CONFIG_RT_GROUP_SCHED is disabled. Be aware that system
management software may already have placed RT processes into nonroot
cgroups during the system boot process, and these processes may need
to be moved to the root cgroup before the cpu controller can be enabled
with a CONFIG_RT_GROUP_SCHED enabled kernel.
CPU Interface Files
~~~~~~~~~~~~~~~~~~~
All time durations are in microseconds.
cpu.stat
A read-only flat-keyed file.
This file exists whether the controller is enabled or not.
It always reports the following three stats:
- usage_usec
- user_usec
- system_usec
and the following five when the controller is enabled:
- nr_periods
- nr_throttled
- throttled_usec
- nr_bursts
- burst_usec
cpu.weight
A read-write single value file which exists on non-root
cgroups. The default is "100".
For non idle groups (cpu.idle = 0), the weight is in the
range [1, 10000].
If the cgroup has been configured to be SCHED_IDLE (cpu.idle = 1),
then the weight will show as a 0.
cpu.weight.nice
A read-write single value file which exists on non-root
cgroups. The default is "0".
The nice value is in the range [-20, 19].
This interface file is an alternative interface for
"cpu.weight" and allows reading and setting weight using the
same values used by nice(2). Because the range is smaller and
granularity is coarser for the nice values, the read value is
the closest approximation of the current weight.
cpu.max
A read-write two value file which exists on non-root cgroups.
The default is "max 100000".
The maximum bandwidth limit. It's in the following format::
$MAX $PERIOD
which indicates that the group may consume up to $MAX in each
$PERIOD duration. "max" for $MAX indicates no limit. If only
one number is written, $MAX is updated.
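For example, a hypothetical cgroup can be limited to half of one CPU by
allowing 50ms of runtime in every 100ms period::
# echo "50000 100000" > cpu.max
# cat cpu.max
50000 100000
Writing a single number afterwards only updates $MAX and keeps the
configured period::
# echo 25000 > cpu.max
# cat cpu.max
25000 100000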
cpu.max.burst
A read-write single value file which exists on non-root
cgroups. The default is "0".
The burst in the range [0, $MAX].
cpu.pressure
A read-write nested-keyed file.
Shows pressure stall information for CPU. See
:ref:`Documentation/accounting/psi.rst <psi>` for details.
cpu.uclamp.min
A read-write single value file which exists on non-root cgroups.
The default is "0", i.e. no utilization boosting.
The requested minimum utilization (protection) as a percentage
rational number, e.g. 12.34 for 12.34%.
This interface allows reading and setting minimum utilization clamp
values similar to the sched_setattr(2). This minimum utilization
value is used to clamp the task specific minimum utilization clamp.
The requested minimum utilization (protection) is always capped by
the current value for the maximum utilization (limit), i.e.
`cpu.uclamp.max`.
cpu.uclamp.max
A read-write single value file which exists on non-root cgroups.
The default is "max", i.e. no utilization capping.
The requested maximum utilization (limit) as a percentage rational
number, e.g. 98.76 for 98.76%.
This interface allows reading and setting maximum utilization clamp
values similar to the sched_setattr(2). This maximum utilization
value is used to clamp the task specific maximum utilization clamp.
cpu.idle
A read-write single value file which exists on non-root cgroups.
The default is 0.
This is the cgroup analog of the per-task SCHED_IDLE sched policy.
Setting this value to a 1 will make the scheduling policy of the
cgroup SCHED_IDLE. The threads inside the cgroup will retain their
own relative priorities, but the cgroup itself will be treated as
very low priority relative to its peers.
Memory
------
The "memory" controller regulates distribution of memory. Memory is
stateful and implements both limit and protection models. Due to the
intertwining between memory usage and reclaim pressure and the
stateful nature of memory, the distribution model is relatively
complex.
While not completely water-tight, all major memory usages by a given
cgroup are tracked so that the total memory consumption can be
accounted and controlled to a reasonable extent. Currently, the
following types of memory usages are tracked.
- Userland memory - page cache and anonymous memory.
- Kernel data structures such as dentries and inodes.
- TCP socket buffers.
The above list may expand in the future for better coverage.
Memory Interface Files
~~~~~~~~~~~~~~~~~~~~~~
All memory amounts are in bytes. If a value which is not aligned to
PAGE_SIZE is written, the value may be rounded up to the closest
PAGE_SIZE multiple when read back.
memory.current
A read-only single value file which exists on non-root
cgroups.
The total amount of memory currently being used by the cgroup
and its descendants.
memory.min
A read-write single value file which exists on non-root
cgroups. The default is "0".
Hard memory protection. If the memory usage of a cgroup
is within its effective min boundary, the cgroup's memory
won't be reclaimed under any conditions. If there is no
unprotected reclaimable memory available, OOM killer
is invoked. Above the effective min boundary (or
effective low boundary if it is higher), pages are reclaimed
proportionally to the overage, reducing reclaim pressure for
smaller overages.
Effective min boundary is limited by memory.min values of
all ancestor cgroups. If there is memory.min overcommitment
(child cgroup or cgroups are requiring more protected memory
than parent will allow), then each child cgroup will get
the part of parent's protection proportional to its
actual memory usage below memory.min.
Putting more memory than generally available under this
protection is discouraged and may lead to constant OOMs.
If a memory cgroup is not populated with processes,
its memory.min is ignored.
memory.low
A read-write single value file which exists on non-root
cgroups. The default is "0".
Best-effort memory protection. If the memory usage of a
cgroup is within its effective low boundary, the cgroup's
memory won't be reclaimed unless there is no reclaimable
memory available in unprotected cgroups.
Above the effective low boundary (or
effective min boundary if it is higher), pages are reclaimed
proportionally to the overage, reducing reclaim pressure for
smaller overages.
Effective low boundary is limited by memory.low values of
all ancestor cgroups. If there is memory.low overcommitment
(child cgroup or cgroups are requiring more protected memory
than parent will allow), then each child cgroup will get
the part of parent's protection proportional to its
actual memory usage below memory.low.
Putting more memory than generally available under this
protection is discouraged.
memory.high
A read-write single value file which exists on non-root
cgroups. The default is "max".
Memory usage throttle limit. If a cgroup's usage goes
over the high boundary, the processes of the cgroup are
throttled and put under heavy reclaim pressure.
Going over the high limit never invokes the OOM killer and
under extreme conditions the limit may be breached. The high
limit should be used in scenarios where an external process
monitors the limited cgroup to alleviate heavy reclaim
pressure.
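As a sketch, for a hypothetical cgroup whose workload is expected to stay
well below 8 gigabytes::
# echo 8G > memory.high
Usage beyond this boundary throttles the cgroup and applies heavy reclaim
pressure instead of invoking the OOM killer, which gives a monitoring
agent the opportunity to react.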
memory.max
A read-write single value file which exists on non-root
cgroups. The default is "max".
Memory usage hard limit. This is the main mechanism to limit
memory usage of a cgroup. If a cgroup's memory usage reaches
this limit and can't be reduced, the OOM killer is invoked in
the cgroup. Under certain circumstances, the usage may go
over the limit temporarily.
In the default configuration, regular 0-order allocations always
succeed unless the OOM killer chooses the current task as a victim.
Some kinds of allocations don't invoke the OOM killer.
The caller could retry them differently, return -ENOMEM to userspace,
or silently ignore the failure in cases like disk readahead.
memory.reclaim
A write-only nested-keyed file which exists for all cgroups.
This is a simple interface to trigger memory reclaim in the
target cgroup.
Example::
echo "1G" > memory.reclaim
Please note that the kernel can over or under reclaim from
the target cgroup. If fewer bytes are reclaimed than the
specified amount, -EAGAIN is returned.
Please note that the proactive reclaim (triggered by this
interface) is not meant to indicate memory pressure on the
memory cgroup. Therefore socket memory balancing triggered by
the memory reclaim normally is not exercised in this case.
This means that the networking layer will not adapt based on
reclaim induced by memory.reclaim.
The following nested keys are defined.
========== ================================
swappiness Swappiness value to reclaim with
========== ================================
Specifying a swappiness value instructs the kernel to perform
the reclaim with that swappiness value. Note that this has the
same semantics as vm.swappiness applied to memcg reclaim with
all the existing limitations and potential future extensions.
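For example, to attempt to proactively reclaim 512 MiB while strongly
favoring swap-out of anonymous memory (the value 100 here is purely
illustrative)::
# echo "512M swappiness=100" > memory.reclaim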
memory.peak
A read-write single value file which exists on non-root cgroups.
The max memory usage recorded for the cgroup and its descendants since
either the creation of the cgroup or the most recent reset for that FD.
A write of any non-empty string to this file resets it to the
current memory usage for subsequent reads through the same
file descriptor.
memory.oom.group
A read-write single value file which exists on non-root
cgroups. The default value is "0".
Determines whether the cgroup should be treated as
an indivisible workload by the OOM killer. If set,
all tasks belonging to the cgroup or to its descendants
(if the memory cgroup is not a leaf cgroup) are killed
together or not at all. This can be used to avoid
partial kills to guarantee workload integrity.
Tasks with the OOM protection (oom_score_adj set to -1000)
are treated as an exception and are never killed.
If the OOM killer is invoked in a cgroup, it's not going
to kill any tasks outside of this cgroup, regardless of the
memory.oom.group values of ancestor cgroups.
memory.events
A read-only flat-keyed file which exists on non-root cgroups.
The following entries are defined. Unless specified
otherwise, a value change in this file generates a file
modified event.
Note that all fields in this file are hierarchical and the
file modified event can be generated due to an event down the
hierarchy. For the local events at the cgroup level see
memory.events.local.
low
The number of times the cgroup is reclaimed due to
high memory pressure even though its usage is under
the low boundary. This usually indicates that the low
boundary is over-committed.
high
The number of times processes of the cgroup are
throttled and routed to perform direct memory reclaim
because the high memory boundary was exceeded. For a
cgroup whose memory usage is capped by the high limit
rather than global memory pressure, this event's
occurrences are expected.
max
The number of times the cgroup's memory usage was
about to go over the max boundary. If direct reclaim
fails to bring it down, the cgroup goes to OOM state.
oom
The number of times the cgroup's memory usage
reached the limit and allocation was about to fail.
This event is not raised if the OOM killer is not
considered as an option, e.g. for failed high-order
allocations or if caller asked to not retry attempts.
oom_kill
The number of processes belonging to this cgroup
killed by any kind of OOM killer.
oom_group_kill
The number of times a group OOM has occurred.
memory.events.local
Similar to memory.events but the fields in the file are local
to the cgroup i.e. not hierarchical. The file modified event
generated on this file reflects only the local events.
memory.stat
A read-only flat-keyed file which exists on non-root cgroups.
This breaks down the cgroup's memory footprint into different
types of memory, type-specific details, and other information
on the state and past events of the memory management system.
All memory amounts are in bytes.
The entries are ordered to be human readable, and new entries
can show up in the middle. Don't rely on items remaining in a
fixed position; use the keys to look up specific values!
If an entry has no per-node counter (i.e. does not show up in
memory.numa_stat), the 'npn' (non-per-node) tag is used to
indicate that it will not show up in memory.numa_stat.
anon
Amount of memory used in anonymous mappings such as
brk(), sbrk(), and mmap(MAP_ANONYMOUS)
file
Amount of memory used to cache filesystem data,
including tmpfs and shared memory.
kernel (npn)
Amount of total kernel memory, including
(kernel_stack, pagetables, percpu, vmalloc, slab) in
addition to other kernel memory use cases.
kernel_stack
Amount of memory allocated to kernel stacks.
pagetables
Amount of memory allocated for page tables.
sec_pagetables
Amount of memory allocated for secondary page tables,
this currently includes KVM mmu allocations on x86
and arm64 and IOMMU page tables.
percpu (npn)
Amount of memory used for storing per-cpu kernel
data structures.
sock (npn)
Amount of memory used in network transmission buffers
vmalloc (npn)
Amount of memory used for vmap backed memory.
shmem
Amount of cached filesystem data that is swap-backed,
such as tmpfs, shm segments, shared anonymous mmap()s
zswap
Amount of memory consumed by the zswap compression backend.
zswapped
Amount of application memory swapped out to zswap.
file_mapped
Amount of cached filesystem data mapped with mmap()
file_dirty
Amount of cached filesystem data that was modified but
not yet written back to disk
file_writeback
Amount of cached filesystem data that was modified and
is currently being written back to disk
swapcached
Amount of swap cached in memory. The swapcache is accounted
against both memory and swap usage.
anon_thp
Amount of memory used in anonymous mappings backed by
transparent hugepages
file_thp
Amount of cached filesystem data backed by transparent
hugepages
shmem_thp
Amount of shm, tmpfs, shared anonymous mmap()s backed by
transparent hugepages
inactive_anon, active_anon, inactive_file, active_file, unevictable
Amount of memory, swap-backed and filesystem-backed,
on the internal memory management lists used by the
page reclaim algorithm.
As these represent internal list state (e.g. shmem pages are on anon
memory management lists), inactive_foo + active_foo may not be equal to
the value for the foo counter, since the foo counter is type-based, not
list-based.
slab_reclaimable
Part of "slab" that might be reclaimed, such as
dentries and inodes.
slab_unreclaimable
Part of "slab" that cannot be reclaimed on memory
pressure.
slab (npn)
Amount of memory used for storing in-kernel data
structures.
workingset_refault_anon
Number of refaults of previously evicted anonymous pages.
workingset_refault_file
Number of refaults of previously evicted file pages.
workingset_activate_anon
Number of refaulted anonymous pages that were immediately
activated.
workingset_activate_file
Number of refaulted file pages that were immediately activated.
workingset_restore_anon
Number of restored anonymous pages which have been detected as
an active workingset before they got reclaimed.
workingset_restore_file
Number of restored file pages which have been detected as an
active workingset before they got reclaimed.
workingset_nodereclaim
Number of times a shadow node has been reclaimed
pgscan (npn)
Amount of scanned pages (in an inactive LRU list)
pgsteal (npn)
Amount of reclaimed pages
pgscan_kswapd (npn)
Amount of scanned pages by kswapd (in an inactive LRU list)
pgscan_direct (npn)
Amount of scanned pages directly (in an inactive LRU list)
pgscan_khugepaged (npn)
Amount of scanned pages by khugepaged (in an inactive LRU list)
pgsteal_kswapd (npn)
Amount of reclaimed pages by kswapd
pgsteal_direct (npn)
Amount of reclaimed pages directly
pgsteal_khugepaged (npn)
Amount of reclaimed pages by khugepaged
pgfault (npn)
Total number of page faults incurred
pgmajfault (npn)
Number of major page faults incurred
pgrefill (npn)
Amount of scanned pages (in an active LRU list)
pgactivate (npn)
Amount of pages moved to the active LRU list
pgdeactivate (npn)
Amount of pages moved to the inactive LRU list
pglazyfree (npn)
Amount of pages postponed to be freed under memory pressure
pglazyfreed (npn)
Amount of reclaimed lazyfree pages
swpin_zero
Number of pages swapped into memory and filled with zero, where I/O
was optimized out because the page content was detected to be zero
during swapout.
swpout_zero
Number of zero-filled pages swapped out with I/O skipped due to the
content being detected as zero.
zswpin
Number of pages moved in to memory from zswap.
zswpout
Number of pages moved out of memory to zswap.
zswpwb
Number of pages written from zswap to swap.
thp_fault_alloc (npn)
Number of transparent hugepages which were allocated to satisfy
a page fault. This counter is not present when CONFIG_TRANSPARENT_HUGEPAGE
is not set.
thp_collapse_alloc (npn)
Number of transparent hugepages which were allocated to allow
collapsing an existing range of pages. This counter is not
present when CONFIG_TRANSPARENT_HUGEPAGE is not set.
thp_swpout (npn)
Number of transparent hugepages which were swapped out in one piece
without splitting.
thp_swpout_fallback (npn)
Number of transparent hugepages which were split before swapout.
Usually because the kernel failed to allocate some contiguous swap space
for the huge page.
numa_pages_migrated (npn)
Number of pages migrated by NUMA balancing.
numa_pte_updates (npn)
Number of pages whose page table entries are modified by
NUMA balancing to produce NUMA hinting faults on access.
numa_hint_faults (npn)
Number of NUMA hinting faults.
pgdemote_kswapd
Number of pages demoted by kswapd.
pgdemote_direct
Number of pages demoted directly.
pgdemote_khugepaged
Number of pages demoted by khugepaged.
hugetlb
Amount of memory used by hugetlb pages. This metric only shows
up if hugetlb usage is accounted for in memory.current (i.e.
cgroup is mounted with the memory_hugetlb_accounting option).
memory.numa_stat
A read-only nested-keyed file which exists on non-root cgroups.
This breaks down the cgroup's memory footprint into different
types of memory, type-specific details, and other information
per node on the state of the memory management system.
This is useful for providing visibility into the NUMA locality
information within a memcg since the pages are allowed to be
allocated from any physical node. One of the use cases is evaluating
application performance by combining this information with the
application's CPU allocation.
All memory amounts are in bytes.
The output format of memory.numa_stat is::
type N0=<bytes in node 0> N1=<bytes in node 1> ...
The entries are ordered to be human readable, and new entries
can show up in the middle. Don't rely on items remaining in a
fixed position; use the keys to look up specific values!
See memory.stat for descriptions of the entries.
memory.swap.current
A read-only single value file which exists on non-root
cgroups.
The total amount of swap currently being used by the cgroup
and its descendants.
memory.swap.high
A read-write single value file which exists on non-root
cgroups. The default is "max".
Swap usage throttle limit. If a cgroup's swap usage exceeds
this limit, all its further allocations will be throttled to
allow userspace to implement custom out-of-memory procedures.
This limit marks a point of no return for the cgroup. It is NOT
designed to manage the amount of swapping a workload does
during regular operation. Compare to memory.swap.max, which
prohibits swapping past a set amount, but lets the cgroup
continue unimpeded as long as other memory can be reclaimed.
Healthy workloads are not expected to reach this limit.
memory.swap.peak
A read-write single value file which exists on non-root cgroups.
The max swap usage recorded for the cgroup and its descendants since
the creation of the cgroup or the most recent reset for that FD.
A write of any non-empty string to this file resets it to the
current memory usage for subsequent reads through the same
file descriptor.
memory.swap.max
A read-write single value file which exists on non-root
cgroups. The default is "max".
Swap usage hard limit. If a cgroup's swap usage reaches this
limit, anonymous memory of the cgroup will not be swapped out.
memory.swap.events
A read-only flat-keyed file which exists on non-root cgroups.
The following entries are defined. Unless specified
otherwise, a value change in this file generates a file
modified event.
high
The number of times the cgroup's swap usage was over
the high threshold.
max
The number of times the cgroup's swap usage was about
to go over the max boundary and swap allocation
failed.
fail
The number of times swap allocation failed either
because of running out of swap system-wide or max
limit.
When reduced under the current usage, the existing swap
entries are reclaimed gradually and the swap usage may stay
higher than the limit for an extended period of time. This
reduces the impact on the workload and memory management.
memory.zswap.current
A read-only single value file which exists on non-root
cgroups.
The total amount of memory consumed by the zswap compression
backend.
memory.zswap.max
A read-write single value file which exists on non-root
cgroups. The default is "max".
Zswap usage hard limit. If a cgroup's zswap pool reaches this
limit, it will refuse to take any more stores before existing
entries fault back in or are written out to disk.
memory.zswap.writeback
A read-write single value file. The default value is "1".
Note that this setting is hierarchical, i.e. the writeback would be
implicitly disabled for child cgroups if the upper hierarchy
does so.
When this is set to 0, all swapping attempts to swapping devices
are disabled. This includes both zswap writebacks and swapping due
to zswap store failures. If the zswap store failures are recurring
(e.g. if the pages are incompressible), users can observe
reclaim inefficiency after disabling writeback (because the same
pages might be rejected again and again).
Note that this is subtly different from setting memory.swap.max to
0, as it still allows for pages to be written to the zswap pool.
This setting has no effect if zswap is disabled, and swapping
is allowed unless memory.swap.max is set to 0.
memory.pressure
A read-only nested-keyed file.
Shows pressure stall information for memory. See
:ref:`Documentation/accounting/psi.rst <psi>` for details.
Usage Guidelines
~~~~~~~~~~~~~~~~
"memory.high" is the main mechanism to control memory usage.
Over-committing on high limit (sum of high limits > available memory)
and letting global memory pressure to distribute memory according to
usage is a viable strategy.
Because breach of the high limit doesn't trigger the OOM killer but
throttles the offending cgroup, a management agent has ample
opportunities to monitor and take appropriate actions such as granting
more memory or terminating the workload.
Determining whether a cgroup has enough memory is not trivial as
memory usage doesn't indicate whether the workload can benefit from
more memory. For example, a workload which writes data received from
network to a file can use all available memory but can also operate just as
performantly with a small amount of memory. A measure of memory
pressure - how much the workload is being impacted due to lack of
memory - is necessary to determine whether a workload needs more
memory; unfortunately, memory pressure monitoring mechanism isn't
implemented yet.
Memory Ownership
~~~~~~~~~~~~~~~~
A memory area is charged to the cgroup which instantiated it and stays
charged to the cgroup until the area is released. Migrating a process
to a different cgroup doesn't move the memory usages that it
instantiated while in the previous cgroup to the new cgroup.
A memory area may be used by processes belonging to different cgroups.
To which cgroup the area will be charged is non-deterministic; however,
over time, the memory area is likely to end up in a cgroup which has
enough memory allowance to avoid high reclaim pressure.
If a cgroup sweeps a considerable amount of memory which is expected
to be accessed repeatedly by other cgroups, it may make sense to use
POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
belonging to the affected files to ensure correct memory ownership.
IO
--
The "io" controller regulates the distribution of IO resources. This
controller implements both weight based and absolute bandwidth or IOPS
limit distribution; however, weight based distribution is available
only if cfq-iosched is in use and neither scheme is available for
blk-mq devices.
IO Interface Files
~~~~~~~~~~~~~~~~~~
io.stat
A read-only nested-keyed file.
Lines are keyed by $MAJ:$MIN device numbers and not ordered.
The following nested keys are defined.
====== =====================
rbytes Bytes read
wbytes Bytes written
rios Number of read IOs
wios Number of write IOs
dbytes Bytes discarded
dios Number of discard IOs
====== =====================
An example read output follows::
8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021
io.cost.qos
A read-write nested-keyed file which exists only on the root
cgroup.
This file configures the Quality of Service of the IO cost
model based controller (CONFIG_BLK_CGROUP_IOCOST) which
currently implements "io.weight" proportional control. Lines
are keyed by $MAJ:$MIN device numbers and not ordered. The
line for a given device is populated on the first write for
the device on "io.cost.qos" or "io.cost.model". The following
nested keys are defined.
====== =====================================
enable Weight-based control enable
ctrl "auto" or "user"
rpct Read latency percentile [0, 100]
rlat Read latency threshold
wpct Write latency percentile [0, 100]
wlat Write latency threshold
min Minimum scaling percentage [1, 10000]
max Maximum scaling percentage [1, 10000]
====== =====================================
The controller is disabled by default and can be enabled by
setting "enable" to 1. "rpct" and "wpct" parameters default
to zero and the controller uses internal device saturation
state to adjust the overall IO rate between "min" and "max".
When a better control quality is needed, latency QoS
parameters can be configured. For example::
8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.0
shows that on sdb, the controller is enabled, will consider
the device saturated if the 95th percentile of read completion
latencies is above 75ms or write 150ms, and adjust the overall
IO issue rate between 50% and 150% accordingly.
The lower the saturation point, the better the latency QoS at
the cost of aggregate bandwidth. The narrower the allowed
adjustment range between "min" and "max", the more conformant
to the cost model the IO behavior. Note that the IO issue
base rate may be far off from 100% and setting "min" and "max"
blindly can lead to a significant loss of device capacity or
control quality. "min" and "max" are useful for regulating
devices which show wide temporary behavior changes - e.g. a
ssd which accepts writes at the line speed for a while and
then completely stalls for multiple seconds.
When "ctrl" is "auto", the parameters are controlled by the
kernel and may change automatically. Setting "ctrl" to "user"
or setting any of the percentile and latency parameters puts
it into "user" mode and disables the automatic changes. The
automatic mode can be restored by setting "ctrl" to "auto".
io.cost.model
A read-write nested-keyed file which exists only on the root
cgroup.
This file configures the cost model of the IO cost model based
controller (CONFIG_BLK_CGROUP_IOCOST) which currently
implements "io.weight" proportional control. Lines are keyed
by $MAJ:$MIN device numbers and not ordered. The line for a
given device is populated on the first write for the device on
"io.cost.qos" or "io.cost.model". The following nested keys
are defined.
===== ================================
ctrl "auto" or "user"
model The cost model in use - "linear"
===== ================================
When "ctrl" is "auto", the kernel may change all parameters
dynamically. When "ctrl" is set to "user" or any other
parameters are written to, "ctrl" becomes "user" and the
automatic changes are disabled.
When "model" is "linear", the following model parameters are
defined.
============= ========================================
[r|w]bps The maximum sequential IO throughput
[r|w]seqiops The maximum 4k sequential IOs per second
[r|w]randiops The maximum 4k random IOs per second
============= ========================================
From the above, the builtin linear model determines the base
costs of a sequential and random IO and the cost coefficient
for the IO size. While simple, this model can cover most
common device classes acceptably.
The IO cost model isn't expected to be accurate in absolute
sense and is scaled to the device behavior dynamically.
If needed, tools/cgroup/iocost_coef_gen.py can be used to
generate device-specific coefficients.
io.weight
A read-write flat-keyed file which exists on non-root cgroups.
The default is "default 100".
The first line is the default weight applied to devices
without specific override. The rest are overrides keyed by
$MAJ:$MIN device numbers and not ordered. The weights are in
the range [1, 10000] and specify the relative amount of IO time
the cgroup can use in relation to its siblings.
The default weight can be updated by writing either "default
$WEIGHT" or simply "$WEIGHT". Overrides can be set by writing
"$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".
An example read output follows::
default 100
8:16 200
8:0 50
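Continuing the example above, the default weight and a per-device
override can be updated, and the override later removed, with writes such
as the following (the "8:16" device numbers are illustrative)::
# echo 150 > io.weight
# echo "8:16 300" > io.weight
# echo "8:16 default" > io.weight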
io.max
A read-write nested-keyed file which exists on non-root
cgroups.
BPS and IOPS based IO limit. Lines are keyed by $MAJ:$MIN
device numbers and not ordered. The following nested keys are
defined.
===== ==================================
rbps Max read bytes per second
wbps Max write bytes per second
riops Max read IO operations per second
wiops Max write IO operations per second
===== ==================================
When writing, any number of nested key-value pairs can be
specified in any order. "max" can be specified as the value
to remove a specific limit. If the same key is specified
multiple times, the outcome is undefined.
BPS and IOPS are measured in each IO direction and IOs are
delayed if limit is reached. Temporary bursts are allowed.
Setting read limit at 2M BPS and write at 120 IOPS for 8:16::
echo "8:16 rbps=2097152 wiops=120" > io.max
Reading returns the following::
8:16 rbps=2097152 wbps=max riops=max wiops=120
Write IOPS limit can be removed by writing the following::
echo "8:16 wiops=max" > io.max
Reading now returns the following::
8:16 rbps=2097152 wbps=max riops=max wiops=max
io.pressure
A read-only nested-keyed file.
Shows pressure stall information for IO. See
:ref:`Documentation/accounting/psi.rst <psi>` for details.
Writeback
~~~~~~~~~
Page cache is dirtied through buffered writes and shared mmaps and
written asynchronously to the backing filesystem by the writeback
mechanism. Writeback sits between the memory and IO domains and
regulates the proportion of dirty memory by balancing dirtying and
write IOs.
The io controller, in conjunction with the memory controller,
implements control of page cache writeback IOs. The memory controller
defines the memory domain that dirty memory ratio is calculated and
maintained for and the io controller defines the io domain which
writes out dirty pages for the memory domain. Both system-wide and
per-cgroup dirty memory states are examined and the more restrictive
of the two is enforced.
cgroup writeback requires explicit support from the underlying
filesystem. Currently, cgroup writeback is implemented on ext2, ext4,
btrfs, f2fs, and xfs. On other filesystems, all writeback IOs are
attributed to the root cgroup.
There are inherent differences in memory and writeback management
which affects how cgroup ownership is tracked. Memory is tracked per
page while writeback per inode. For the purpose of writeback, an
inode is assigned to a cgroup and all IO requests to write dirty pages
from the inode are attributed to that cgroup.
As cgroup ownership for memory is tracked per page, there can be pages
which are associated with different cgroups than the one the inode is
associated with. These are called foreign pages. The writeback
constantly keeps track of foreign pages and, if a particular foreign
cgroup becomes the majority over a certain period of time, switches
the ownership of the inode to that cgroup.
While this model is enough for most use cases where a given inode is
mostly dirtied by a single cgroup even when the main writing cgroup
changes over time, use cases where multiple cgroups write to a single
inode simultaneously are not supported well. In such circumstances, a
significant portion of IOs are likely to be attributed incorrectly.
As the memory controller assigns page ownership on the first use and
doesn't update it until the page is released, even if writeback
strictly follows page ownership, multiple cgroups dirtying overlapping
areas wouldn't work as expected. It's recommended to avoid such usage
patterns.
The sysctl knobs which affect writeback behavior are applied to cgroup
writeback as follows.
vm.dirty_background_ratio, vm.dirty_ratio
These ratios apply the same to cgroup writeback with the
amount of available memory capped by limits imposed by the
memory controller and system-wide clean memory.
vm.dirty_background_bytes, vm.dirty_bytes
For cgroup writeback, this is calculated into ratio against
total available memory and applied the same way as
vm.dirty[_background]_ratio.
IO Latency
~~~~~~~~~~
This is a cgroup v2 controller for IO workload protection. You provide a group
with a latency target, and if the average latency exceeds that target the
controller will throttle any peers that have a higher latency target than the
protected workload.
The limits are only applied at the peer level in the hierarchy. This means that
in the diagram below, only groups A, B, and C will influence each other, and
groups D and F will influence each other. Group G will influence nobody::
[root]
/ | \
A B C
/ \ |
D F G
So the ideal way to configure this is to set io.latency in groups A, B, and C.
Generally you do not want to set a value lower than the latency your device
supports. Experiment to find the value that works best for your workload.
Start at higher than the expected latency for your device and watch the
avg_lat value in io.stat for your workload group to get an idea of the
latency you see during normal operation. Use the avg_lat value as a basis for
your real setting, setting at 10-15% higher than the value in io.stat.
How IO Latency Throttling Works
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
io.latency is work conserving; so as long as everybody is meeting their latency
target the controller doesn't do anything. Once a group starts missing its
target it begins throttling any peer group that has a higher target than itself.
This throttling takes 2 forms:
- Queue depth throttling. This is the number of outstanding IOs a group is
allowed to have. We will clamp down relatively quickly, starting at no limit
and going all the way down to 1 IO at a time.
- Artificial delay induction. There are certain types of IO that cannot be
throttled without possibly adversely affecting higher priority groups. This
includes swapping and metadata IO. These types of IO are allowed to occur
normally, however they are "charged" to the originating group. If the
originating group is being throttled you will see the use_delay and delay
fields in io.stat increase. The delay value is how many microseconds that are
being added to any process that runs in this group. Because this number can
grow quite large if there is a lot of swapping or metadata IO occurring we
limit the individual delay events to 1 second at a time.
Once the victimized group starts meeting its latency target again it will start
unthrottling any peer groups that were throttled previously. If the victimized
group simply stops doing IO the global counter will unthrottle appropriately.
IO Latency Interface Files
~~~~~~~~~~~~~~~~~~~~~~~~~~
io.latency
This takes a similar format as the other controllers.
"MAJOR:MINOR target=<target time in microseconds>"
io.stat
If the controller is enabled you will see extra stats in io.stat in
addition to the normal ones.
depth
This is the current queue depth for the group.
avg_lat
This is an exponential moving average with a decay rate of 1/exp
bound by the sampling interval. The decay rate interval can be
calculated by multiplying the win value in io.stat by the
corresponding number of samples based on the win value.
win
The sampling window size in milliseconds. This is the minimum
duration of time between evaluation events. Windows only elapse
with IO activity. Idle periods extend the most recent window.
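As a configuration sketch following the guidance above, assuming a
protected workload on a hypothetical device 8:16 whose avg_lat hovers
around 9ms during normal operation, a target of roughly 10ms could be
set with::
# echo "8:16 target=10000" > io.latency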
IO Priority
~~~~~~~~~~~
A single attribute controls the behavior of the I/O priority cgroup policy,
namely the io.prio.class attribute. The following values are accepted for
that attribute:
no-change
Do not modify the I/O priority class.
promote-to-rt
For requests that have a non-RT I/O priority class, change it into RT.
Also change the priority level of these requests to 4. Do not modify
the I/O priority of requests that have priority class RT.
restrict-to-be
For requests that do not have an I/O priority class or that have I/O
priority class RT, change it into BE. Also change the priority level
of these requests to 0. Do not modify the I/O priority class of
requests that have priority class IDLE.
idle
Change the I/O priority class of all requests into IDLE, the lowest
I/O priority class.
none-to-rt
Deprecated. Just an alias for promote-to-rt.
The following numerical values are associated with the I/O priority policies:
+----------------+---+
| no-change | 0 |
+----------------+---+
| promote-to-rt | 1 |
+----------------+---+
| restrict-to-be | 2 |
+----------------+---+
| idle | 3 |
+----------------+---+
The numerical value that corresponds to each I/O priority class is as follows:
+-------------------------------+---+
| IOPRIO_CLASS_NONE | 0 |
+-------------------------------+---+
| IOPRIO_CLASS_RT (real-time) | 1 |
+-------------------------------+---+
| IOPRIO_CLASS_BE (best effort) | 2 |
+-------------------------------+---+
| IOPRIO_CLASS_IDLE | 3 |
+-------------------------------+---+
The algorithm to set the I/O priority class for a request is as follows:
- If I/O priority class policy is promote-to-rt, change the request I/O
priority class to IOPRIO_CLASS_RT and change the request I/O priority
level to 4.
- If I/O priority class policy is not promote-to-rt, translate the I/O priority
class policy into a number, then change the request I/O priority class
into the maximum of the I/O priority class policy number and the numerical
I/O priority class.
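For example, assuming the io controller is enabled for a hypothetical
cgroup, all requests issued from it can be restricted to the best-effort
class with::
# echo restrict-to-be > io.prio.class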
PID
---
The process number controller is used to allow a cgroup to stop any
new tasks from being fork()'d or clone()'d after a specified limit is
reached.
The number of tasks in a cgroup can be exhausted in ways which other
controllers cannot prevent, thus warranting its own controller. For
example, a fork bomb is likely to exhaust the number of tasks before
hitting memory restrictions.
Note that PIDs used in this controller refer to TIDs, process IDs as
used by the kernel.
PID Interface Files
~~~~~~~~~~~~~~~~~~~
pids.max
A read-write single value file which exists on non-root
cgroups. The default is "max".
Hard limit of number of processes.
pids.current
A read-only single value file which exists on non-root cgroups.
The number of processes currently in the cgroup and its
descendants.
pids.peak
A read-only single value file which exists on non-root cgroups.
The maximum value that the number of processes in the cgroup and its
descendants has ever reached.
pids.events
A read-only flat-keyed file which exists on non-root cgroups. Unless
specified otherwise, a value change in this file generates a file
modified event. The following entries are defined.
max
The number of times the cgroup's total number of processes hit the pids.max
limit (see also pids_localevents).
pids.events.local
Similar to pids.events but the fields in the file are local
to the cgroup i.e. not hierarchical. The file modified event
generated on this file reflects only the local events.
Organisational operations are not blocked by cgroup policies, so it is
possible to have pids.current > pids.max. This can be done by either
setting the limit to be smaller than pids.current, or attaching enough
processes to the cgroup such that pids.current is larger than
pids.max. However, it is not possible to violate a cgroup PID policy
through fork() or clone(). These will return -EAGAIN if the creation
of a new process would cause a cgroup policy to be violated.
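As an illustration, assuming a hypothetical cgroup with the pids
controller enabled::
# echo 4 > pids.max
With this limit in place, a fork() or clone() that would push pids.current
past 4 fails with -EAGAIN, while attaching additional processes from
outside the cgroup is still permitted and may push pids.current above
pids.max.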
Cpuset
------
The "cpuset" controller provides a mechanism for constraining
the CPU and memory node placement of tasks to only the resources
specified in the cpuset interface files in a task's current cgroup.
This is especially valuable on large NUMA systems where placing jobs
on properly sized subsets of the systems with careful processor and
memory placement to reduce cross-node memory access and contention
can improve overall system performance.
The "cpuset" controller is hierarchical. That means the controller
cannot use CPUs or memory nodes not allowed in its parent.
Cpuset Interface Files
~~~~~~~~~~~~~~~~~~~~~~
cpuset.cpus
A read-write multiple values file which exists on non-root
cpuset-enabled cgroups.
It lists the requested CPUs to be used by tasks within this
cgroup. The actual list of CPUs to be granted, however, is
subjected to constraints imposed by its parent and can differ
from the requested CPUs.
The CPU numbers are comma-separated numbers or ranges.
For example::
# cat cpuset.cpus
0-4,6,8-10
An empty value indicates that the cgroup is using the same
setting as the nearest cgroup ancestor with a non-empty
"cpuset.cpus" or all the available CPUs if none is found.
The value of "cpuset.cpus" stays constant until the next update
and won't be affected by any CPU hotplug events.
cpuset.cpus.effective
A read-only multiple values file which exists on all
cpuset-enabled cgroups.
It lists the onlined CPUs that are actually granted to this
cgroup by its parent. These CPUs are allowed to be used by
tasks within the current cgroup.
If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file shows
all the CPUs from the parent cgroup that can be available to
be used by this cgroup. Otherwise, it should be a subset of
"cpuset.cpus" unless none of the CPUs listed in "cpuset.cpus"
can be granted. In this case, it will be treated just like an
empty "cpuset.cpus".
Its value will be affected by CPU hotplug events.
cpuset.mems
A read-write multiple values file which exists on non-root
cpuset-enabled cgroups.
It lists the requested memory nodes to be used by tasks within
this cgroup. The actual list of memory nodes granted, however,
is subjected to constraints imposed by its parent and can differ
from the requested memory nodes.
The memory node numbers are comma-separated numbers or ranges.
For example::
# cat cpuset.mems
0-1,3
An empty value indicates that the cgroup is using the same
setting as the nearest cgroup ancestor with a non-empty
"cpuset.mems" or all the available memory nodes if none
is found.
The value of "cpuset.mems" stays constant until the next update
and won't be affected by any memory nodes hotplug events.
Setting a non-empty value to "cpuset.mems" causes memory of
tasks within the cgroup to be migrated to the designated nodes if
they are currently using memory outside of the designated nodes.
There is a cost for this memory migration. The migration
may not be complete and some memory pages may be left behind.
So it is recommended that "cpuset.mems" should be set properly
before spawning new tasks into the cpuset. Even if there is
a need to change "cpuset.mems" with active tasks, it shouldn't
be done frequently.
cpuset.mems.effective
A read-only multiple values file which exists on all
cpuset-enabled cgroups.
It lists the onlined memory nodes that are actually granted to
this cgroup by its parent. These memory nodes are allowed to
be used by tasks within the current cgroup.
If "cpuset.mems" is empty, it shows all the memory nodes from the
parent cgroup that will be available to be used by this cgroup.
Otherwise, it should be a subset of "cpuset.mems" unless none of
the memory nodes listed in "cpuset.mems" can be granted. In this
case, it will be treated just like an empty "cpuset.mems".
Its value will be affected by memory nodes hotplug events.
cpuset.cpus.exclusive
A read-write multiple values file which exists on non-root
cpuset-enabled cgroups.
It lists all the exclusive CPUs that are allowed to be used
to create a new cpuset partition. Its value is not used
unless the cgroup becomes a valid partition root. See the
"cpuset.cpus.partition" section below for a description of what
a cpuset partition is.
When the cgroup becomes a partition root, the actual exclusive
CPUs that are allocated to that partition are listed in
"cpuset.cpus.exclusive.effective" which may be different
from "cpuset.cpus.exclusive". If "cpuset.cpus.exclusive"
has previously been set, "cpuset.cpus.exclusive.effective"
is always a subset of it.
Users can manually set it to a value that is different from
"cpuset.cpus". One constraint in setting it is that the list of
CPUs must be exclusive with respect to "cpuset.cpus.exclusive"
of its sibling. If "cpuset.cpus.exclusive" of a sibling cgroup
isn't set, its "cpuset.cpus" value, if set, cannot be a subset of
the exclusive CPUs being requested here, so that the sibling is left
with at least one CPU when the exclusive CPUs are taken away.
For a parent cgroup, any one of its exclusive CPUs can only
be distributed to at most one of its child cgroups. Having an
exclusive CPU appearing in two or more of its child cgroups is
not allowed (the exclusivity rule). A value that violates the
exclusivity rule will be rejected with a write error.
The root cgroup is a partition root and all its available CPUs
are in its exclusive CPU set.
cpuset.cpus.exclusive.effective
A read-only multiple values file which exists on all non-root
cpuset-enabled cgroups.
This file shows the effective set of exclusive CPUs that
can be used to create a partition root. The content
of this file will always be a subset of its parent's
"cpuset.cpus.exclusive.effective" if its parent is not the root
cgroup. It will also be a subset of "cpuset.cpus.exclusive"
if it is set. If "cpuset.cpus.exclusive" is not set, it is
treated to have an implicit value of "cpuset.cpus" in the
formation of local partition.
cpuset.cpus.isolated
A read-only and root cgroup only multiple values file.
This file shows the set of all isolated CPUs used in existing
isolated partitions. It will be empty if no isolated partition
is created.
cpuset.cpus.partition
A read-write single value file which exists on non-root
cpuset-enabled cgroups. This flag is owned by the parent cgroup
and is not delegatable.
It accepts only the following input values when written to.
========== =====================================
"member" Non-root member of a partition
"root" Partition root
"isolated" Partition root without load balancing
========== =====================================
A cpuset partition is a collection of cpuset-enabled cgroups with
a partition root at the top of the hierarchy and its descendants
except those that are separate partition roots themselves and
their descendants. A partition has exclusive access to the
set of exclusive CPUs allocated to it. Other cgroups outside
of that partition cannot use any CPUs in that set.
There are two types of partitions - local and remote. A local
partition is one whose parent cgroup is also a valid partition
root. A remote partition is one whose parent cgroup is not a
valid partition root itself. Writing to "cpuset.cpus.exclusive"
is optional for the creation of a local partition as its
"cpuset.cpus.exclusive" file will assume an implicit value that
is the same as "cpuset.cpus" if it is not set. Writing the
proper "cpuset.cpus.exclusive" values down the cgroup hierarchy
before the target partition root is mandatory for the creation
of a remote partition.
Currently, a remote partition cannot be created under a local
partition. None of the ancestors of a remote partition root, other
than the root cgroup, can be a partition root.
The root cgroup is always a partition root and its state cannot
be changed. All other non-root cgroups start out as "member".
When set to "root", the current cgroup is the root of a new
partition or scheduling domain. The set of exclusive CPUs is
determined by the value of its "cpuset.cpus.exclusive.effective".
When set to "isolated", the CPUs in that partition will be in
an isolated state without any load balancing from the scheduler
and excluded from the unbound workqueues. Tasks placed in such
a partition with multiple CPUs should be carefully distributed
and bound to each of the individual CPUs for optimal performance.
A partition root ("root" or "isolated") can be in one of the
two possible states - valid or invalid. An invalid partition
root is in a degraded state where some state information may
be retained, but behaves more like a "member".
All possible state transitions among "member", "root" and
"isolated" are allowed.
On read, the "cpuset.cpus.partition" file can show the following
values.
============================= =====================================
"member" Non-root member of a partition
"root" Partition root
"isolated" Partition root without load balancing
"root invalid (<reason>)" Invalid partition root
"isolated invalid (<reason>)" Invalid isolated partition root
============================= =====================================
In the case of an invalid partition root, a descriptive string on
why the partition is invalid is included within parentheses.
For a local partition root to be valid, the following conditions
must be met.
1) The parent cgroup is a valid partition root.
2) The "cpuset.cpus.exclusive.effective" file cannot be empty,
though it may contain offline CPUs.
3) The "cpuset.cpus.effective" cannot be empty unless there is
no task associated with this partition.
For a remote partition root to be valid, all the above conditions
except the first one must be met.
External events like hotplug or changes to "cpuset.cpus" or
"cpuset.cpus.exclusive" can cause a valid partition root to
become invalid and vice versa. Note that a task cannot be
moved to a cgroup with empty "cpuset.cpus.effective".
A valid non-root parent partition may distribute out all its CPUs
to its child local partitions when there is no task associated
with it.
Care must be taken when changing a valid partition root to "member",
as all its child local partitions, if present, will become invalid,
causing disruption to tasks running in those child
partitions. These inactivated partitions could be recovered if
their parent is switched back to a partition root with a proper
value in "cpuset.cpus" or "cpuset.cpus.exclusive".
Poll and inotify events are triggered whenever the state of
"cpuset.cpus.partition" changes. That includes changes caused
by write to "cpuset.cpus.partition", cpu hotplug or other
changes that modify the validity status of the partition.
This will allow user space agents to monitor unexpected changes
to "cpuset.cpus.partition" without the need to do continuous
polling.
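For example, a monitoring agent could watch such state changes with a
tool like inotifywait (the cgroup path below is hypothetical)::
# inotifywait -m -e modify /sys/fs/cgroup/part1/cpuset.cpus.partition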
A user can pre-configure certain CPUs to an isolated state
with load balancing disabled at boot time with the "isolcpus"
kernel boot command line option. If those CPUs are to be put
into a partition, they have to be used in an isolated partition.
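The following is a minimal sketch of creating a local isolated
partition directly under the root cgroup. It assumes the cpuset
controller is enabled there and uses a hypothetical cgroup "part1"::
# cd /sys/fs/cgroup
# echo +cpuset > cgroup.subtree_control
# mkdir part1
# echo "2-3" > part1/cpuset.cpus
# echo isolated > part1/cpuset.cpus.partition
# cat part1/cpuset.cpus.partition
isolated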
Device controller
-----------------
The device controller manages access to device files. It covers both
the creation of new device files (using mknod) and access to existing
device files.
Cgroup v2 device controller has no interface files and is implemented
on top of cgroup BPF. To control access to device files, a user may
create bpf programs of type BPF_PROG_TYPE_CGROUP_DEVICE and attach
them to cgroups with BPF_CGROUP_DEVICE flag. On an attempt to access a
device file, corresponding BPF programs will be executed, and depending
on the return value the attempt will succeed or fail with -EPERM.
A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the
bpf_cgroup_dev_ctx structure, which describes the device access attempt:
access type (mknod/read/write) and device (type, major and minor numbers).
If the program returns 0, the attempt fails with -EPERM, otherwise it
succeeds.
An example of BPF_PROG_TYPE_CGROUP_DEVICE program may be found in
tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel source tree.
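For illustration only, a compiled program of this type could be loaded
and attached with bpftool roughly as follows (the object file and
cgroup names are hypothetical)::
# bpftool prog load dev_cgroup.bpf.o /sys/fs/bpf/dev_cgroup
# bpftool cgroup attach /sys/fs/cgroup/cg1 device pinned /sys/fs/bpf/dev_cgroup
# bpftool cgroup show /sys/fs/cgroup/cg1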
RDMA
----
The "rdma" controller regulates the distribution and accounting of
RDMA resources.
RDMA Interface Files
~~~~~~~~~~~~~~~~~~~~
rdma.max
A read-write nested-keyed file that exists for all the cgroups
except root. It describes the currently configured resource limit
for an RDMA/IB device.
Lines are keyed by device name and are not ordered.
Each line contains a space-separated resource name and its configured
limit that can be distributed.
The following nested keys are defined.
========== =============================
hca_handle Maximum number of HCA Handles
hca_object Maximum number of HCA Objects
========== =============================
An example for mlx4 and ocrdma device follows::
mlx4_0 hca_handle=2 hca_object=2000
ocrdma1 hca_handle=3 hca_object=max
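A limit can be configured by writing the device name followed by the
keys to be updated, for example (device names are illustrative)::
# echo mlx4_0 hca_handle=2 hca_object=2000 > rdma.max
# echo ocrdma1 hca_handle=3 > rdma.max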
rdma.current
A read-only file that describes current resource usage.
It exists for all the cgroups except root.
An example for mlx4 and ocrdma device follows::
mlx4_0 hca_handle=1 hca_object=20
ocrdma1 hca_handle=1 hca_object=23
HugeTLB
-------
The HugeTLB controller allows limiting HugeTLB usage per control group and
enforces the controller limit at page fault time.
HugeTLB Interface Files
~~~~~~~~~~~~~~~~~~~~~~~
hugetlb.<hugepagesize>.current
Show current usage for "hugepagesize" hugetlb. It exists for all
the cgroups except root.
hugetlb.<hugepagesize>.max
Set/show the hard limit of "hugepagesize" hugetlb usage.
The default value is "max". It exists for all the cgroups except root.
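For example, assuming 2MB huge pages are configured on the system, the
limit for a cgroup could be capped at 1 GiB (the value is in bytes)::
# echo 1073741824 > hugetlb.2MB.max
# cat hugetlb.2MB.max
1073741824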
hugetlb.<hugepagesize>.events
A read-only flat-keyed file which exists on non-root cgroups.
max
The number of allocation failures due to the HugeTLB limit
hugetlb.<hugepagesize>.events.local
Similar to hugetlb.<hugepagesize>.events but the fields in the file
are local to the cgroup i.e. not hierarchical. The file modified event
generated on this file reflects only the local events.
hugetlb.<hugepagesize>.numa_stat
Similar to memory.numa_stat, it shows the numa information of the
hugetlb pages of <hugepagesize> in this cgroup. Only active in
use hugetlb pages are included. The per-node values are in bytes.
Misc
----
The Miscellaneous cgroup provides the resource limiting and tracking
mechanism for the scalar resources which cannot be abstracted like the other
cgroup resources. The controller is enabled by the CONFIG_CGROUP_MISC config
option.
A resource can be added to the controller via enum misc_res_type{} in the
include/linux/misc_cgroup.h file and the corresponding name via misc_res_name[]
in the kernel/cgroup/misc.c file. Provider of the resource must set its
capacity prior to using the resource by calling misc_cg_set_capacity().
Once a capacity is set then the resource usage can be updated using charge and
uncharge APIs. All of the APIs to interact with misc controller are in
include/linux/misc_cgroup.h.
Misc Interface Files
~~~~~~~~~~~~~~~~~~~~
The miscellaneous controller provides the interface files described below. For example, if two misc resources (res_a and res_b) are registered, then:
misc.capacity
A read-only flat-keyed file shown only in the root cgroup. It shows
miscellaneous scalar resources available on the platform along with
their quantities::
$ cat misc.capacity
res_a 50
res_b 10
misc.current
A read-only flat-keyed file shown in all cgroups. It shows
the current usage of the resources in the cgroup and its children.::
$ cat misc.current
res_a 3
res_b 0
misc.peak
A read-only flat-keyed file shown in all cgroups. It shows the
historical maximum usage of the resources in the cgroup and its
children.::
$ cat misc.peak
res_a 10
res_b 8
misc.max
A read-write flat-keyed file shown in the non-root cgroups. Allowed
maximum usage of the resources in the cgroup and its children.::
$ cat misc.max
res_a max
res_b 4
Limit can be set by::
# echo res_a 1 > misc.max
Limit can be set to max by::
# echo res_a max > misc.max
Limits can be set higher than the capacity value in the misc.capacity
file.
misc.events
A read-only flat-keyed file which exists on non-root cgroups. The
following entries are defined. Unless specified otherwise, a value
change in this file generates a file modified event. All fields in
this file are hierarchical.
max
The number of times the cgroup's resource usage was
about to go over the max boundary.
misc.events.local
Similar to misc.events but the fields in the file are local to the
cgroup i.e. not hierarchical. The file modified event generated on
this file reflects only the local events.
Migration and Ownership
~~~~~~~~~~~~~~~~~~~~~~~
A miscellaneous scalar resource is charged to the cgroup in which it is used
first, and stays charged to that cgroup until that resource is freed. Migrating
a process to a different cgroup does not move the charge to the destination
cgroup where the process has moved.
Others
------
perf_event
~~~~~~~~~~
perf_event controller, if not mounted on a legacy hierarchy, is
automatically enabled on the v2 hierarchy so that perf events can
always be filtered by cgroup v2 path. The controller can still be
moved to a legacy hierarchy after v2 hierarchy is populated.
Non-normative information
-------------------------
This section contains information that isn't considered to be a part of
the stable kernel API and so is subject to change.
CPU controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When distributing CPU cycles in the root cgroup each thread in this
cgroup is treated as if it was hosted in a separate child cgroup of the
root cgroup. This child cgroup weight is dependent on its thread nice
level.
For details of this mapping see sched_prio_to_weight array in
kernel/sched/core.c file (values from this array should be scaled
appropriately so the neutral - nice 0 - value is 100 instead of 1024).
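For illustration, assuming the current sched_prio_to_weight values,
the scaling works out roughly as follows::
nice   0: 1024  * 100 / 1024  = 100
nice -20: 88761 * 100 / 1024 ~= 8668
nice  19: 15    * 100 / 1024 ~= 1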
IO controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Root cgroup processes are hosted in an implicit leaf child node.
When distributing IO resources this implicit child node is taken into
account as if it was a normal child cgroup of the root cgroup with a
weight value of 200.
Namespace
=========
Basics
------
cgroup namespace provides a mechanism to virtualize the view of the
"/proc/$PID/cgroup" file and cgroup mounts. The CLONE_NEWCGROUP clone
flag can be used with clone(2) and unshare(2) to create a new cgroup
namespace. The process running inside the cgroup namespace will have
its "/proc/$PID/cgroup" output restricted to cgroupns root. The
cgroupns root is the cgroup of the process at the time of creation of
the cgroup namespace.
Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
complete path of the cgroup of a process. In a container setup where
a set of cgroups and namespaces are intended to isolate processes the
"/proc/$PID/cgroup" file may leak potential system level information
to the isolated processes. For example::
# cat /proc/self/cgroup
0::/batchjobs/container_id1
The path '/batchjobs/container_id1' can be considered as system-data
and undesirable to expose to the isolated processes. cgroup namespace
can be used to restrict visibility of this path. For example, before
creating a cgroup namespace, one would see::
# ls -l /proc/self/ns/cgroup
lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
# cat /proc/self/cgroup
0::/batchjobs/container_id1
After unsharing a new namespace, the view changes::
# ls -l /proc/self/ns/cgroup
lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
# cat /proc/self/cgroup
0::/
When some thread from a multi-threaded process unshares its cgroup
namespace, the new cgroupns gets applied to the entire process (all
the threads). This is natural for the v2 hierarchy; however, for the
legacy hierarchies, this may be unexpected.
A cgroup namespace is alive as long as there are processes inside or
mounts pinning it. When the last usage goes away, the cgroup
namespace is destroyed. The cgroupns root and the actual cgroups
remain.
The Root and Views
------------------
The 'cgroupns root' for a cgroup namespace is the cgroup in which the
process calling unshare(2) is running. For example, if a process in
/batchjobs/container_id1 cgroup calls unshare, cgroup
/batchjobs/container_id1 becomes the cgroupns root. For the
init_cgroup_ns, this is the real root ('/') cgroup.
The cgroupns root cgroup does not change even if the namespace creator
process later moves to a different cgroup::
# ~/unshare -c # unshare cgroupns in some cgroup
# cat /proc/self/cgroup
0::/
# mkdir sub_cgrp_1
# echo 0 > sub_cgrp_1/cgroup.procs
# cat /proc/self/cgroup
0::/sub_cgrp_1
Each process gets its namespace-specific view of "/proc/$PID/cgroup".
Processes running inside the cgroup namespace will be able to see
cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
From within an unshared cgroupns::
# sleep 100000 &
[1] 7353
# echo 7353 > sub_cgrp_1/cgroup.procs
# cat /proc/7353/cgroup
0::/sub_cgrp_1
From the initial cgroup namespace, the real cgroup path will be
visible::
$ cat /proc/7353/cgroup
0::/batchjobs/container_id1/sub_cgrp_1
From a sibling cgroup namespace (that is, a namespace rooted at a
different cgroup), the cgroup path relative to its own cgroup
namespace root will be shown. For instance, if PID 7353's cgroup
namespace root is at '/batchjobs/container_id2', then it will see::
# cat /proc/7353/cgroup
0::/../container_id2/sub_cgrp_1
Note that the relative path always starts with '/' to indicate that
it is relative to the cgroup namespace root of the caller.
Migration and setns(2)
----------------------
Processes inside a cgroup namespace can move into and out of the
namespace root if they have proper access to external cgroups. For
example, from inside a namespace with cgroupns root at
/batchjobs/container_id1, and assuming that the global hierarchy is
still accessible inside cgroupns::
# cat /proc/7353/cgroup
0::/sub_cgrp_1
# echo 7353 > batchjobs/container_id2/cgroup.procs
# cat /proc/7353/cgroup
0::/../container_id2
Note that this kind of setup is not encouraged. A task inside cgroup
namespace should only be exposed to its own cgroupns hierarchy.
setns(2) to another cgroup namespace is allowed when:
(a) the process has CAP_SYS_ADMIN against its current user namespace
(b) the process has CAP_SYS_ADMIN against the target cgroup
namespace's userns
No implicit cgroup changes happen with attaching to another cgroup
namespace. It is expected that someone moves the attaching
process under the target cgroup namespace root.
Interaction with Other Namespaces
---------------------------------
Namespace specific cgroup hierarchy can be mounted by a process
running inside a non-init cgroup namespace::
# mount -t cgroup2 none $MOUNT_POINT
This will mount the unified cgroup hierarchy with cgroupns root as the
filesystem root. The process needs CAP_SYS_ADMIN against its user and
mount namespaces.
The virtualization of /proc/self/cgroup file combined with restricting
the view of cgroup hierarchy by namespace-private cgroupfs mount
provides a properly isolated cgroup view inside the container.
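A minimal manual sketch of such a setup (container runtimes normally
perform the equivalent steps) could look like::
# unshare -C -m   # new cgroup and mount namespaces
# mount -t cgroup2 none /mnt
# cat /proc/self/cgroup
0::/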
Information on Kernel Programming
=================================
This section contains kernel programming information in the areas
where interacting with cgroup is necessary. cgroup core and
controllers are not covered.
Filesystem Support for Writeback
--------------------------------
A filesystem can support cgroup writeback by updating
address_space_operations->writepage[s]() to annotate bio's using the
following two functions.
wbc_init_bio(@wbc, @bio)
Should be called for each bio carrying writeback data and
associates the bio with the inode's owner cgroup and the
corresponding request queue. This must be called after
a queue (device) has been associated with the bio and
before submission.
wbc_account_cgroup_owner(@wbc, @folio, @bytes)
Should be called for each data segment being written out.
While this function doesn't care exactly when it's called
during the writeback session, it's the easiest and most
natural to call it as data segments are added to a bio.
With writeback bio's annotated, cgroup support can be enabled per
super_block by setting SB_I_CGROUPWB in ->s_iflags. This allows for
selective disabling of cgroup writeback support which is helpful when
certain filesystem features, e.g. journaled data mode, are
incompatible.
wbc_init_bio() binds the specified bio to its cgroup. Depending on
the configuration, the bio may be executed at a lower priority and if
the writeback session is holding shared resources, e.g. a journal
entry, may lead to priority inversion. There is no one easy solution
for the problem. Filesystems can try to work around specific problem
cases by skipping wbc_init_bio() and using bio_associate_blkg()
directly.
Deprecated v1 Core Features
===========================
- Multiple hierarchies including named ones are not supported.
- None of the v1 mount options are supported.
- The "tasks" file is removed and "cgroup.procs" is not sorted.
- "cgroup.clone_children" is removed.
- /proc/cgroups is meaningless for v2. Use "cgroup.controllers" or
"cgroup.stat" files at the root instead.
Issues with v1 and Rationales for v2
====================================
Multiple Hierarchies
--------------------
cgroup v1 allowed an arbitrary number of hierarchies and each
hierarchy could host any number of controllers. While this seemed to
provide a high level of flexibility, it wasn't useful in practice.
For example, as there is only one instance of each controller, utility
type controllers such as freezer which can be useful in all
hierarchies could only be used in one. The issue is exacerbated by
the fact that controllers couldn't be moved to another hierarchy once
hierarchies were populated. Another issue was that all controllers
bound to a hierarchy were forced to have exactly the same view of the
hierarchy. It wasn't possible to vary the granularity depending on
the specific controller.
In practice, these issues heavily limited which controllers could be
put on the same hierarchy and most configurations resorted to putting
each controller on its own hierarchy. Only closely related ones, such
as the cpu and cpuacct controllers, made sense to be put on the same
hierarchy. This often meant that userland ended up managing multiple
similar hierarchies repeating the same steps on each hierarchy
whenever a hierarchy management operation was necessary.
Furthermore, support for multiple hierarchies came at a steep cost.
It greatly complicated cgroup core implementation but more importantly
the support for multiple hierarchies restricted how cgroup could be
used in general and what controllers were able to do.
There was no limit on how many hierarchies there might be, which meant
that a thread's cgroup membership couldn't be described in finite
length. The key might contain any number of entries and was unlimited
in length, which made it highly awkward to manipulate and led to
addition of controllers which existed only to identify membership,
which in turn exacerbated the original problem of proliferating number
of hierarchies.
Also, as a controller couldn't have any expectation regarding the
topologies of hierarchies other controllers might be on, each
controller had to assume that all other controllers were attached to
completely orthogonal hierarchies. This made it impossible, or at
least very cumbersome, for controllers to cooperate with each other.
In most use cases, putting controllers on hierarchies which are
completely orthogonal to each other isn't necessary. What usually is
called for is the ability to have differing levels of granularity
depending on the specific controller. In other words, hierarchy may
be collapsed from leaf towards root when viewed from specific
controllers. For example, a given configuration might not care about
how memory is distributed beyond a certain level while still wanting
to control how CPU cycles are distributed.
Thread Granularity
------------------
cgroup v1 allowed threads of a process to belong to different cgroups.
This didn't make sense for some controllers, and those controllers
ended up implementing different ways to ignore such situations, but,
much more importantly, it blurred the line between the API exposed to
individual applications and the system management interface.
Generally, in-process knowledge is available only to the process
itself; thus, unlike service-level organization of processes,
categorizing threads of a process requires active participation from
the application which owns the target process.
cgroup v1 had an ambiguously defined delegation model which got abused
in combination with thread granularity. cgroups were delegated to
individual applications so that they could create and manage their own
sub-hierarchies and control resource distributions along them. This
effectively raised cgroup to the status of a syscall-like API exposed
to lay programs.
First of all, cgroup has a fundamentally inadequate interface to be
exposed this way. For a process to access its own knobs, it has to
extract the path on the target hierarchy from /proc/self/cgroup,
construct the path by appending the name of the knob to the path, open
and then read and/or write to it. This is not only extremely clunky
and unusual but also inherently racy. There is no conventional way to
define a transaction across the required steps and nothing can guarantee
that the process would actually be operating on its own sub-hierarchy.
cgroup controllers implemented a number of knobs which would never be
accepted as public APIs because they were just adding control knobs to
system-management pseudo filesystem. cgroup ended up with interface
knobs which were not properly abstracted or refined and directly
revealed kernel internal details. These knobs got exposed to
individual applications through the ill-defined delegation mechanism
effectively abusing cgroup as a shortcut to implementing public APIs
without going through the required scrutiny.
This was painful for both userland and the kernel. Userland ended up
with misbehaving and poorly abstracted interfaces, and the kernel ended
up inadvertently exposing and getting locked into constructs.
Competition Between Inner Nodes and Threads
-------------------------------------------
cgroup v1 allowed threads to be in any cgroups which created an
interesting problem where threads belonging to a parent cgroup and its
children cgroups competed for resources. This was nasty as two
different types of entities competed and there was no obvious way to
settle it. Different controllers did different things.
The cpu controller considered threads and cgroups as equivalents and
mapped nice levels to cgroup weights. This worked for some cases but
fell flat when children wanted to be allocated specific ratios of CPU
cycles and the number of internal threads fluctuated - the ratios
constantly changed as the number of competing entities fluctuated.
There also were other issues. The mapping from nice level to weight
wasn't obvious or universal, and there were various other knobs which
simply weren't available for threads.
The io controller implicitly created a hidden leaf node for each
cgroup to host the threads. The hidden leaf had its own copies of all
the knobs with ``leaf_`` prefixed. While this allowed equivalent
control over internal threads, it came with serious drawbacks. It
always added an extra layer of nesting which wouldn't be necessary
otherwise, made the interface messy and significantly complicated the
implementation.
The memory controller didn't have a way to control what happened
between internal tasks and child cgroups and the behavior was not
clearly defined. There were attempts to add ad-hoc behaviors and
knobs to tailor the behavior to specific workloads which would have
led to problems extremely difficult to resolve in the long term.
Multiple controllers struggled with internal tasks and came up with
different ways to deal with it; unfortunately, all the approaches were
severely flawed and, furthermore, the widely different behaviors
made cgroup as a whole highly inconsistent.
This clearly is a problem which needs to be addressed from cgroup core
in a uniform way.
Other Interface Issues
----------------------
cgroup v1 grew without oversight and developed a large number of
idiosyncrasies and inconsistencies. One issue on the cgroup core side
was how an empty cgroup was notified - a userland helper binary was
forked and executed for each event. The event delivery wasn't
recursive or delegatable. The limitations of the mechanism also led
to in-kernel event delivery filtering mechanism further complicating
the interface.
Controller interfaces were problematic too. An extreme example is
controllers completely ignoring hierarchical organization and treating
all cgroups as if they were all located directly under the root
cgroup. Some controllers exposed a large amount of inconsistent
implementation details to userland.
There also was no consistency across controllers. When a new cgroup
was created, some controllers defaulted to not imposing extra
restrictions while others disallowed any resource usage until
explicitly configured. Configuration knobs for the same type of
control used widely differing naming schemes and formats. Statistics
and information knobs were named arbitrarily and used different
formats and units even in the same controller.
cgroup v2 establishes common conventions where appropriate and updates
controllers so that they expose minimal and consistent interfaces.
Controller Issues and Remedies
------------------------------
Memory
~~~~~~
The original lower boundary, the soft limit, is defined as a limit
that is per default unset. As a result, the set of cgroups that
global reclaim prefers is opt-in, rather than opt-out. The costs for
optimizing these mostly negative lookups are so high that the
implementation, despite its enormous size, does not even provide the
basic desirable behavior. First off, the soft limit has no
hierarchical meaning. All configured groups are organized in a global
rbtree and treated like equal peers, regardless of where they are
located in the hierarchy. This makes subtree delegation impossible. Second,
the soft limit reclaim pass is so aggressive that it not just
introduces high allocation latencies into the system, but also impacts
system performance due to overreclaim, to the point where the feature
becomes self-defeating.
The memory.low boundary on the other hand is a top-down allocated
reserve. A cgroup enjoys reclaim protection when it's within its
effective low, which makes delegation of subtrees possible. It also
enjoys having reclaim pressure proportional to its overage when
above its effective low.
The original high boundary, the hard limit, is defined as a strict
limit that can not budge, even if the OOM killer has to be called.
But this generally goes against the goal of making the most out of the
available memory. The memory consumption of workloads varies during
runtime, and that requires users to overcommit. But doing that with a
strict upper limit requires either a fairly accurate prediction of the
working set size or adding slack to the limit. Since working set size
estimation is hard and error prone, and getting it wrong results in
OOM kills, most users tend to err on the side of a looser limit and
end up wasting precious resources.
The memory.high boundary on the other hand can be set much more
conservatively. When hit, it throttles allocations by forcing them
into direct reclaim to work off the excess, but it never invokes the
OOM killer. As a result, a high boundary that is chosen too
aggressively will not terminate the processes, but instead it will
lead to gradual performance degradation. The user can monitor this
and make corrections until the minimal memory footprint that still
gives acceptable performance is found.
In extreme cases, with many concurrent allocations and a complete
breakdown of reclaim progress within the group, the high boundary can
be exceeded. But even then it's mostly better to satisfy the
allocation from the slack available in other groups or the rest of the
system than killing the group. Otherwise, memory.max is there to
limit this type of spillover and ultimately contain buggy or even
malicious applications.
Setting the original memory.limit_in_bytes below the current usage was
subject to a race condition, where concurrent charges could cause the
limit setting to fail. memory.max on the other hand will first set the
limit to prevent new charges, and then reclaim and OOM kill until the
new limit is met - or the task writing to memory.max is killed.
The combined memory+swap accounting and limiting is replaced by real
control over swap space.
The main argument for a combined memory+swap facility in the original
cgroup design was that global or parental pressure would always be
able to swap all anonymous memory of a child group, regardless of the
child's own (possibly untrusted) configuration. However, untrusted
groups can sabotage swapping by other means - such as referencing their
anonymous memory in a tight loop - and an admin cannot assume full
swappability when overcommitting untrusted jobs.
For trusted jobs, on the other hand, a combined counter is not an
intuitive userspace interface, and it flies in the face of the idea
that cgroup controllers should account and limit specific physical
resources. Swap space is a resource like all others in the system,
and that's why unified hierarchy allows distributing it separately. | linux | cgroup v2 Control Group v2 Date October 2015 Author Tejun Heo tj kernel org This is the authoritative documentation on the design interface and conventions of cgroup v2 It describes all userland visible aspects of cgroup including core and specific controller behaviors All future changes must be reflected in this document Documentation for v1 is available under ref Documentation admin guide cgroup v1 index rst cgroup v1 CONTENTS 1 Introduction 1 1 Terminology 1 2 What is cgroup 2 Basic Operations 2 1 Mounting 2 2 Organizing Processes and Threads 2 2 1 Processes 2 2 2 Threads 2 3 Un populated Notification 2 4 Controlling Controllers 2 4 1 Enabling and Disabling 2 4 2 Top down Constraint 2 4 3 No Internal Process Constraint 2 5 Delegation 2 5 1 Model of Delegation 2 5 2 Delegation Containment 2 6 Guidelines 2 6 1 Organize Once and Control 2 6 2 Avoid Name Collisions 3 Resource Distribution Models 3 1 Weights 3 2 Limits 3 3 Protections 3 4 Allocations 4 Interface Files 4 1 Format 4 2 Conventions 4 3 Core Interface Files 5 Controllers 5 1 CPU 5 1 1 CPU Interface Files 5 2 Memory 5 2 1 Memory Interface Files 5 2 2 Usage Guidelines 5 2 3 Memory Ownership 5 3 IO 5 3 1 IO Interface Files 5 3 2 Writeback 5 3 3 IO Latency 5 3 3 1 How IO Latency Throttling Works 5 3 3 2 IO Latency Interface Files 5 3 4 IO Priority 5 4 PID 5 4 1 PID Interface Files 5 5 Cpuset 5 5 1 Cpuset Interface Files 5 6 Device 5 7 RDMA 5 7 1 RDMA Interface Files 5 8 HugeTLB 5 8 1 HugeTLB Interface Files 5 9 Misc 5 9 1 Miscellaneous cgroup Interface Files 5 9 2 Migration and Ownership 5 10 Others 5 10 1 perf event 5 N Non normative information 5 N 1 CPU controller root cgroup process behaviour 5 N 2 IO controller root cgroup process behaviour 6 Namespace 6 1 Basics 6 2 The Root and Views 6 3 Migration and setns 2 6 4 Interaction with Other Namespaces P Information on Kernel Programming P 1 Filesystem Support for Writeback D Deprecated v1 Core Features R Issues with v1 and Rationales for v2 R 1 Multiple Hierarchies R 2 Thread Granularity R 3 Competition Between Inner Nodes and Threads R 4 Other Interface Issues R 5 Controller Issues and Remedies R 5 1 Memory Introduction Terminology cgroup stands for control group and is never capitalized The singular form is used to designate the whole feature and also as a qualifier as in cgroup controllers When explicitly referring to multiple individual control groups the plural form cgroups is used What is cgroup cgroup is a mechanism to organize processes hierarchically and distribute system resources along the hierarchy in a controlled and configurable manner cgroup is largely composed of two parts the core and controllers cgroup core is primarily responsible for hierarchically organizing processes A cgroup controller is usually responsible for distributing a specific type of system resource along the hierarchy although there are utility controllers which serve purposes other than resource distribution cgroups form a tree structure and every process in the system belongs to one and only one cgroup All threads of a process belong to the same cgroup On creation all processes are put in the cgroup that the parent process belongs to at the time A process can be migrated to another cgroup Migration of a process doesn t affect already existing descendant processes Following certain structural constraints controllers may be enabled or disabled selectively on a cgroup All controller behaviors are hierarchical if a 
controller is enabled on a cgroup it affects all processes which belong to the cgroups consisting the inclusive sub hierarchy of the cgroup When a controller is enabled on a nested cgroup it always restricts the resource distribution further The restrictions set closer to the root in the hierarchy can not be overridden from further away Basic Operations Mounting Unlike v1 cgroup v2 has only single hierarchy The cgroup v2 hierarchy can be mounted with the following mount command mount t cgroup2 none MOUNT POINT cgroup2 filesystem has the magic number 0x63677270 cgrp All controllers which support v2 and are not bound to a v1 hierarchy are automatically bound to the v2 hierarchy and show up at the root Controllers which are not in active use in the v2 hierarchy can be bound to other hierarchies This allows mixing v2 hierarchy with the legacy v1 multiple hierarchies in a fully backward compatible way A controller can be moved across hierarchies only after the controller is no longer referenced in its current hierarchy Because per cgroup controller states are destroyed asynchronously and controllers may have lingering references a controller may not show up immediately on the v2 hierarchy after the final umount of the previous hierarchy Similarly a controller should be fully disabled to be moved out of the unified hierarchy and it may take some time for the disabled controller to become available for other hierarchies furthermore due to inter controller dependencies other controllers may need to be disabled too While useful for development and manual configurations moving controllers dynamically between the v2 and other hierarchies is strongly discouraged for production use It is recommended to decide the hierarchies and controller associations before starting using the controllers after system boot During transition to v2 system management software might still automount the v1 cgroup filesystem and so hijack all controllers during boot before manual intervention is possible To make testing and experimenting easier the kernel parameter cgroup no v1 allows disabling controllers in v1 and make them always available in v2 cgroup v2 currently supports the following mount options nsdelegate Consider cgroup namespaces as delegation boundaries This option is system wide and can only be set on mount or modified through remount from the init namespace The mount option is ignored on non init namespace mounts Please refer to the Delegation section for details favordynmods Reduce the latencies of dynamic cgroup modifications such as task migrations and controller on offs at the cost of making hot path operations such as forks and exits more expensive The static usage pattern of creating a cgroup enabling controllers and then seeding it with CLONE INTO CGROUP is not affected by this option memory localevents Only populate memory events with data for the current cgroup and not any subtrees This is legacy behaviour the default behaviour without this option is to include subtree counts This option is system wide and can only be set on mount or modified through remount from the init namespace The mount option is ignored on non init namespace mounts memory recursiveprot Recursively apply memory min and memory low protection to entire subtrees without requiring explicit downward propagation into leaf cgroups This allows protecting entire subtrees from one another while retaining free competition within those subtrees This should have been the default behavior but is a mount option to avoid regressing setups 
relying on the original semantics e g specifying bogusly high bypass protection values at higher tree levels memory hugetlb accounting Count HugeTLB memory usage towards the cgroup s overall memory usage for the memory controller for the purpose of statistics reporting and memory protetion This is a new behavior that could regress existing setups so it must be explicitly opted in with this mount option A few caveats to keep in mind There is no HugeTLB pool management involved in the memory controller The pre allocated pool does not belong to anyone Specifically when a new HugeTLB folio is allocated to the pool it is not accounted for from the perspective of the memory controller It is only charged to a cgroup when it is actually used for e g at page fault time Host memory overcommit management has to consider this when configuring hard limits In general HugeTLB pool management should be done via other mechanisms such as the HugeTLB controller Failure to charge a HugeTLB folio to the memory controller results in SIGBUS This could happen even if the HugeTLB pool still has pages available but the cgroup limit is hit and reclaim attempt fails Charging HugeTLB memory towards the memory controller affects memory protection and reclaim dynamics Any userspace tuning of low min limits for e g needs to take this into account HugeTLB pages utilized while this option is not selected will not be tracked by the memory controller even if cgroup v2 is remounted later on pids localevents The option restores v1 like behavior of pids events max that is only local inside cgroup proper fork failures are counted Without this option pids events max represents any pids max enforcemnt across cgroup s subtree Organizing Processes and Threads Processes Initially only the root cgroup exists to which all processes belong A child cgroup can be created by creating a sub directory mkdir CGROUP NAME A given cgroup may have multiple child cgroups forming a tree structure Each cgroup has a read writable interface file cgroup procs When read it lists the PIDs of all processes which belong to the cgroup one per line The PIDs are not ordered and the same PID may show up more than once if the process got moved to another cgroup and then back or the PID got recycled while reading A process can be migrated into a cgroup by writing its PID to the target cgroup s cgroup procs file Only one process can be migrated on a single write 2 call If a process is composed of multiple threads writing the PID of any thread migrates all threads of the process When a process forks a child process the new process is born into the cgroup that the forking process belongs to at the time of the operation After exit a process stays associated with the cgroup that it belonged to at the time of exit until it s reaped however a zombie process does not appear in cgroup procs and thus can t be moved to another cgroup A cgroup which doesn t have any children or live processes can be destroyed by removing the directory Note that a cgroup which doesn t have any children and is associated only with zombie processes is considered empty and can be removed rmdir CGROUP NAME proc PID cgroup lists a process s cgroup membership If legacy cgroup is in use in the system this file may contain multiple lines one for each hierarchy The entry for cgroup v2 is always in the format 0 PATH cat proc 842 cgroup 0 test cgroup test cgroup nested If the process becomes a zombie and the cgroup it was associated with is removed subsequently deleted is appended to the path cat proc 
842 cgroup 0 test cgroup test cgroup nested deleted Threads cgroup v2 supports thread granularity for a subset of controllers to support use cases requiring hierarchical resource distribution across the threads of a group of processes By default all threads of a process belong to the same cgroup which also serves as the resource domain to host resource consumptions which are not specific to a process or thread The thread mode allows threads to be spread across a subtree while still maintaining the common resource domain for them Controllers which support thread mode are called threaded controllers The ones which don t are called domain controllers Marking a cgroup threaded makes it join the resource domain of its parent as a threaded cgroup The parent may be another threaded cgroup whose resource domain is further up in the hierarchy The root of a threaded subtree that is the nearest ancestor which is not threaded is called threaded domain or thread root interchangeably and serves as the resource domain for the entire subtree Inside a threaded subtree threads of a process can be put in different cgroups and are not subject to the no internal process constraint threaded controllers can be enabled on non leaf cgroups whether they have threads in them or not As the threaded domain cgroup hosts all the domain resource consumptions of the subtree it is considered to have internal resource consumptions whether there are processes in it or not and can t have populated child cgroups which aren t threaded Because the root cgroup is not subject to no internal process constraint it can serve both as a threaded domain and a parent to domain cgroups The current operation mode or type of the cgroup is shown in the cgroup type file which indicates whether the cgroup is a normal domain a domain which is serving as the domain of a threaded subtree or a threaded cgroup On creation a cgroup is always a domain cgroup and can be made threaded by writing threaded to the cgroup type file The operation is single direction echo threaded cgroup type Once threaded the cgroup can t be made a domain again To enable the thread mode the following conditions must be met As the cgroup will join the parent s resource domain The parent must either be a valid threaded domain or a threaded cgroup When the parent is an unthreaded domain it must not have any domain controllers enabled or populated domain children The root is exempt from this requirement Topology wise a cgroup can be in an invalid state Please consider the following topology A threaded domain B threaded C domain just created C is created as a domain but isn t connected to a parent which can host child domains C can t be used until it is turned into a threaded cgroup cgroup type file will report domain invalid in these cases Operations which fail due to invalid topology use EOPNOTSUPP as the errno A domain cgroup is turned into a threaded domain when one of its child cgroup becomes threaded or threaded controllers are enabled in the cgroup subtree control file while there are processes in the cgroup A threaded domain reverts to a normal domain when the conditions clear When read cgroup threads contains the list of the thread IDs of all threads in the cgroup Except that the operations are per thread instead of per process cgroup threads has the same format and behaves the same way as cgroup procs While cgroup threads can be written to in any cgroup as it can only move threads inside the same threaded domain its operations are confined inside each threaded subtree 
The threaded domain cgroup serves as the resource domain for the whole subtree and while the threads can be scattered across the subtree all the processes are considered to be in the threaded domain cgroup cgroup procs in a threaded domain cgroup contains the PIDs of all processes in the subtree and is not readable in the subtree proper However cgroup procs can be written to from anywhere in the subtree to migrate all threads of the matching process to the cgroup Only threaded controllers can be enabled in a threaded subtree When a threaded controller is enabled inside a threaded subtree it only accounts for and controls resource consumptions associated with the threads in the cgroup and its descendants All consumptions which aren t tied to a specific thread belong to the threaded domain cgroup Because a threaded subtree is exempt from no internal process constraint a threaded controller must be able to handle competition between threads in a non leaf cgroup and its child cgroups Each threaded controller defines how such competitions are handled Currently the following controllers are threaded and can be enabled in a threaded cgroup cpu cpuset perf event pids Un populated Notification Each non root cgroup has a cgroup events file which contains populated field indicating whether the cgroup s sub hierarchy has live processes in it Its value is 0 if there is no live process in the cgroup and its descendants otherwise 1 poll and id notify events are triggered when the value changes This can be used for example to start a clean up operation after all processes of a given sub hierarchy have exited The populated state updates and notifications are recursive Consider the following sub hierarchy where the numbers in the parentheses represent the numbers of processes in each cgroup A 4 B 0 C 1 D 0 A B and C s populated fields would be 1 while D s 0 After the one process in C exits B and C s populated fields would flip to 0 and file modified events will be generated on the cgroup events files of both cgroups Controlling Controllers Enabling and Disabling Each cgroup has a cgroup controllers file which lists all controllers available for the cgroup to enable cat cgroup controllers cpu io memory No controller is enabled by default Controllers can be enabled and disabled by writing to the cgroup subtree control file echo cpu memory io cgroup subtree control Only controllers which are listed in cgroup controllers can be enabled When multiple operations are specified as above either they all succeed or fail If multiple operations on the same controller are specified the last one is effective Enabling a controller in a cgroup indicates that the distribution of the target resource across its immediate children will be controlled Consider the following sub hierarchy The enabled controllers are listed in parentheses A cpu memory B memory C D As A has cpu and memory enabled A will control the distribution of CPU cycles and memory to its children in this case B As B has memory enabled but not CPU C and D will compete freely on CPU cycles but their division of memory available to B will be controlled As a controller regulates the distribution of the target resource to the cgroup s children enabling it creates the controller s interface files in the child cgroups In the above example enabling cpu on B would create the cpu prefixed controller interface files in C and D Likewise disabling memory from B would remove the memory prefixed controller interface files from C and D This means that the controller interface 
files anything which doesn t start with cgroup are owned by the parent rather than the cgroup itself Top down Constraint Resources are distributed top down and a cgroup can further distribute a resource only if the resource has been distributed to it from the parent This means that all non root cgroup subtree control files can only contain controllers which are enabled in the parent s cgroup subtree control file A controller can be enabled only if the parent has the controller enabled and a controller can t be disabled if one or more children have it enabled No Internal Process Constraint Non root cgroups can distribute domain resources to their children only when they don t have any processes of their own In other words only domain cgroups which don t contain any processes can have domain controllers enabled in their cgroup subtree control files This guarantees that when a domain controller is looking at the part of the hierarchy which has it enabled processes are always only on the leaves This rules out situations where child cgroups compete against internal processes of the parent The root cgroup is exempt from this restriction Root contains processes and anonymous resource consumption which can t be associated with any other cgroups and requires special treatment from most controllers How resource consumption in the root cgroup is governed is up to each controller for more information on this topic please refer to the Non normative information section in the Controllers chapter Note that the restriction doesn t get in the way if there is no enabled controller in the cgroup s cgroup subtree control This is important as otherwise it wouldn t be possible to create children of a populated cgroup To control resource distribution of a cgroup the cgroup must create children and transfer all its processes to the children before enabling controllers in its cgroup subtree control file Delegation Model of Delegation A cgroup can be delegated in two ways First to a less privileged user by granting write access of the directory and its cgroup procs cgroup threads and cgroup subtree control files to the user Second if the nsdelegate mount option is set automatically to a cgroup namespace on namespace creation Because the resource control interface files in a given directory control the distribution of the parent s resources the delegatee shouldn t be allowed to write to them For the first method this is achieved by not granting access to these files For the second files outside the namespace should be hidden from the delegatee by the means of at least mount namespacing and the kernel rejects writes to all files on a namespace root from inside the cgroup namespace except for those files listed in sys kernel cgroup delegate including cgroup procs cgroup threads cgroup subtree control etc The end results are equivalent for both delegation types Once delegated the user can build sub hierarchy under the directory organize processes inside it as it sees fit and further distribute the resources it received from the parent The limits and other settings of all resource controllers are hierarchical and regardless of what happens in the delegated sub hierarchy nothing can escape the resource restrictions imposed by the parent Currently cgroup doesn t impose any restrictions on the number of cgroups in or nesting depth of a delegated sub hierarchy however this may be limited explicitly in the future Delegation Containment A delegated sub hierarchy is contained in the sense that processes can t be moved into or 
out of the sub hierarchy by the delegatee For delegations to a less privileged user this is achieved by requiring the following conditions for a process with a non root euid to migrate a target process into a cgroup by writing its PID to the cgroup procs file The writer must have write access to the cgroup procs file The writer must have write access to the cgroup procs file of the common ancestor of the source and destination cgroups The above two constraints ensure that while a delegatee may migrate processes around freely in the delegated sub hierarchy it can t pull in from or push out to outside the sub hierarchy For an example let s assume cgroups C0 and C1 have been delegated to user U0 who created C00 C01 under C0 and C10 under C1 as follows and all processes under C0 and C1 belong to U0 C0 C00 cgroup C01 hierarchy C1 C10 Let s also say U0 wants to write the PID of a process which is currently in C10 into C00 cgroup procs U0 has write access to the file however the common ancestor of the source cgroup C10 and the destination cgroup C00 is above the points of delegation and U0 would not have write access to its cgroup procs files and thus the write will be denied with EACCES For delegations to namespaces containment is achieved by requiring that both the source and destination cgroups are reachable from the namespace of the process which is attempting the migration If either is not reachable the migration is rejected with ENOENT Guidelines Organize Once and Control Migrating a process across cgroups is a relatively expensive operation and stateful resources such as memory are not moved together with the process This is an explicit design decision as there often exist inherent trade offs between migration and various hot paths in terms of synchronization cost As such migrating processes across cgroups frequently as a means to apply different resource restrictions is discouraged A workload should be assigned to a cgroup according to the system s logical and resource structure once on start up Dynamic adjustments to resource distribution can be made by changing controller configuration through the interface files Avoid Name Collisions Interface files for a cgroup and its children cgroups occupy the same directory and it is possible to create children cgroups which collide with interface files All cgroup core interface files are prefixed with cgroup and each controller s interface files are prefixed with the controller name and a dot A controller s name is composed of lower case alphabets and s but never begins with an so it can be used as the prefix character for collision avoidance Also interface file names won t start or end with terms which are often used in categorizing workloads such as job service slice unit or workload cgroup doesn t do anything to prevent name collisions and it s the user s responsibility to avoid them Resource Distribution Models cgroup controllers implement several resource distribution schemes depending on the resource type and expected use cases This section describes major schemes in use along with their expected behaviors Weights A parent s resource is distributed by adding up the weights of all active children and giving each the fraction matching the ratio of its weight against the sum As only children which can make use of the resource at the moment participate in the distribution this is work conserving Due to the dynamic nature this model is usually used for stateless resources All weights are in the range 1 10000 with the default at 100 This allows 
symmetric multiplicative biases in both directions at fine enough granularity while staying in the intuitive range As long as the weight is in range all configuration combinations are valid and there is no reason to reject configuration changes or process migrations cpu weight proportionally distributes CPU cycles to active children and is an example of this type cgroupv2 limits distributor Limits A child can only consume up to the configured amount of the resource Limits can be over committed the sum of the limits of children can exceed the amount of resource available to the parent Limits are in the range 0 max and defaults to max which is noop As limits can be over committed all configuration combinations are valid and there is no reason to reject configuration changes or process migrations io max limits the maximum BPS and or IOPS that a cgroup can consume on an IO device and is an example of this type cgroupv2 protections distributor Protections A cgroup is protected up to the configured amount of the resource as long as the usages of all its ancestors are under their protected levels Protections can be hard guarantees or best effort soft boundaries Protections can also be over committed in which case only up to the amount available to the parent is protected among children Protections are in the range 0 max and defaults to 0 which is noop As protections can be over committed all configuration combinations are valid and there is no reason to reject configuration changes or process migrations memory low implements best effort memory protection and is an example of this type Allocations A cgroup is exclusively allocated a certain amount of a finite resource Allocations can t be over committed the sum of the allocations of children can not exceed the amount of resource available to the parent Allocations are in the range 0 max and defaults to 0 which is no resource As allocations can t be over committed some configuration combinations are invalid and should be rejected Also if the resource is mandatory for execution of processes process migrations may be rejected cpu rt max hard allocates realtime slices and is an example of this type Interface Files Format All interface files should be in one of the following formats whenever possible New line separated values when only one value can be written at once VAL0 n VAL1 n Space separated values when read only or multiple values can be written at once VAL0 VAL1 n Flat keyed KEY0 VAL0 n KEY1 VAL1 n Nested keyed KEY0 SUB KEY0 VAL00 SUB KEY1 VAL01 KEY1 SUB KEY0 VAL10 SUB KEY1 VAL11 For a writable file the format for writing should generally match reading however controllers may allow omitting later fields or implement restricted shortcuts for most common use cases For both flat and nested keyed files only the values for a single key can be written at a time For nested keyed files the sub key pairs may be specified in any order and not all pairs have to be specified Conventions Settings for a single feature should be contained in a single file The root cgroup should be exempt from resource control and thus shouldn t have resource control interface files The default time unit is microseconds If a different unit is ever used an explicit unit suffix must be present A parts per quantity should use a percentage decimal with at least two digit fractional part e g 13 40 If a controller implements weight based resource distribution its interface file should be named weight and have the range 1 10000 with 100 as the default The values are chosen to allow 
enough and symmetric bias in both directions while keeping it intuitive the default is 100 If a controller implements an absolute resource guarantee and or limit the interface files should be named min and max respectively If a controller implements best effort resource guarantee and or limit the interface files should be named low and high respectively In the above four control files the special token max should be used to represent upward infinity for both reading and writing If a setting has a configurable default value and keyed specific overrides the default entry should be keyed with default and appear as the first entry in the file The default value can be updated by writing either default VAL or VAL When writing to update a specific override default can be used as the value to indicate removal of the override Override entries with default as the value must not appear when read For example a setting which is keyed by major minor device numbers with integer values may look like the following cat cgroup example interface file default 150 8 0 300 The default value can be updated by echo 125 cgroup example interface file or echo default 125 cgroup example interface file An override can be set by echo 8 16 170 cgroup example interface file and cleared by echo 8 0 default cgroup example interface file cat cgroup example interface file default 125 8 16 170 For events which are not very high frequency an interface file events should be created which lists event key value pairs Whenever a notifiable event happens file modified event should be generated on the file Core Interface Files All cgroup core files are prefixed with cgroup cgroup type A read write single value file which exists on non root cgroups When read it indicates the current type of the cgroup which can be one of the following values domain A normal valid domain cgroup domain threaded A threaded domain cgroup which is serving as the root of a threaded subtree domain invalid A cgroup which is in an invalid state It can t be populated or have controllers enabled It may be allowed to become a threaded cgroup threaded A threaded cgroup which is a member of a threaded subtree A cgroup can be turned into a threaded cgroup by writing threaded to this file cgroup procs A read write new line separated values file which exists on all cgroups When read it lists the PIDs of all processes which belong to the cgroup one per line The PIDs are not ordered and the same PID may show up more than once if the process got moved to another cgroup and then back or the PID got recycled while reading A PID can be written to migrate the process associated with the PID to the cgroup The writer should match all of the following conditions It must have write access to the cgroup procs file It must have write access to the cgroup procs file of the common ancestor of the source and destination cgroups When delegating a sub hierarchy write access to this file should be granted along with the containing directory In a threaded cgroup reading this file fails with EOPNOTSUPP as all the processes belong to the thread root Writing is supported and moves every thread of the process to the cgroup cgroup threads A read write new line separated values file which exists on all cgroups When read it lists the TIDs of all threads which belong to the cgroup one per line The TIDs are not ordered and the same TID may show up more than once if the thread got moved to another cgroup and then back or the TID got recycled while reading A TID can be written to migrate the thread 
associated with the TID to the cgroup The writer should match all of the following conditions It must have write access to the cgroup threads file The cgroup that the thread is currently in must be in the same resource domain as the destination cgroup It must have write access to the cgroup procs file of the common ancestor of the source and destination cgroups When delegating a sub hierarchy write access to this file should be granted along with the containing directory cgroup controllers A read only space separated values file which exists on all cgroups It shows space separated list of all controllers available to the cgroup The controllers are not ordered cgroup subtree control A read write space separated values file which exists on all cgroups Starts out empty When read it shows space separated list of the controllers which are enabled to control resource distribution from the cgroup to its children Space separated list of controllers prefixed with or can be written to enable or disable controllers A controller name prefixed with enables the controller and disables If a controller appears more than once on the list the last one is effective When multiple enable and disable operations are specified either all succeed or all fail cgroup events A read only flat keyed file which exists on non root cgroups The following entries are defined Unless specified otherwise a value change in this file generates a file modified event populated 1 if the cgroup or its descendants contains any live processes otherwise 0 frozen 1 if the cgroup is frozen otherwise 0 cgroup max descendants A read write single value files The default is max Maximum allowed number of descent cgroups If the actual number of descendants is equal or larger an attempt to create a new cgroup in the hierarchy will fail cgroup max depth A read write single value files The default is max Maximum allowed descent depth below the current cgroup If the actual descent depth is equal or larger an attempt to create a new child cgroup will fail cgroup stat A read only flat keyed file with the following entries nr descendants Total number of visible descendant cgroups nr dying descendants Total number of dying descendant cgroups A cgroup becomes dying after being deleted by a user The cgroup will remain in dying state for some time undefined time which can depend on system load before being completely destroyed A process can t enter a dying cgroup under any circumstances a dying cgroup can t revive A dying cgroup can consume system resources not exceeding limits which were active at the moment of cgroup deletion nr subsys cgroup subsys Total number of live cgroup subsystems e g memory cgroup at and beneath the current cgroup nr dying subsys cgroup subsys Total number of dying cgroup subsystems e g memory cgroup at and beneath the current cgroup cgroup freeze A read write single value file which exists on non root cgroups Allowed values are 0 and 1 The default is 0 Writing 1 to the file causes freezing of the cgroup and all descendant cgroups This means that all belonging processes will be stopped and will not run until the cgroup will be explicitly unfrozen Freezing of the cgroup may take some time when this action is completed the frozen value in the cgroup events control file will be updated to 1 and the corresponding notification will be issued A cgroup can be frozen either by its own settings or by settings of any ancestor cgroups If any of ancestor cgroups is frozen the cgroup will remain frozen Processes in the frozen cgroup can be 
killed by a fatal signal They also can enter and leave a frozen cgroup either by an explicit move by a user or if freezing of the cgroup races with fork If a process is moved to a frozen cgroup it stops If a process is moved out of a frozen cgroup it becomes running Frozen status of a cgroup doesn t affect any cgroup tree operations it s possible to delete a frozen and empty cgroup as well as create new sub cgroups cgroup kill A write only single value file which exists in non root cgroups The only allowed value is 1 Writing 1 to the file causes the cgroup and all descendant cgroups to be killed This means that all processes located in the affected cgroup tree will be killed via SIGKILL Killing a cgroup tree will deal with concurrent forks appropriately and is protected against migrations In a threaded cgroup writing this file fails with EOPNOTSUPP as killing cgroups is a process directed operation i e it affects the whole thread group cgroup pressure A read write single value file that allowed values are 0 and 1 The default is 1 Writing 0 to the file will disable the cgroup PSI accounting Writing 1 to the file will re enable the cgroup PSI accounting This control attribute is not hierarchical so disable or enable PSI accounting in a cgroup does not affect PSI accounting in descendants and doesn t need pass enablement via ancestors from root The reason this control attribute exists is that PSI accounts stalls for each cgroup separately and aggregates it at each level of the hierarchy This may cause non negligible overhead for some workloads when under deep level of the hierarchy in which case this control attribute can be used to disable PSI accounting in the non leaf cgroups irq pressure A read write nested keyed file Shows pressure stall information for IRQ SOFTIRQ See ref Documentation accounting psi rst psi for details Controllers cgroup v2 cpu CPU The cpu controllers regulates distribution of CPU cycles This controller implements weight and absolute bandwidth limit models for normal scheduling policy and absolute bandwidth allocation model for realtime scheduling policy In all the above models cycles distribution is defined only on a temporal base and it does not account for the frequency at which tasks are executed The optional utilization clamping support allows to hint the schedutil cpufreq governor about the minimum desired frequency which should always be provided by a CPU as well as the maximum desired frequency which should not be exceeded by a CPU WARNING cgroup2 doesn t yet support control of realtime processes For a kernel built with the CONFIG RT GROUP SCHED option enabled for group scheduling of realtime processes the cpu controller can only be enabled when all RT processes are in the root cgroup This limitation does not apply if CONFIG RT GROUP SCHED is disabled Be aware that system management software may already have placed RT processes into nonroot cgroups during the system boot process and these processes may need to be moved to the root cgroup before the cpu controller can be enabled with a CONFIG RT GROUP SCHED enabled kernel CPU Interface Files All time durations are in microseconds cpu stat A read only flat keyed file This file exists whether the controller is enabled or not It always reports the following three stats usage usec user usec system usec and the following five when the controller is enabled nr periods nr throttled throttled usec nr bursts burst usec cpu weight A read write single value file which exists on non root cgroups The default is 100 For non 
idle groups cpu idle 0 the weight is in the range 1 10000 If the cgroup has been configured to be SCHED IDLE cpu idle 1 then the weight will show as a 0 cpu weight nice A read write single value file which exists on non root cgroups The default is 0 The nice value is in the range 20 19 This interface file is an alternative interface for cpu weight and allows reading and setting weight using the same values used by nice 2 Because the range is smaller and granularity is coarser for the nice values the read value is the closest approximation of the current weight cpu max A read write two value file which exists on non root cgroups The default is max 100000 The maximum bandwidth limit It s in the following format MAX PERIOD which indicates that the group may consume up to MAX in each PERIOD duration max for MAX indicates no limit If only one number is written MAX is updated cpu max burst A read write single value file which exists on non root cgroups The default is 0 The burst in the range 0 MAX cpu pressure A read write nested keyed file Shows pressure stall information for CPU See ref Documentation accounting psi rst psi for details cpu uclamp min A read write single value file which exists on non root cgroups The default is 0 i e no utilization boosting The requested minimum utilization protection as a percentage rational number e g 12 34 for 12 34 This interface allows reading and setting minimum utilization clamp values similar to the sched setattr 2 This minimum utilization value is used to clamp the task specific minimum utilization clamp The requested minimum utilization protection is always capped by the current value for the maximum utilization limit i e cpu uclamp max cpu uclamp max A read write single value file which exists on non root cgroups The default is max i e no utilization capping The requested maximum utilization limit as a percentage rational number e g 98 76 for 98 76 This interface allows reading and setting maximum utilization clamp values similar to the sched setattr 2 This maximum utilization value is used to clamp the task specific maximum utilization clamp cpu idle A read write single value file which exists on non root cgroups The default is 0 This is the cgroup analog of the per task SCHED IDLE sched policy Setting this value to a 1 will make the scheduling policy of the cgroup SCHED IDLE The threads inside the cgroup will retain their own relative priorities but the cgroup itself will be treated as very low priority relative to its peers Memory The memory controller regulates distribution of memory Memory is stateful and implements both limit and protection models Due to the intertwining between memory usage and reclaim pressure and the stateful nature of memory the distribution model is relatively complex While not completely water tight all major memory usages by a given cgroup are tracked so that the total memory consumption can be accounted and controlled to a reasonable extent Currently the following types of memory usages are tracked Userland memory page cache and anonymous memory Kernel data structures such as dentries and inodes TCP socket buffers The above list may expand in the future for better coverage Memory Interface Files All memory amounts are in bytes If a value which is not aligned to PAGE SIZE is written the value may be rounded up to the closest PAGE SIZE multiple when read back memory current A read only single value file which exists on non root cgroups The total amount of memory currently being used by the cgroup and its descendants 
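To make the formats above concrete, here is a short shell sketch that enables the cpu and memory controllers for a child cgroup and applies the "$MAX $PERIOD" bandwidth syntax of cpu.max. It is an illustration, not part of the original text; the /sys/fs/cgroup mount point and the cgroup name "example" are assumptions:

```
# Enable the cpu and memory controllers for children of the root cgroup.
echo "+cpu +memory" > /sys/fs/cgroup/cgroup.subtree_control
# Create a child cgroup (the name is arbitrary).
mkdir /sys/fs/cgroup/example
# Allow at most 50ms of CPU time per 100ms period, i.e. roughly half a CPU.
echo "50000 100000" > /sys/fs/cgroup/example/cpu.max
# Give the cgroup twice the default weight relative to its siblings.
echo 200 > /sys/fs/cgroup/example/cpu.weight
# Move the current shell into the cgroup and read its memory usage.
echo $$ > /sys/fs/cgroup/example/cgroup.procs
cat /sys/fs/cgroup/example/memory.current
```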
memory min A read write single value file which exists on non root cgroups The default is 0 Hard memory protection If the memory usage of a cgroup is within its effective min boundary the cgroup s memory won t be reclaimed under any conditions If there is no unprotected reclaimable memory available OOM killer is invoked Above the effective min boundary or effective low boundary if it is higher pages are reclaimed proportionally to the overage reducing reclaim pressure for smaller overages Effective min boundary is limited by memory min values of all ancestor cgroups If there is memory min overcommitment child cgroup or cgroups are requiring more protected memory than parent will allow then each child cgroup will get the part of parent s protection proportional to its actual memory usage below memory min Putting more memory than generally available under this protection is discouraged and may lead to constant OOMs If a memory cgroup is not populated with processes its memory min is ignored memory low A read write single value file which exists on non root cgroups The default is 0 Best effort memory protection If the memory usage of a cgroup is within its effective low boundary the cgroup s memory won t be reclaimed unless there is no reclaimable memory available in unprotected cgroups Above the effective low boundary or effective min boundary if it is higher pages are reclaimed proportionally to the overage reducing reclaim pressure for smaller overages Effective low boundary is limited by memory low values of all ancestor cgroups If there is memory low overcommitment child cgroup or cgroups are requiring more protected memory than parent will allow then each child cgroup will get the part of parent s protection proportional to its actual memory usage below memory low Putting more memory than generally available under this protection is discouraged memory high A read write single value file which exists on non root cgroups The default is max Memory usage throttle limit If a cgroup s usage goes over the high boundary the processes of the cgroup are throttled and put under heavy reclaim pressure Going over the high limit never invokes the OOM killer and under extreme conditions the limit may be breached The high limit should be used in scenarios where an external process monitors the limited cgroup to alleviate heavy reclaim pressure memory max A read write single value file which exists on non root cgroups The default is max Memory usage hard limit This is the main mechanism to limit memory usage of a cgroup If a cgroup s memory usage reaches this limit and can t be reduced the OOM killer is invoked in the cgroup Under certain circumstances the usage may go over the limit temporarily In default configuration regular 0 order allocations always succeed unless OOM killer chooses current task as a victim Some kinds of allocations don t invoke the OOM killer Caller could retry them differently return into userspace as ENOMEM or silently ignore in cases like disk readahead memory reclaim A write only nested keyed file which exists for all cgroups This is a simple interface to trigger memory reclaim in the target cgroup Example echo 1G memory reclaim Please note that the kernel can over or under reclaim from the target cgroup If less bytes are reclaimed than the specified amount EAGAIN is returned Please note that the proactive reclaim triggered by this interface is not meant to indicate memory pressure on the memory cgroup Therefore socket memory balancing triggered by the memory reclaim normally 
is not exercised in this case This means that the networking layer will not adapt based on reclaim induced by memory reclaim The following nested keys are defined swappiness Swappiness value to reclaim with Specifying a swappiness value instructs the kernel to perform the reclaim with that swappiness value Note that this has the same semantics as vm swappiness applied to memcg reclaim with all the existing limitations and potential future extensions memory peak A read write single value file which exists on non root cgroups The max memory usage recorded for the cgroup and its descendants since either the creation of the cgroup or the most recent reset for that FD A write of any non empty string to this file resets it to the current memory usage for subsequent reads through the same file descriptor memory oom group A read write single value file which exists on non root cgroups The default value is 0 Determines whether the cgroup should be treated as an indivisible workload by the OOM killer If set all tasks belonging to the cgroup or to its descendants if the memory cgroup is not a leaf cgroup are killed together or not at all This can be used to avoid partial kills to guarantee workload integrity Tasks with the OOM protection oom score adj set to 1000 are treated as an exception and are never killed If the OOM killer is invoked in a cgroup it s not going to kill any tasks outside of this cgroup regardless memory oom group values of ancestor cgroups memory events A read only flat keyed file which exists on non root cgroups The following entries are defined Unless specified otherwise a value change in this file generates a file modified event Note that all fields in this file are hierarchical and the file modified event can be generated due to an event down the hierarchy For the local events at the cgroup level see memory events local low The number of times the cgroup is reclaimed due to high memory pressure even though its usage is under the low boundary This usually indicates that the low boundary is over committed high The number of times processes of the cgroup are throttled and routed to perform direct memory reclaim because the high memory boundary was exceeded For a cgroup whose memory usage is capped by the high limit rather than global memory pressure this event s occurrences are expected max The number of times the cgroup s memory usage was about to go over the max boundary If direct reclaim fails to bring it down the cgroup goes to OOM state oom The number of time the cgroup s memory usage was reached the limit and allocation was about to fail This event is not raised if the OOM killer is not considered as an option e g for failed high order allocations or if caller asked to not retry attempts oom kill The number of processes belonging to this cgroup killed by any kind of OOM killer oom group kill The number of times a group OOM has occurred memory events local Similar to memory events but the fields in the file are local to the cgroup i e not hierarchical The file modified event generated on this file reflects only the local events memory stat A read only flat keyed file which exists on non root cgroups This breaks down the cgroup s memory footprint into different types of memory type specific details and other information on the state and past events of the memory management system All memory amounts are in bytes The entries are ordered to be human readable and new entries can show up in the middle Don t rely on items remaining in a fixed position use the keys to look up 
specific values If the entry has no per node counter or not show in the memory numa stat We use npn non per node as the tag to indicate that it will not show in the memory numa stat anon Amount of memory used in anonymous mappings such as brk sbrk and mmap MAP ANONYMOUS file Amount of memory used to cache filesystem data including tmpfs and shared memory kernel npn Amount of total kernel memory including kernel stack pagetables percpu vmalloc slab in addition to other kernel memory use cases kernel stack Amount of memory allocated to kernel stacks pagetables Amount of memory allocated for page tables sec pagetables Amount of memory allocated for secondary page tables this currently includes KVM mmu allocations on x86 and arm64 and IOMMU page tables percpu npn Amount of memory used for storing per cpu kernel data structures sock npn Amount of memory used in network transmission buffers vmalloc npn Amount of memory used for vmap backed memory shmem Amount of cached filesystem data that is swap backed such as tmpfs shm segments shared anonymous mmap s zswap Amount of memory consumed by the zswap compression backend zswapped Amount of application memory swapped out to zswap file mapped Amount of cached filesystem data mapped with mmap file dirty Amount of cached filesystem data that was modified but not yet written back to disk file writeback Amount of cached filesystem data that was modified and is currently being written back to disk swapcached Amount of swap cached in memory The swapcache is accounted against both memory and swap usage anon thp Amount of memory used in anonymous mappings backed by transparent hugepages file thp Amount of cached filesystem data backed by transparent hugepages shmem thp Amount of shm tmpfs shared anonymous mmap s backed by transparent hugepages inactive anon active anon inactive file active file unevictable Amount of memory swap backed and filesystem backed on the internal memory management lists used by the page reclaim algorithm As these represent internal list state eg shmem pages are on anon memory management lists inactive foo active foo may not be equal to the value for the foo counter since the foo counter is type based not list based slab reclaimable Part of slab that might be reclaimed such as dentries and inodes slab unreclaimable Part of slab that cannot be reclaimed on memory pressure slab npn Amount of memory used for storing in kernel data structures workingset refault anon Number of refaults of previously evicted anonymous pages workingset refault file Number of refaults of previously evicted file pages workingset activate anon Number of refaulted anonymous pages that were immediately activated workingset activate file Number of refaulted file pages that were immediately activated workingset restore anon Number of restored anonymous pages which have been detected as an active workingset before they got reclaimed workingset restore file Number of restored file pages which have been detected as an active workingset before they got reclaimed workingset nodereclaim Number of times a shadow node has been reclaimed pgscan npn Amount of scanned pages in an inactive LRU list pgsteal npn Amount of reclaimed pages pgscan kswapd npn Amount of scanned pages by kswapd in an inactive LRU list pgscan direct npn Amount of scanned pages directly in an inactive LRU list pgscan khugepaged npn Amount of scanned pages by khugepaged in an inactive LRU list pgsteal kswapd npn Amount of reclaimed pages by kswapd pgsteal direct npn Amount of reclaimed pages directly 
pgsteal khugepaged npn Amount of reclaimed pages by khugepaged pgfault npn Total number of page faults incurred pgmajfault npn Number of major page faults incurred pgrefill npn Amount of scanned pages in an active LRU list pgactivate npn Amount of pages moved to the active LRU list pgdeactivate npn Amount of pages moved to the inactive LRU list pglazyfree npn Amount of pages postponed to be freed under memory pressure pglazyfreed npn Amount of reclaimed lazyfree pages swpin zero Number of pages swapped into memory and filled with zero where I O was optimized out because the page content was detected to be zero during swapout swpout zero Number of zero filled pages swapped out with I O skipped due to the content being detected as zero zswpin Number of pages moved in to memory from zswap zswpout Number of pages moved out of memory to zswap zswpwb Number of pages written from zswap to swap thp fault alloc npn Number of transparent hugepages which were allocated to satisfy a page fault This counter is not present when CONFIG TRANSPARENT HUGEPAGE is not set thp collapse alloc npn Number of transparent hugepages which were allocated to allow collapsing an existing range of pages This counter is not present when CONFIG TRANSPARENT HUGEPAGE is not set thp swpout npn Number of transparent hugepages which are swapout in one piece without splitting thp swpout fallback npn Number of transparent hugepages which were split before swapout Usually because failed to allocate some continuous swap space for the huge page numa pages migrated npn Number of pages migrated by NUMA balancing numa pte updates npn Number of pages whose page table entries are modified by NUMA balancing to produce NUMA hinting faults on access numa hint faults npn Number of NUMA hinting faults pgdemote kswapd Number of pages demoted by kswapd pgdemote direct Number of pages demoted directly pgdemote khugepaged Number of pages demoted by khugepaged hugetlb Amount of memory used by hugetlb pages This metric only shows up if hugetlb usage is accounted for in memory current i e cgroup is mounted with the memory hugetlb accounting option memory numa stat A read only nested keyed file which exists on non root cgroups This breaks down the cgroup s memory footprint into different types of memory type specific details and other information per node on the state of the memory management system This is useful for providing visibility into the NUMA locality information within an memcg since the pages are allowed to be allocated from any physical node One of the use case is evaluating application performance by combining this information with the application s CPU allocation All memory amounts are in bytes The output format of memory numa stat is type N0 bytes in node 0 N1 bytes in node 1 The entries are ordered to be human readable and new entries can show up in the middle Don t rely on items remaining in a fixed position use the keys to look up specific values The entries can refer to the memory stat memory swap current A read only single value file which exists on non root cgroups The total amount of swap currently being used by the cgroup and its descendants memory swap high A read write single value file which exists on non root cgroups The default is max Swap usage throttle limit If a cgroup s swap usage exceeds this limit all its further allocations will be throttled to allow userspace to implement custom out of memory procedures This limit marks a point of no return for the cgroup It is NOT designed to manage the amount of swapping a 
workload does during regular operation Compare to memory swap max which prohibits swapping past a set amount but lets the cgroup continue unimpeded as long as other memory can be reclaimed Healthy workloads are not expected to reach this limit memory swap peak A read write single value file which exists on non root cgroups The max swap usage recorded for the cgroup and its descendants since the creation of the cgroup or the most recent reset for that FD A write of any non empty string to this file resets it to the current memory usage for subsequent reads through the same file descriptor memory swap max A read write single value file which exists on non root cgroups The default is max Swap usage hard limit If a cgroup s swap usage reaches this limit anonymous memory of the cgroup will not be swapped out memory swap events A read only flat keyed file which exists on non root cgroups The following entries are defined Unless specified otherwise a value change in this file generates a file modified event high The number of times the cgroup s swap usage was over the high threshold max The number of times the cgroup s swap usage was about to go over the max boundary and swap allocation failed fail The number of times swap allocation failed either because of running out of swap system wide or max limit When reduced under the current usage the existing swap entries are reclaimed gradually and the swap usage may stay higher than the limit for an extended period of time This reduces the impact on the workload and memory management memory zswap current A read only single value file which exists on non root cgroups The total amount of memory consumed by the zswap compression backend memory zswap max A read write single value file which exists on non root cgroups The default is max Zswap usage hard limit If a cgroup s zswap pool reaches this limit it will refuse to take any more stores before existing entries fault back in or are written out to disk memory zswap writeback A read write single value file The default value is 1 Note that this setting is hierarchical i e the writeback would be implicitly disabled for child cgroups if the upper hierarchy does so When this is set to 0 all swapping attempts to swapping devices are disabled This included both zswap writebacks and swapping due to zswap store failures If the zswap store failures are recurring for e g if the pages are incompressible users can observe reclaim inefficiency after disabling writeback because the same pages might be rejected again and again Note that this is subtly different from setting memory swap max to 0 as it still allows for pages to be written to the zswap pool This setting has no effect if zswap is disabled and swapping is allowed unless memory swap max is set to 0 memory pressure A read only nested keyed file Shows pressure stall information for memory See ref Documentation accounting psi rst psi for details Usage Guidelines memory high is the main mechanism to control memory usage Over committing on high limit sum of high limits available memory and letting global memory pressure to distribute memory according to usage is a viable strategy Because breach of the high limit doesn t trigger the OOM killer but throttles the offending cgroup a management agent has ample opportunities to monitor and take appropriate actions such as granting more memory or terminating the workload Determining whether a cgroup has enough memory is not trivial as memory usage doesn t indicate whether the workload can benefit from more memory For 
example a workload which writes data received from network to a file can use all available memory but can also operate as performant with a small amount of memory A measure of memory pressure how much the workload is being impacted due to lack of memory is necessary to determine whether a workload needs more memory unfortunately memory pressure monitoring mechanism isn t implemented yet Memory Ownership A memory area is charged to the cgroup which instantiated it and stays charged to the cgroup until the area is released Migrating a process to a different cgroup doesn t move the memory usages that it instantiated while in the previous cgroup to the new cgroup A memory area may be used by processes belonging to different cgroups To which cgroup the area will be charged is in deterministic however over time the memory area is likely to end up in a cgroup which has enough memory allowance to avoid high reclaim pressure If a cgroup sweeps a considerable amount of memory which is expected to be accessed repeatedly by other cgroups it may make sense to use POSIX FADV DONTNEED to relinquish the ownership of memory areas belonging to the affected files to ensure correct memory ownership IO The io controller regulates the distribution of IO resources This controller implements both weight based and absolute bandwidth or IOPS limit distribution however weight based distribution is available only if cfq iosched is in use and neither scheme is available for blk mq devices IO Interface Files io stat A read only nested keyed file Lines are keyed by MAJ MIN device numbers and not ordered The following nested keys are defined rbytes Bytes read wbytes Bytes written rios Number of read IOs wios Number of write IOs dbytes Bytes discarded dios Number of discard IOs An example read output follows 8 16 rbytes 1459200 wbytes 314773504 rios 192 wios 353 dbytes 0 dios 0 8 0 rbytes 90430464 wbytes 299008000 rios 8950 wios 1252 dbytes 50331648 dios 3021 io cost qos A read write nested keyed file which exists only on the root cgroup This file configures the Quality of Service of the IO cost model based controller CONFIG BLK CGROUP IOCOST which currently implements io weight proportional control Lines are keyed by MAJ MIN device numbers and not ordered The line for a given device is populated on the first write for the device on io cost qos or io cost model The following nested keys are defined enable Weight based control enable ctrl auto or user rpct Read latency percentile 0 100 rlat Read latency threshold wpct Write latency percentile 0 100 wlat Write latency threshold min Minimum scaling percentage 1 10000 max Maximum scaling percentage 1 10000 The controller is disabled by default and can be enabled by setting enable to 1 rpct and wpct parameters default to zero and the controller uses internal device saturation state to adjust the overall IO rate between min and max When a better control quality is needed latency QoS parameters can be configured For example 8 16 enable 1 ctrl auto rpct 95 00 rlat 75000 wpct 95 00 wlat 150000 min 50 00 max 150 0 shows that on sdb the controller is enabled will consider the device saturated if the 95th percentile of read completion latencies is above 75ms or write 150ms and adjust the overall IO issue rate between 50 and 150 accordingly The lower the saturation point the better the latency QoS at the cost of aggregate bandwidth The narrower the allowed adjustment range between min and max the more conformant to the cost model the IO behavior Note that the IO issue base rate may be 
far off from 100% and setting "min" and "max" blindly can lead to a significant loss of device capacity or control quality. "min" and "max" are useful for regulating devices which show wide temporary behavior changes, e.g. an SSD which accepts writes at the line speed for a while and then completely stalls for multiple seconds.

When "ctrl" is "auto", the parameters are controlled by the kernel and may change automatically. Setting "ctrl" to "user" or setting any of the percentile and latency parameters puts it into "user" mode and disables the automatic changes. The automatic mode can be restored by setting "ctrl" to "auto".

io.cost.model: A read-write nested-keyed file which exists only on the root cgroup.

This file configures the cost model of the IO cost model based controller (CONFIG_BLK_CGROUP_IOCOST) which currently implements "io.weight" proportional control. Lines are keyed by $MAJ:$MIN device numbers and not ordered. The line for a given device is populated on the first write for the device on "io.cost.qos" or "io.cost.model". The following nested keys are defined.

- ctrl: "auto" or "user"
- model: The cost model in use - "linear"

When "ctrl" is "auto", the kernel may change all parameters dynamically. When "ctrl" is set to "user" or any other parameters are written to, "ctrl" becomes "user" and the automatic changes are disabled.

When "model" is "linear", the following model parameters are defined.

- [r|w]bps: The maximum sequential IO throughput
- [r|w]seqiops: The maximum 4k sequential IOs per second
- [r|w]randiops: The maximum 4k random IOs per second

From the above, the builtin linear model determines the base costs of a sequential and random IO and the cost coefficient for the IO size. While simple, this model can cover most common device classes acceptably.

The IO cost model isn't expected to be accurate in an absolute sense and is scaled to the device behavior dynamically. If needed, tools/cgroup/iocost_coef_gen.py can be used to generate device-specific coefficients.

io.weight: A read-write flat-keyed file which exists on non-root cgroups. The default is "default 100".

The first line is the default weight applied to devices without a specific override. The rest are overrides keyed by $MAJ:$MIN device numbers and not ordered. The weights are in the range [1, 10000] and specify the relative amount of IO time the cgroup can use in relation to its siblings.

The default weight can be updated by writing either "default $WEIGHT" or simply "$WEIGHT". Overrides can be set by writing "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".

An example read output follows:

    default 100
    8:16 200
    8:0 50

io.max: A read-write nested-keyed file which exists on non-root cgroups. BPS and IOPS based IO limit. Lines are keyed by $MAJ:$MIN device numbers and not ordered. The following nested keys are defined.

- rbps: Max read bytes per second
- wbps: Max write bytes per second
- riops: Max read IO operations per second
- wiops: Max write IO operations per second

When writing, any number of nested key-value pairs can be specified in any order. "max" can be specified as the value to remove a specific limit. If the same key is specified multiple times, the outcome is undefined.

BPS and IOPS are measured in each IO direction and IOs are delayed if the limit is reached. Temporary bursts are allowed.

Setting a read limit at 2M BPS and a write limit at 120 IOPS for 8:16:

    echo "8:16 rbps=2097152 wiops=120" > io.max

Reading returns the following:

    8:16 rbps=2097152 wbps=max riops=max wiops=120

The write IOPS limit can be removed by writing the following:

    echo "8:16 wiops=max" > io.max

Reading now returns the following:

    8:16 rbps=2097152 wbps=max riops=max wiops=max

io.pressure: A read-only
nested keyed file Shows pressure stall information for IO See ref Documentation accounting psi rst psi for details Writeback Page cache is dirtied through buffered writes and shared mmaps and written asynchronously to the backing filesystem by the writeback mechanism Writeback sits between the memory and IO domains and regulates the proportion of dirty memory by balancing dirtying and write IOs The io controller in conjunction with the memory controller implements control of page cache writeback IOs The memory controller defines the memory domain that dirty memory ratio is calculated and maintained for and the io controller defines the io domain which writes out dirty pages for the memory domain Both system wide and per cgroup dirty memory states are examined and the more restrictive of the two is enforced cgroup writeback requires explicit support from the underlying filesystem Currently cgroup writeback is implemented on ext2 ext4 btrfs f2fs and xfs On other filesystems all writeback IOs are attributed to the root cgroup There are inherent differences in memory and writeback management which affects how cgroup ownership is tracked Memory is tracked per page while writeback per inode For the purpose of writeback an inode is assigned to a cgroup and all IO requests to write dirty pages from the inode are attributed to that cgroup As cgroup ownership for memory is tracked per page there can be pages which are associated with different cgroups than the one the inode is associated with These are called foreign pages The writeback constantly keeps track of foreign pages and if a particular foreign cgroup becomes the majority over a certain period of time switches the ownership of the inode to that cgroup While this model is enough for most use cases where a given inode is mostly dirtied by a single cgroup even when the main writing cgroup changes over time use cases where multiple cgroups write to a single inode simultaneously are not supported well In such circumstances a significant portion of IOs are likely to be attributed incorrectly As memory controller assigns page ownership on the first use and doesn t update it until the page is released even if writeback strictly follows page ownership multiple cgroups dirtying overlapping areas wouldn t work as expected It s recommended to avoid such usage patterns The sysctl knobs which affect writeback behavior are applied to cgroup writeback as follows vm dirty background ratio vm dirty ratio These ratios apply the same to cgroup writeback with the amount of available memory capped by limits imposed by the memory controller and system wide clean memory vm dirty background bytes vm dirty bytes For cgroup writeback this is calculated into ratio against total available memory and applied the same way as vm dirty background ratio IO Latency This is a cgroup v2 controller for IO workload protection You provide a group with a latency target and if the average latency exceeds that target the controller will throttle any peers that have a lower latency target than the protected workload The limits are only applied at the peer level in the hierarchy This means that in the diagram below only groups A B and C will influence each other and groups D and F will influence each other Group G will influence nobody root A B C D F G So the ideal way to configure this is to set io latency in groups A B and C Generally you do not want to set a value lower than the latency your device supports Experiment to find the value that works best for your workload Start at 
higher than the expected latency for your device and watch the avg lat value in io stat for your workload group to get an idea of the latency you see during normal operation Use the avg lat value as a basis for your real setting setting at 10 15 higher than the value in io stat How IO Latency Throttling Works io latency is work conserving so as long as everybody is meeting their latency target the controller doesn t do anything Once a group starts missing its target it begins throttling any peer group that has a higher target than itself This throttling takes 2 forms Queue depth throttling This is the number of outstanding IO s a group is allowed to have We will clamp down relatively quickly starting at no limit and going all the way down to 1 IO at a time Artificial delay induction There are certain types of IO that cannot be throttled without possibly adversely affecting higher priority groups This includes swapping and metadata IO These types of IO are allowed to occur normally however they are charged to the originating group If the originating group is being throttled you will see the use delay and delay fields in io stat increase The delay value is how many microseconds that are being added to any process that runs in this group Because this number can grow quite large if there is a lot of swapping or metadata IO occurring we limit the individual delay events to 1 second at a time Once the victimized group starts meeting its latency target again it will start unthrottling any peer groups that were throttled previously If the victimized group simply stops doing IO the global counter will unthrottle appropriately IO Latency Interface Files io latency This takes a similar format as the other controllers MAJOR MINOR target target time in microseconds io stat If the controller is enabled you will see extra stats in io stat in addition to the normal ones depth This is the current queue depth for the group avg lat This is an exponential moving average with a decay rate of 1 exp bound by the sampling interval The decay rate interval can be calculated by multiplying the win value in io stat by the corresponding number of samples based on the win value win The sampling window size in milliseconds This is the minimum duration of time between evaluation events Windows only elapse with IO activity Idle periods extend the most recent window IO Priority A single attribute controls the behavior of the I O priority cgroup policy namely the io prio class attribute The following values are accepted for that attribute no change Do not modify the I O priority class promote to rt For requests that have a non RT I O priority class change it into RT Also change the priority level of these requests to 4 Do not modify the I O priority of requests that have priority class RT restrict to be For requests that do not have an I O priority class or that have I O priority class RT change it into BE Also change the priority level of these requests to 0 Do not modify the I O priority class of requests that have priority class IDLE idle Change the I O priority class of all requests into IDLE the lowest I O priority class none to rt Deprecated Just an alias for promote to rt The following numerical values are associated with the I O priority policies no change 0 promote to rt 1 restrict to be 2 idle 3 The numerical value that corresponds to each I O priority class is as follows IOPRIO CLASS NONE 0 IOPRIO CLASS RT real time 1 IOPRIO CLASS BE best effort 2 IOPRIO CLASS IDLE 3 The algorithm to set the I O priority class 
for a request is as follows If I O priority class policy is promote to rt change the request I O priority class to IOPRIO CLASS RT and change the request I O priority level to 4 If I O priority class policy is not promote to rt translate the I O priority class policy into a number then change the request I O priority class into the maximum of the I O priority class policy number and the numerical I O priority class PID The process number controller is used to allow a cgroup to stop any new tasks from being fork d or clone d after a specified limit is reached The number of tasks in a cgroup can be exhausted in ways which other controllers cannot prevent thus warranting its own controller For example a fork bomb is likely to exhaust the number of tasks before hitting memory restrictions Note that PIDs used in this controller refer to TIDs process IDs as used by the kernel PID Interface Files pids max A read write single value file which exists on non root cgroups The default is max Hard limit of number of processes pids current A read only single value file which exists on non root cgroups The number of processes currently in the cgroup and its descendants pids peak A read only single value file which exists on non root cgroups The maximum value that the number of processes in the cgroup and its descendants has ever reached pids events A read only flat keyed file which exists on non root cgroups Unless specified otherwise a value change in this file generates a file modified event The following entries are defined max The number of times the cgroup s total number of processes hit the pids max limit see also pids localevents pids events local Similar to pids events but the fields in the file are local to the cgroup i e not hierarchical The file modified event generated on this file reflects only the local events Organisational operations are not blocked by cgroup policies so it is possible to have pids current pids max This can be done by either setting the limit to be smaller than pids current or attaching enough processes to the cgroup such that pids current is larger than pids max However it is not possible to violate a cgroup PID policy through fork or clone These will return EAGAIN if the creation of a new process would cause a cgroup policy to be violated Cpuset The cpuset controller provides a mechanism for constraining the CPU and memory node placement of tasks to only the resources specified in the cpuset interface files in a task s current cgroup This is especially valuable on large NUMA systems where placing jobs on properly sized subsets of the systems with careful processor and memory placement to reduce cross node memory access and contention can improve overall system performance The cpuset controller is hierarchical That means the controller cannot use CPUs or memory nodes not allowed in its parent Cpuset Interface Files cpuset cpus A read write multiple values file which exists on non root cpuset enabled cgroups It lists the requested CPUs to be used by tasks within this cgroup The actual list of CPUs to be granted however is subjected to constraints imposed by its parent and can differ from the requested CPUs The CPU numbers are comma separated numbers or ranges For example cat cpuset cpus 0 4 6 8 10 An empty value indicates that the cgroup is using the same setting as the nearest cgroup ancestor with a non empty cpuset cpus or all the available CPUs if none is found The value of cpuset cpus stays constant until the next update and won t be affected by any CPU hotplug events 
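The cpuset.cpus format described above can be exercised with a few commands. This is a sketch only, not from the original text; the path, the cgroup name "pinned", and the CPU list are assumptions, and the cpuset controller is assumed to be available on the system:

```
# Enable the cpuset controller for children of the root cgroup.
echo "+cpuset" > /sys/fs/cgroup/cgroup.subtree_control
mkdir /sys/fs/cgroup/pinned
# Request CPUs 0-4 and 6 for tasks in this cgroup.
echo "0-4,6" > /sys/fs/cgroup/pinned/cpuset.cpus
# The CPUs actually granted, subject to the parent's constraints and CPU hotplug.
cat /sys/fs/cgroup/pinned/cpuset.cpus.effective
```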
==========================================
Reducing OS jitter due to per-cpu kthreads
==========================================
This document lists per-CPU kthreads in the Linux kernel and presents
options to control their OS jitter. Note that non-per-CPU kthreads are
not listed here. To reduce OS jitter from non-per-CPU kthreads, bind
them to a "housekeeping" CPU dedicated to such work.
References
==========
- Documentation/core-api/irq/irq-affinity.rst: Binding interrupts to sets of CPUs.
- Documentation/admin-guide/cgroup-v1: Using cgroups to bind tasks to sets of CPUs.
- man taskset: Using the taskset command to bind tasks to sets
of CPUs.
- man sched_setaffinity: Using the sched_setaffinity() system
call to bind tasks to sets of CPUs.
- /sys/devices/system/cpu/cpuN/online: Control CPU N's hotplug state,
writing "0" to offline and "1" to online.
- In order to locate kernel-generated OS jitter on CPU N:
cd /sys/kernel/tracing
echo 1 > max_graph_depth # Increase the "1" for more detail
echo function_graph > current_tracer
# run workload
cat per_cpu/cpuN/trace
kthreads
========
Name:
ehca_comp/%u
Purpose:
Periodically process Infiniband-related work.
To reduce its OS jitter, do any of the following:
1. Don't use eHCA Infiniband hardware, instead choosing hardware
that does not require per-CPU kthreads. This will prevent these
kthreads from being created in the first place. (This will
work for most people, as this hardware, though important, is
relatively old and is produced in relatively low unit volumes.)
2. Do all eHCA-Infiniband-related work on other CPUs, including
interrupts.
3. Rework the eHCA driver so that its per-CPU kthreads are
provisioned only on selected CPUs.
Name:
irq/%d-%s
Purpose:
Handle threaded interrupts.
To reduce its OS jitter, do the following:
1. Use irq affinity to force the irq threads to execute on
some other CPU.
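For example (assuming, hypothetically, that IRQ 44 is the interrupt in
question and that CPU 3 is the CPU to be de-jittered), the interrupt and
its irq thread can be steered to the remaining CPUs like this:
grep '44:' /proc/interrupts                  # confirm the IRQ line
echo 0-2 > /proc/irq/44/smp_affinity_list    # allow only CPUs 0-2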
Name:
kcmtpd_ctr_%d
Purpose:
Handle Bluetooth work.
To reduce its OS jitter, do one of the following:
1. Don't use Bluetooth, in which case these kthreads won't be
created in the first place.
2. Use irq affinity to force Bluetooth-related interrupts to
occur on some other CPU and furthermore initiate all
Bluetooth activity on some other CPU.
Name:
ksoftirqd/%u
Purpose:
Execute softirq handlers when threaded or when under heavy load.
To reduce its OS jitter, each softirq vector must be handled
separately as follows:
TIMER_SOFTIRQ
-------------
Do all of the following:
1. To the extent possible, keep the CPU out of the kernel when it
is non-idle, for example, by avoiding system calls and by forcing
both kernel threads and interrupts to execute elsewhere.
2. Build with CONFIG_HOTPLUG_CPU=y. After boot completes, force
the CPU offline, then bring it back online. This forces
recurring timers to migrate elsewhere. If you are concerned
with multiple CPUs, force them all offline before bringing the
first one back online. Once you have onlined the CPUs in question,
do not offline any other CPUs, because doing so could force the
timer back onto one of the CPUs in question.
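For example, item 2 above could be carried out like this (assuming,
hypothetically, that CPU 3 is the CPU to be de-jittered):
echo 0 > /sys/devices/system/cpu/cpu3/online   # force the CPU offline
echo 1 > /sys/devices/system/cpu/cpu3/online   # bring it back online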
NET_TX_SOFTIRQ and NET_RX_SOFTIRQ
---------------------------------
Do all of the following:
1. Force networking interrupts onto other CPUs.
2. Initiate any network I/O on other CPUs.
3. Once your application has started, prevent CPU-hotplug operations
from being initiated from tasks that might run on the CPU to
be de-jittered. (It is OK to force this CPU offline and then
bring it back online before you start your application.)
BLOCK_SOFTIRQ
-------------
Do all of the following:
1. Force block-device interrupts onto some other CPU.
2. Initiate any block I/O on other CPUs.
3. Once your application has started, prevent CPU-hotplug operations
from being initiated from tasks that might run on the CPU to
be de-jittered. (It is OK to force this CPU offline and then
bring it back online before you start your application.)
IRQ_POLL_SOFTIRQ
----------------
Do all of the following:
1. Force block-device interrupts onto some other CPU.
2. Initiate any block I/O and block-I/O polling on other CPUs.
3. Once your application has started, prevent CPU-hotplug operations
from being initiated from tasks that might run on the CPU to
be de-jittered. (It is OK to force this CPU offline and then
bring it back online before you start your application.)
TASKLET_SOFTIRQ
---------------
Do one or more of the following:
1. Avoid use of drivers that use tasklets. (Such drivers will contain
calls to things like tasklet_schedule().)
2. Convert all drivers that you must use from tasklets to workqueues.
3. Force interrupts for drivers using tasklets onto other CPUs,
and also do I/O involving these drivers on other CPUs.
SCHED_SOFTIRQ
-------------
Do all of the following:
1. Avoid sending scheduler IPIs to the CPU to be de-jittered,
for example, ensure that at most one runnable kthread is present
on that CPU. If a thread that expects to run on the de-jittered
CPU awakens, the scheduler will send an IPI that can result in
a subsequent SCHED_SOFTIRQ.
2. Build with CONFIG_NO_HZ_FULL=y and ensure that the CPU to be de-jittered
is marked as an adaptive-ticks CPU using the "nohz_full="
boot parameter (see the example after this list). This reduces
the number of scheduler-clock
interrupts that the de-jittered CPU receives, minimizing its
chances of being selected to do the load balancing work that
runs in SCHED_SOFTIRQ context.
3. To the extent possible, keep the CPU out of the kernel when it
is non-idle, for example, by avoiding system calls and by
forcing both kernel threads and interrupts to execute elsewhere.
This further reduces the number of scheduler-clock interrupts
received by the de-jittered CPU.
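As a sketch of item 2 above (assuming, hypothetically, that CPU 3 is the
CPU to be de-jittered), the following could be added to the kernel boot
command line:
nohz_full=3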
HRTIMER_SOFTIRQ
---------------
Do all of the following:
1. To the extent possible, keep the CPU out of the kernel when it
is non-idle. For example, avoid system calls and force both
kernel threads and interrupts to execute elsewhere.
2. Build with CONFIG_HOTPLUG_CPU=y. Once boot completes, force the
CPU offline, then bring it back online. This forces recurring
timers to migrate elsewhere. If you are concerned with multiple
CPUs, force them all offline before bringing the first one
back online. Once you have onlined the CPUs in question, do not
offline any other CPUs, because doing so could force the timer
back onto one of the CPUs in question.
RCU_SOFTIRQ
-----------
Do at least one of the following:
1. Offload callbacks and keep the CPU in either dyntick-idle or
adaptive-ticks state by doing all of the following:
a. Build with CONFIG_NO_HZ_FULL=y and ensure that the CPU to be
de-jittered is marked as an adaptive-ticks CPU using the
"nohz_full=" boot parameter (see the example after this list).
Bind the rcuo kthreads to housekeeping CPUs, which can
tolerate OS jitter.
b. To the extent possible, keep the CPU out of the kernel
when it is non-idle, for example, by avoiding system
calls and by forcing both kernel threads and interrupts
to execute elsewhere.
2. Enable RCU to do its processing remotely via dyntick-idle by
doing all of the following:
a. Build with CONFIG_NO_HZ=y.
b. Ensure that the CPU goes idle frequently, allowing other
CPUs to detect that it has passed through an RCU quiescent
state. If the kernel is built with CONFIG_NO_HZ_FULL=y,
userspace execution also allows other CPUs to detect that
the CPU in question has passed through a quiescent state.
c. To the extent possible, keep the CPU out of the kernel
when it is non-idle, for example, by avoiding system
calls and by forcing both kernel threads and interrupts
to execute elsewhere.
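As a sketch of item 1 above (assuming, hypothetically, that CPU 3 is the
CPU to be de-jittered), the following kernel boot parameters mark the CPU
as adaptive-ticks and offload its RCU callbacks:
nohz_full=3 rcu_nocbs=3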
Name:
kworker/%u:%d%s (cpu, id, priority)
Purpose:
Execute workqueue requests
To reduce its OS jitter, do any of the following:
1. Run your workload at a real-time priority, which will allow
preempting the kworker daemons.
2. A given workqueue can be made visible in the sysfs filesystem
by passing the WQ_SYSFS flag to that workqueue's alloc_workqueue().
Such a workqueue can be confined to a given subset of the
CPUs using the ``/sys/devices/virtual/workqueue/*/cpumask`` sysfs
files (see the example after this list). The set of WQ_SYSFS
workqueues can be displayed using
"ls /sys/devices/virtual/workqueue". That said, the workqueues
maintainer would like to caution people against indiscriminately
sprinkling WQ_SYSFS across all the workqueues. The reason for
caution is that it is easy to add WQ_SYSFS, but because sysfs is
part of the formal user/kernel API, it can be nearly impossible
to remove it, even if its addition was a mistake.
3. Do any of the following needed to avoid jitter that your
application cannot tolerate:
a. Avoid using oprofile, thus avoiding OS jitter from
wq_sync_buffer().
b. Limit your CPU frequency so that a CPU-frequency
governor is not required, possibly enlisting the aid of
special heatsinks or other cooling technologies. If done
correctly, and if your CPU architecture permits, you should
be able to build your kernel with CONFIG_CPU_FREQ=n to
avoid the CPU-frequency governor periodically running
on each CPU, including cs_dbs_timer() and od_dbs_timer().
WARNING: Please check your CPU specifications to
make sure that this is safe on your particular system.
c. As of v3.18, Christoph Lameter's on-demand vmstat workers
commit prevents OS jitter due to vmstat_update() on
CONFIG_SMP=y systems. Before v3.18, it is not possible
to entirely get rid of the OS jitter, but you can
decrease its frequency by writing a large value to
/proc/sys/vm/stat_interval. The default value is HZ,
for an interval of one second. Of course, larger values
will make your virtual-memory statistics update more
slowly. Of course, you can also run your workload at
a real-time priority, thus preempting vmstat_update(),
but if your workload is CPU-bound, this is a bad idea.
However, there is an RFC patch from Christoph Lameter
(based on an earlier one from Gilad Ben-Yossef) that
reduces or even eliminates vmstat overhead for some
workloads at https://lore.kernel.org/r/00000140e9dfd6bd-40db3d4f-c1be-434f-8132-7820f81bb586-000000@email.amazonses.com.
d. If running on high-end powerpc servers, build with
CONFIG_PPC_RTAS_DAEMON=n. This prevents the RTAS
daemon from running on each CPU every second or so.
(This will require editing Kconfig files and will defeat
this platform's RAS functionality.) This avoids jitter
due to the rtas_event_scan() function.
WARNING: Please check your CPU specifications to
make sure that this is safe on your particular system.
e. If running on Cell Processor, build your kernel with
CBE_CPUFREQ_SPU_GOVERNOR=n to avoid OS jitter from
spu_gov_work().
WARNING: Please check your CPU specifications to
make sure that this is safe on your particular system.
f. If running on PowerMAC, build your kernel with
CONFIG_PMAC_RACKMETER=n to disable the CPU-meter,
avoiding OS jitter from rackmeter_do_timer().
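As an illustration of item 2 above (assuming, hypothetically, that CPUs
0-2 are the housekeeping CPUs, i.e. mask 0x7, and that the "writeback"
workqueue is WQ_SYSFS on your system):
ls /sys/devices/virtual/workqueue              # list WQ_SYSFS workqueues
echo 7 > /sys/devices/virtual/workqueue/writeback/cpumask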
Name:
rcuc/%u
Purpose:
Execute RCU callbacks in CONFIG_RCU_BOOST=y kernels.
To reduce its OS jitter, do at least one of the following:
1. Build the kernel with CONFIG_PREEMPT=n. This prevents these
kthreads from being created in the first place, and also obviates
the need for RCU priority boosting. This approach is feasible
for workloads that do not require high degrees of responsiveness.
2. Build the kernel with CONFIG_RCU_BOOST=n. This prevents these
kthreads from being created in the first place. This approach
is feasible only if your workload never requires RCU priority
boosting, for example, if you ensure frequent idle time on all
CPUs that might execute within the kernel.
3. Build with CONFIG_RCU_NOCB_CPU=y and boot with the rcu_nocbs=
boot parameter offloading RCU callbacks from all CPUs susceptible
to OS jitter. This approach prevents the rcuc/%u kthreads from
having any work to do, so that they are never awakened.
4. Ensure that the CPU never enters the kernel, and, in particular,
avoid initiating any CPU hotplug operations on this CPU. This is
another way of preventing any callbacks from being queued on the
CPU, again preventing the rcuc/%u kthreads from having any work
to do.
Name:
rcuop/%d, rcuos/%d, and rcuog/%d
Purpose:
Offload RCU callbacks from the corresponding CPU.
To reduce its OS jitter, do at least one of the following:
1. Use affinity, cgroups, or other mechanism to force these kthreads
to execute on some other CPU.
2. Build with CONFIG_RCU_NOCB_CPU=n, which will prevent these
kthreads from being created in the first place. However, please
note that this will not eliminate OS jitter, but will instead
shift it to RCU_SOFTIRQ.
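As an illustration of item 1 above (assuming, hypothetically, that CPUs
0-2 are the housekeeping CPUs and that pgrep and taskset are available):
for pid in $(pgrep '^rcuo'); do taskset -cp 0-2 "$pid"; done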
.. SPDX-License-Identifier: (GPL-2.0+ OR CC-BY-4.0)
.. See the bottom of this file for additional redistribution information.
Reporting issues
++++++++++++++++
The short guide (aka TL;DR)
===========================
Are you facing a regression with vanilla kernels from the same stable or
longterm series? One still supported? Then search the `LKML
<https://lore.kernel.org/lkml/>`_ and the `Linux stable mailing list
<https://lore.kernel.org/stable/>`_ archives for matching reports to join. If
you don't find any, install `the latest release from that series
<https://kernel.org/>`_. If it still shows the issue, report it to the stable
mailing list ([email protected]) and CC the regressions list
([email protected]); ideally also CC the maintainer and the mailing
list for the subsystem in question.
In all other cases try your best guess which kernel part might be causing the
issue. Check the :ref:`MAINTAINERS <maintainers>` file for how its developers
expect to be told about problems, which most of the time will be by email with a
mailing list in CC. Check the destination's archives for matching reports;
search the `LKML <https://lore.kernel.org/lkml/>`_ and the web, too. If you
don't find any to join, install `the latest mainline kernel
<https://kernel.org/>`_. If the issue is present there, send a report.
The issue was fixed there, but you would like to see it resolved in a still
supported stable or longterm series as well? Then install its latest release.
If it shows the problem, search for the change that fixed it in mainline and
check if backporting is in the works or was discarded; if it's neither, ask
those who handled the change for it.
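One way to check whether a mainline fix already landed in a stable series
is to search the stable branch for the mainline commit id, which stable
backports reference in their commit message (a sketch; the commit id
1234567890ab is a hypothetical placeholder and the stable tree is assumed
to be available locally as a remote named "stable"):
git log --oneline --grep=1234567890ab stable/linux-5.10.y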
**General remarks**: When installing and testing a kernel as outlined above,
ensure it's vanilla (IOW: not patched and not using add-on modules). Also make
sure it's built and running in a healthy environment and not already tainted
before the issue occurs.
If you are facing multiple issues with the Linux kernel at once, report each
separately. While writing your report, include all information relevant to the
issue, like the kernel and the distro used. In case of a regression, CC the
regressions mailing list ([email protected]) to your report. Also try
to pin-point the culprit with a bisection; if you succeed, include its
commit-id and CC everyone in the sign-off-by chain.
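A bisection of such a regression might look like this (the version numbers
are hypothetical placeholders for your last known-good and first known-bad
kernels):
git bisect start
git bisect bad v5.11      # first version known to show the issue
git bisect good v5.10     # last version known to work
# now build, install, and test the kernel git checked out, then run
# 'git bisect good' or 'git bisect bad' accordingly; repeat until git
# prints the first bad commit
git bisect reset          # clean up afterwards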
Once the report is out, answer any questions that come up and help where you
can. That includes keeping the ball rolling by occasionally retesting with newer
releases and sending a status update afterwards.
Step-by-step guide how to report issues to the kernel maintainers
=================================================================
The above TL;DR outlines roughly how to report issues to the Linux kernel
developers. It might be all that's needed for people already familiar with
reporting issues to Free/Libre & Open Source Software (FLOSS) projects. For
everyone else there is this section. It is more detailed and uses a
step-by-step approach. It still tries to be brief for readability and leaves
out a lot of details; those are described below the step-by-step guide in a
reference section, which explains each of the steps in more detail.
Note: this section covers a few more aspects than the TL;DR and does things in
a slightly different order. That's in your interest, to make sure you notice
early if an issue that looks like a Linux kernel problem is actually caused by
something else. These steps thus help to ensure the time you invest in this
process won't feel wasted in the end:
* Are you facing an issue with a Linux kernel a hardware or software vendor
provided? Then in almost all cases you are better off to stop reading this
document and report the issue to your vendor instead, unless you are
willing to install the latest Linux version yourself. Be aware the latter
will often be needed anyway to hunt down and fix issues.
* Perform a rough search for existing reports with your favorite internet
search engine; additionally, check the archives of the `Linux Kernel Mailing
List (LKML) <https://lore.kernel.org/lkml/>`_. If you find matching reports,
join the discussion instead of sending a new one.
* See if the issue you are dealing with qualifies as regression, security
issue, or a really severe problem: those are 'issues of high priority' that
need special handling in some steps that are about to follow.
* Make sure it's not the kernel's surroundings that are causing the issue
you face.
* Create a fresh backup and put system repair and restore tools at hand.
* Ensure your system does not enhance its kernels by building additional
kernel modules on-the-fly, which solutions like DKMS might be doing locally
without your knowledge.
* Check if your kernel was 'tainted' when the issue occurred, as the event
that made the kernel set this flag might be causing the issue you face.
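The taint state of the running kernel can be checked like this; a value
other than 0 means the kernel is tainted (recent kernel sources also ship
a helper, tools/debugging/kernel-chktaint, that decodes the individual
flags):
cat /proc/sys/kernel/tainted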
* Write down coarsely how to reproduce the issue. If you deal with multiple
issues at once, create separate notes for each of them and make sure they
work independently on a freshly booted system. That's needed, as each issue
needs to get reported to the kernel developers separately, unless they are
strongly entangled.
* If you are facing a regression within a stable or longterm version line
(say something broke when updating from 5.10.4 to 5.10.5), scroll down to
'Dealing with regressions within a stable and longterm kernel line'.
* Locate the driver or kernel subsystem that seems to be causing the issue.
Find out how and where its developers expect reports. Note: most of the
time this won't be bugzilla.kernel.org, as issues typically need to be sent
by mail to a maintainer and a public mailing list.
* Search the archives of the bug tracker or mailing list in question
thoroughly for reports that might match your issue. If you find anything,
join the discussion instead of sending a new report.
After these preparations you'll now enter the main part:
* Unless you are already running the latest 'mainline' Linux kernel, better
go and install it for the reporting process. Testing and reporting with
the latest 'stable' Linux can be an acceptable alternative in some
situations; during the merge window that actually might be even the best
approach, but in that development phase it can be an even better idea to
suspend your efforts for a few days anyway. Whatever version you choose,
ideally use a 'vanilla' build. Ignoring this advice will dramatically
increase the risk that your report will be rejected or ignored.
* Ensure the kernel you just installed does not 'taint' itself when
running.
* Reproduce the issue with the kernel you just installed. If it doesn't show
up there, scroll down to the instructions for issues only happening with
stable and longterm kernels.
* Optimize your notes: try to find and write the most straightforward way to
reproduce your issue. Make sure the end result has all the important
details, and at the same time is easy to read and understand for others
that hear about it for the first time. And if you learned something in this
process, consider searching again for existing reports about the issue.
* If your failure involves a 'panic', 'Oops', 'warning', or 'BUG', consider
decoding the kernel log to find the line of code that triggered the error.
* If your problem is a regression, try to narrow down when the issue was
introduced as much as possible.
* Start to compile the report by writing a detailed description about the
issue. Always mention a few things: the latest kernel version you installed
for reproducing, the Linux Distribution used, and your notes on how to
reproduce the issue. Ideally, make the kernel's build configuration
(.config) and the output from ``dmesg`` available somewhere on the net and
link to it. Include or upload all other information that might be relevant,
like the output/screenshot of an Oops or the output from ``lspci``. Once
you wrote this main part, insert a normal length paragraph on top of it
outlining the issue and the impact quickly. On top of this add one sentence
that briefly describes the problem and gets people to read on. Now give the
thing a descriptive title or subject that yet again is shorter. Then you're
ready to send or file the report like the MAINTAINERS file told you, unless
you are dealing with one of those 'issues of high priority': they need
special care which is explained in 'Special handling for high priority
issues' below.
* Wait for reactions and keep the thing rolling until you can accept the
outcome in one way or the other. Thus react publicly and in a timely manner
to any inquiries. Test proposed fixes. Do proactive testing: retest with at
least every first release candidate (RC) of a new mainline version and
report your results. Send friendly reminders if things stall. And try to
help yourself, if you don't get any help or if it's unsatisfying.
Reporting regressions within a stable and longterm kernel line
--------------------------------------------------------------
This subsection is for you, if you followed the above process and got sent here at
the point about regression within a stable or longterm kernel version line. You
face one of those if something breaks when updating from 5.10.4 to 5.10.5 (a
switch from 5.9.15 to 5.10.5 does not qualify). The developers want to fix such
regressions as quickly as possible, hence there is a streamlined process to
report them:
* Check if the kernel developers still maintain the Linux kernel version
line you care about: go to the `front page of kernel.org
<https://kernel.org/>`_ and make sure it mentions
the latest release of the particular version line without an '[EOL]' tag.
* Check the archives of the `Linux stable mailing list
<https://lore.kernel.org/stable/>`_ for existing reports.
* Install the latest release from the particular version line as a vanilla
kernel. Ensure this kernel is not tainted and still shows the problem, as
the issue might have already been fixed there. If you first noticed the
problem with a vendor kernel, check that a vanilla build of the last version
known to work performs fine as well.
* Send a short problem report to the Linux stable mailing list
([email protected]) and CC the Linux regressions mailing list
([email protected]); if you suspect the cause in a particular
subsystem, CC its maintainer and its mailing list. Roughly describe the
issue and ideally explain how to reproduce it. Mention the first version
that shows the problem and the last version that's working fine. Then
wait for further instructions.
The reference section below explains each of these steps in more detail.
Reporting issues only occurring in older kernel version lines
-------------------------------------------------------------
This subsection is for you, if you tried the latest mainline kernel as outlined
above, but failed to reproduce your issue there; at the same time you want to
see the issue fixed in a still supported stable or longterm series or vendor
kernels regularly rebased on those. If that's the case, follow these steps:
* Prepare yourself for the possibility that going through the next few steps
might not get the issue solved in older releases: the fix might be too big
or risky to get backported there.
* Perform the first three steps in the section "Dealing with regressions
within a stable and longterm kernel line" above.
* Search the Linux kernel version control system for the change that fixed
the issue in mainline, as its commit message might tell you if the fix is
scheduled for backporting already. If you don't find anything that way,
search the appropriate mailing lists for posts that discuss such an issue
or peer-review possible fixes; then check the discussions if the fix was
deemed unsuitable for backporting. If backporting was not considered at
all, join the newest discussion, asking if it's in the cards.
* One of the former steps should lead to a solution. If that doesn't work
out, ask the maintainers for the subsystem that seems to be causing the
issue for advice; CC the mailing list for the particular subsystem as well
as the stable mailing list.
The reference section below explains each of these steps in more detail.
Reference section: Reporting issues to the kernel maintainers
=============================================================
The detailed guides above outline all the major steps in brief fashion, which
should be enough for most people. But sometimes there are situations where even
experienced users might wonder how to actually do one of those steps. That's
what this section is for, as it will provide a lot more details on each of the
above steps. Consider this as reference documentation: it's possible to read it
from top to bottom. But it's mainly meant to be skimmed over and to serve as a
place to look up details on how to actually perform those steps.
A few words of general advice before digging into the details:
* The Linux kernel developers are well aware this process is complicated and
demands more than other FLOSS projects. We'd love to make it simpler. But
that would require work in various places as well as some infrastructure,
which would need constant maintenance; nobody has stepped up to do that
work, so that's just how things are for now.
* A warranty or support contract with some vendor doesn't entitle you to
request fixes from developers in the upstream Linux kernel community: such
contracts are completely outside the scope of the Linux kernel, its
development community, and this document. That's why you can't demand
anything such a contract guarantees in this context, not even if the
developer handling the issue works for the vendor in question. If you want
to claim your rights, use the vendor's support channel instead. When doing
so, you might want to mention you'd like to see the issue fixed in the
upstream Linux kernel; motivate them by saying it's the only way to ensure
the fix in the end will get incorporated in all Linux distributions.
* If you never reported an issue to a FLOSS project before you should consider
reading `How to Report Bugs Effectively
<https://www.chiark.greenend.org.uk/~sgtatham/bugs.html>`_, `How To Ask
Questions The Smart Way
<http://www.catb.org/esr/faqs/smart-questions.html>`_, and `How to ask good
questions <https://jvns.ca/blog/good-questions/>`_.
With that off the table, find below the details on how to properly report
issues to the Linux kernel developers.
Make sure you're using the upstream Linux kernel
------------------------------------------------
*Are you facing an issue with a Linux kernel a hardware or software vendor
provided? Then in almost all cases you are better off to stop reading this
document and report the issue to your vendor instead, unless you are
willing to install the latest Linux version yourself. Be aware the latter
will often be needed anyway to hunt down and fix issues.*
Like most programmers, Linux kernel developers don't like to spend time dealing
with reports for issues that don't even happen with their current code. It's
just a waste of everybody's time, especially yours. Unfortunately such situations
easily happen when it comes to the kernel and often lead to frustration on both
sides. That's because almost all Linux-based kernels pre-installed on devices
(Computers, Laptops, Smartphones, Routers, …) and most shipped by Linux
distributors are quite distant from the official Linux kernel as distributed by
kernel.org: the kernels from these vendors are often ancient from the point of
view of Linux development or heavily modified, often both.
Most of these vendor kernels are quite unsuitable for reporting issues to the
Linux kernel developers: an issue you face with one of them might have been
fixed by the Linux kernel developers months or years ago already; additionally,
the modifications and enhancements by the vendor might be causing the issue you
face, even if they look small or totally unrelated. That's why you should report
issues with these kernels to the vendor. Its developers should look into the
report and, in case it turns out to be an upstream issue, fix it directly
upstream or forward the report there. In practice that often does not work out
or might not be what you want. You thus might want to consider circumventing the
vendor by installing the very latest Linux kernel core yourself. If that's an
option for you, move ahead in this process, as a later step in this guide will
explain how to do that once other potential causes for your issue have been
ruled out.
Note, the previous paragraph starts with the word 'most', as sometimes
developers in fact are willing to handle reports about issues occurring with
vendor kernels. Whether they do in the end highly depends on the developers and the
issue in question. Your chances are quite good if the distributor applied only
small modifications to a kernel based on a recent Linux version; that for
example often holds true for the mainline kernels shipped by Debian GNU/Linux
Sid or Fedora Rawhide. Some developers will also accept reports about issues
with kernels from distributions shipping the latest stable kernel, as long as
it's only slightly modified; that for example is often the case for Arch Linux,
regular Fedora releases, and openSUSE Tumbleweed. But keep in mind that you
should prefer a mainline Linux kernel and avoid a stable kernel for this
process, as outlined in more detail in the section 'Install a fresh kernel for
testing'.
Obviously you are free to ignore all this advice and report problems with an old
or heavily modified vendor kernel to the upstream Linux developers. But note,
those often get rejected or ignored, so consider yourself warned. But it's still
better than not reporting the issue at all: sometimes such reports directly or
indirectly will help to get the issue fixed over time.
Search for existing reports, first run
--------------------------------------
*Perform a rough search for existing reports with your favorite internet
search engine; additionally, check the archives of the Linux Kernel Mailing
List (LKML). If you find matching reports, join the discussion instead of
sending a new one.*
Reporting an issue that someone else already brought forward is often a waste of
time for everyone involved, especially you as the reporter. So it's in your own
interest to thoroughly check if somebody reported the issue already. At this
step of the process it's okay to just perform a rough search: a later step will
tell you to perform a more detailed search once you know where your issue needs
to be reported to. Nevertheless, do not hurry with this step of the reporting
process, it can save you time and trouble.
Simply search the internet with your favorite search engine first. Afterwards,
search the `Linux Kernel Mailing List (LKML) archives
<https://lore.kernel.org/lkml/>`_.
If you get flooded with results, consider telling your search engine to limit
the search timeframe to the past month or year. And wherever you search, make sure
to use good search terms; vary them a few times, too. While doing so try to
look at the issue from the perspective of someone else: that will help you to
come up with other words to use as search terms. Also make sure not to use too
many search terms at once. Remember to search with and without information like
the name of the kernel driver or the name of the affected hardware component.
But its exact brand name (say 'ASUS Red Devil Radeon RX 5700 XT Gaming OC')
often is not that helpful, as it is too specific. Instead try search terms like
the model line (Radeon 5700 or Radeon 5000) and the code name of the main chip
('Navi' or 'Navi10') with and without its manufacturer ('AMD').
In case you find an existing report about your issue, join the discussion, as
you might be able to provide valuable additional information. That can be
important even when a fix is prepared or in its final stages already, as
developers might look for people that can provide additional information or
test a proposed fix. Jump to the section 'Duties after the report went out' for
details on how to get properly involved.
Note, searching `bugzilla.kernel.org <https://bugzilla.kernel.org/>`_ might also
be a good idea, as that might provide valuable insights or turn up matching
reports. If you find the latter, just keep in mind: most subsystems expect
reports in different places, as described below in the section "Check where you
need to report your issue". The developers that should take care of the issue
thus might not even be aware of the bugzilla ticket. Hence, check the ticket if
the issue already got reported as outlined in this document and if not consider
doing so.
Issue of high priority?
-----------------------
*See if the issue you are dealing with qualifies as regression, security
issue, or a really severe problem: those are 'issues of high priority' that
need special handling in some steps that are about to follow.*
Linus Torvalds and the leading Linux kernel developers want to see some issues
fixed as soon as possible, hence there are 'issues of high priority' that get
handled slightly differently in the reporting process. Three types of cases
qualify: regressions, security issues, and really severe problems.
You deal with a regression if some application or practical use case running
fine with one Linux kernel works worse or not at all with a newer version
compiled using a similar configuration. The document
Documentation/admin-guide/reporting-regressions.rst explains this in more
detail. It also provides a good deal of other information about regressions you
might want to be aware of; it for example explains how to add your issue to the
list of tracked regressions, to ensure it won't fall through the cracks.
What qualifies as security issue is left to your judgment. Consider reading
Documentation/process/security-bugs.rst before proceeding, as it
provides additional details how to best handle security issues.
An issue is a 'really severe problem' when something totally unacceptably bad
happens. That's for example the case when a Linux kernel corrupts the data it's
handling or damages hardware it's running on. You're also dealing with a severe
issue when the kernel suddenly stops working with an error message ('kernel
panic') or without any farewell note at all. Note: do not confuse a 'panic' (a
fatal error where the kernel stops itself) with an 'Oops' (a recoverable error),
as the kernel remains running after the latter.
Ensure a healthy environment
----------------------------
*Make sure it's not the kernel's surroundings that are causing the issue
you face.*
Problems that look a lot like a kernel issue are sometimes caused by the build
or runtime environment. It's hard to rule out that problem completely, but you
should minimize it:
* Use proven tools when building your kernel, as bugs in the compiler or the
binutils can cause the resulting kernel to misbehave.
* Ensure your computer components run within their design specifications;
that's especially important for the main processor, the main memory, and the
motherboard. Therefore, stop undervolting or overclocking when facing a
potential kernel issue.
* Try to make sure it's not faulty hardware that is causing your issue. Bad
main memory for example can result in a multitude of issues that will
manifest themselves in problems looking like kernel issues.
* If you're dealing with a filesystem issue, you might want to check the file
system in question with ``fsck``, as it might be damaged in a way that leads
to unexpected kernel behavior (see the sketch right after this list).
* When dealing with a regression, make sure it's not something else that
changed in parallel to updating the kernel. The problem for example might be
caused by other software that was updated at the same time. It can also
happen that a hardware component coincidentally just broke when you rebooted
into a new kernel for the first time. Updating the system's BIOS or changing
something in the BIOS Setup can also lead to problems that look a lot
like a kernel regression.
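A filesystem check for the ``fsck`` point above might, for an ext4 volume,
look roughly like this (a minimal sketch; the device name and mount point are
just placeholders, and the filesystem must be unmounted while checking)::

    # unmount the filesystem first, then force a full check
    [user@something ~]$ sudo umount /mnt/data
    [user@something ~]$ sudo fsck -f /dev/sdc1
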
Prepare for emergencies
-----------------------
*Create a fresh backup and put system repair and restore tools at hand.*
Reminder, you are dealing with computers, which sometimes do unexpected things,
especially if you fiddle with crucial parts like the kernel of their operating
system. That's what you are about to do in this process. Thus, make sure to
create a fresh backup; also ensure you have all tools at hand to repair or
reinstall the operating system as well as everything you need to restore the
backup.
Make sure your kernel doesn't get enhanced
------------------------------------------
*Ensure your system does not enhance its kernels by building additional
kernel modules on-the-fly, which solutions like DKMS might be doing locally
without your knowledge.*
The risk your issue report gets ignored or rejected dramatically increases if
your kernel gets enhanced in any way. That's why you should remove or disable
mechanisms like akmods and DKMS: those build add-on kernel modules
automatically, for example when you install a new Linux kernel or boot it for
the first time. Also remove any modules they might have installed. Then reboot
before proceeding.
Note, you might not be aware that your system is using one of these solutions:
they often get set up silently when you install Nvidia's proprietary graphics
driver, VirtualBox, or other software that requires some support from a
module not part of the Linux kernel. That's why you might need to uninstall the
packages with such software to get rid of any 3rd party kernel module.
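If your system uses DKMS, a quick way to see whether any add-on modules are
currently registered might look like this (a sketch that assumes the ``dkms``
tool is installed; the output line is just an example)::

    [user@something ~]$ dkms status
    nvidia/460.32.03, 5.10.5, x86_64: installed
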
Check 'taint' flag
------------------
*Check if your kernel was 'tainted' when the issue occurred, as the event
that made the kernel set this flag might be causing the issue you face.*
The kernel marks itself with a 'taint' flag when something happens that might
lead to follow-up errors that look totally unrelated. The issue you face might
be such an error if your kernel is tainted. That's why it's in your interest to
rule this out early before investing more time into this process. This is the
only reason why this step is here, as this process later will tell you to
install the latest mainline kernel; you will need to check the taint flag again
then, as that's when it matters because it's the kernel the report will focus
on.
On a running system it is easy to check if the kernel tainted itself: if ``cat
/proc/sys/kernel/tainted`` returns '0' then the kernel is not tainted and
everything is fine. Checking that file is impossible in some situations; that's
why the kernel also mentions the taint status when it reports an internal
problem (a 'kernel bug'), a recoverable error (a 'kernel Oops') or a
non-recoverable error before halting operation (a 'kernel panic'). Look near
the top of the error messages printed when one of these occurs and search for a
line starting with 'CPU:'. It should end with 'Not tainted' if the kernel was
not tainted when it noticed the problem; it was tainted if you see 'Tainted:'
followed by a few spaces and some letters.
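Checking the flag on a running system as just described might look like this;
the second command is merely a rough way to spot 'taints kernel' messages in
the log of a tainted system (the module name and timestamp are illustrative)::

    [user@something ~]$ cat /proc/sys/kernel/tainted
    0
    [user@something ~]$ sudo dmesg | grep -i 'taint'
    [    3.141592] foo: loading out-of-tree module taints kernel.
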
If your kernel is tainted, study Documentation/admin-guide/tainted-kernels.rst
to find out why. Try to eliminate the reason. Often it's caused by one of these
three things:
1. A recoverable error (a 'kernel Oops') occurred and the kernel tainted
itself, as the kernel knows it might misbehave in strange ways after that
point. In that case check your kernel or system log and look for a section
that starts with this::
Oops: 0000 [#1] SMP
That's the first Oops since boot-up, as the '#1' between the brackets shows.
Every Oops and any other problem that happens after that point might be a
follow-up problem to that first Oops, even if both look totally unrelated.
Rule this out by getting rid of the cause for the first Oops and reproducing
the issue afterwards. Sometimes simply restarting will be enough, sometimes
a change to the configuration followed by a reboot can eliminate the Oops.
But don't invest too much time into this at this point of the process, as
the cause for the Oops might already be fixed in the newer Linux kernel
version you are going to install later in this process.
2. Your system uses a software that installs its own kernel modules, for
example Nvidia's proprietary graphics driver or VirtualBox. The kernel
taints itself when it loads such a module from external sources (even if
they are Open Source): they sometimes cause errors in unrelated kernel
areas and thus might be causing the issue you face. You therefore have to
prevent those modules from loading when you want to report an issue to the
Linux kernel developers. Most of the time the easiest way to do that is:
temporarily uninstall such software including any modules they might have
installed. Afterwards reboot.
3. The kernel also taints itself when it's loading a module that resides in
the staging tree of the Linux kernel source. That's a special area for
code (mostly drivers) that does not yet fulfill the normal Linux kernel
quality standards. When you report an issue with such a module it's
obviously okay if the kernel is tainted; just make sure the module in
question is the only reason for the taint. If the issue happens in an
unrelated area reboot and temporarily block the module from being loaded
by specifying ``foo.blacklist=1`` as kernel parameter (replace 'foo' with
the name of the module in question).
Document how to reproduce issue
-------------------------------
*Write down coarsely how to reproduce the issue. If you deal with multiple
issues at once, create separate notes for each of them and make sure they
work independently on a freshly booted system. That's needed, as each issue
needs to get reported to the kernel developers separately, unless they are
strongly entangled.*
If you deal with multiple issues at once, you'll have to report each of them
separately, as they might be handled by different developers. Describing
various issues in one report also makes it quite difficult for others to tear
it apart. Hence, only combine issues in one report if they are very strongly
entangled.
Additionally, during the reporting process you will have to test if the issue
happens with other kernel versions. Therefore, it will make your work easier if
you know exactly how to reproduce an issue quickly on a freshly booted system.
Note: it's often fruitless to report issues that only happened once, as they
might be caused by a bit flip due to cosmic radiation. That's why you should
try to rule that out by reproducing the issue before going further. Feel free
to ignore this advice if you are experienced enough to tell a one-time error
due to faulty hardware apart from a kernel issue that rarely happens and thus
is hard to reproduce.
Regression in stable or longterm kernel?
----------------------------------------
*If you are facing a regression within a stable or longterm version line
(say something broke when updating from 5.10.4 to 5.10.5), scroll down to
'Dealing with regressions within a stable and longterm kernel line'.*
Regressions within a stable and longterm kernel version line are something the
Linux developers want to fix badly, as such issues are even more unwanted than
regressions in the main development branch, since they can quickly affect a lot
of people. The developers thus want to learn about such issues as quickly as
possible, hence there is a streamlined process to report them. Note,
regressions caused by switching to a newer kernel version line (say something
broke when switching from 5.9.15 to 5.10.5) do not qualify.
Check where you need to report your issue
-----------------------------------------
*Locate the driver or kernel subsystem that seems to be causing the issue.
Find out how and where its developers expect reports. Note: most of the
time this won't be bugzilla.kernel.org, as issues typically need to be sent
by mail to a maintainer and a public mailing list.*
It's crucial to send your report to the right people, as the Linux kernel is a
big project and most of its developers are only familiar with a small subset of
it. Quite a few programmers for example only care for just one driver, for
example one for a WiFi chip; its developer likely will only have little or no
knowledge about the internals of remote or unrelated "subsystems", like the TCP
stack, the PCIe/PCI subsystem, memory management or file systems.
Problem is: the Linux kernel lacks a central bug tracker where you can simply
file your issue and make it reach the developers that need to know about it.
That's why you have to find the right place and way to report issues yourself.
You can do that with the help of a script (see below), but it mainly targets
kernel developers and experts. For everybody else the MAINTAINERS file is the
better place.
How to read the MAINTAINERS file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To illustrate how to use the :ref:`MAINTAINERS <maintainers>` file, let's assume
the WiFi in your Laptop suddenly misbehaves after updating the kernel. In that
case it's likely an issue in the WiFi driver. Obviously it could also be some
code it builds upon, but unless you suspect something like that stick to the
driver. If it's really something else, the driver's developers will get the
right people involved.
Sadly, there is no universal and easy way to check which code is driving a
particular hardware component.
In case of a problem with the WiFi driver you for example might want to look at
the output of ``lspci -k``, as it lists devices on the PCI/PCIe bus and the
kernel module driving it::
[user@something ~]$ lspci -k
[...]
3a:00.0 Network controller: Qualcomm Atheros QCA6174 802.11ac Wireless Network Adapter (rev 32)
Subsystem: Bigfoot Networks, Inc. Device 1535
Kernel driver in use: ath10k_pci
Kernel modules: ath10k_pci
[...]
But this approach won't work if your WiFi chip is connected over USB or some
other internal bus. In those cases you might want to check your WiFi manager or
the output of ``ip link``. Look for the name of the problematic network
interface, which might be something like 'wlp58s0'. This name can be used like
this to find the module driving it::
[user@something ~]$ realpath --relative-to=/sys/module/ /sys/class/net/wlp58s0/device/driver/module
ath10k_pci
In case tricks like these don't bring you any further, try to search the
internet on how to narrow down the driver or subsystem in question. And if you
are unsure which it is: just try your best guess, somebody will help you if you
guessed poorly.
Once you know the driver or subsystem, you want to search for it in the
MAINTAINERS file. In the case of 'ath10k_pci' you won't find anything, as the
name is too specific. Sometimes you will need to search on the net for help;
but before doing so, try a somewhat shortened or modified name when searching the
MAINTAINERS file, as then you might find something like this::
QUALCOMM ATHEROS ATH10K WIRELESS DRIVER
Mail: A. Some Human <[email protected]>
Mailing list: [email protected]
Status: Supported
Web-page: https://wireless.wiki.kernel.org/en/users/Drivers/ath10k
SCM: git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/ath.git
Files: drivers/net/wireless/ath/ath10k/
Note: the line descriptions will be abbreviated if you read the plain
MAINTAINERS file found in the root of the Linux source tree. 'Mail:' for
example will be 'M:', 'Mailing list:' will be 'L:', and 'Status:' will be 'S:'.
A section near the top of the file explains these and other abbreviations.
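When working with that plain file, a quick case-insensitive grep for a
shortened name is often all it takes to find the relevant section; for the
example above, something like this might do (run from the root of a Linux
source tree)::

    [user@something ~]$ grep -i -A 8 'ath10k' MAINTAINERS
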
First look at the line 'Status'. Ideally it should be 'Supported' or
'Maintained'. If it states 'Obsolete' then you are using some outdated approach
that was replaced by a newer solution you need to switch to. Sometimes the code
only has someone who provides 'Odd Fixes' when feeling motivated. And with
'Orphan' you are totally out of luck, as nobody takes care of the code anymore.
That only leaves these options: arrange yourself to live with the issue, fix it
yourself, or find a programmer somewhere willing to fix it.
After checking the status, look for a line starting with 'bugs:': it will tell
you where to find a subsystem specific bug tracker to file your issue. The
example above does not have such a line. That is the case for most sections, as
Linux kernel development is completely driven by mail. Very few subsystems use
a bug tracker, and only some of those rely on bugzilla.kernel.org.
In this and many other cases you thus have to look for lines starting with
'Mail:' instead. Those mention the name and the email addresses for the
maintainers of the particular code. Also look for a line starting with 'Mailing
list:', which tells you the public mailing list where the code is developed.
Your report later needs to go by mail to those addresses. Additionally, for all
issue reports sent by email, make sure to add the Linux Kernel Mailing List
(LKML) <[email protected]> to CC. Don't omit either of the mailing
lists when sending your issue report by mail later! Maintainers are busy people
and might leave some work for other developers on the subsystem specific list;
and LKML is important to have one place where all issue reports can be found.
Finding the maintainers with the help of a script
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For people that have the Linux sources at hand there is a second option to find
the proper place to report: the script 'scripts/get_maintainer.pl' which tries
to find all people to contact. It queries the MAINTAINERS file and needs to be
called with a path to the source code in question. For drivers compiled as a
module it often can be found with a command like this::
$ modinfo ath10k_pci | grep filename | sed 's!/lib/modules/.*/kernel/!!; s!filename:!!; s!\.ko\(\|\.xz\)!!'
drivers/net/wireless/ath/ath10k/ath10k_pci.ko
Pass parts of this to the script::
$ ./scripts/get_maintainer.pl -f drivers/net/wireless/ath/ath10k*
Some Human <[email protected]> (supporter:QUALCOMM ATHEROS ATH10K WIRELESS DRIVER)
Another S. Human <[email protected]> (maintainer:NETWORKING DRIVERS)
[email protected] (open list:QUALCOMM ATHEROS ATH10K WIRELESS DRIVER)
[email protected] (open list:NETWORKING DRIVERS (WIRELESS))
[email protected] (open list:NETWORKING DRIVERS)
[email protected] (open list)
Don't send your report to all of them. Send it to the maintainers, which the
script calls "supporter:"; additionally CC the most specific mailing list for
the code as well as the Linux Kernel Mailing List (LKML). In this case you thus
would need to send the report to 'Some Human <[email protected]>' with
'[email protected]' and '[email protected]' in CC.
Note: in case you cloned the Linux sources with git you might want to call
``get_maintainer.pl`` a second time with ``--git``. The script then will look
at the commit history to find which people recently worked on the code in
question, as they might be able to help. But use these results with care, as it
can easily send you in a wrong direction. That for example happens quickly in
areas rarely changed (like old or unmaintained drivers): sometimes such code is
modified during tree-wide cleanups by developers that do not care about the
particular driver at all.
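Such a call might look like this (same example path as above)::

    $ ./scripts/get_maintainer.pl --git -f drivers/net/wireless/ath/ath10k*
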
Search for existing reports, second run
---------------------------------------
*Search the archives of the bug tracker or mailing list in question
thoroughly for reports that might match your issue. If you find anything,
join the discussion instead of sending a new report.*
As mentioned earlier already: reporting an issue that someone else already
brought forward is often a waste of time for everyone involved, especially you
as the reporter. That's why you should search for existing reports again, now
that you know where they need to be reported to. If it's a mailing list, you will
often find its archives on `lore.kernel.org <https://lore.kernel.org/>`_.
But some lists are hosted in different places. That for example is the case for
the ath10k WiFi driver used as example in the previous step. But you'll often
find the archives for these lists easily on the net. Searching for 'archive
[email protected]' for example will lead you to the `Info page for the
ath10k mailing list <https://lists.infradead.org/mailman/listinfo/ath10k>`_,
which at the top links to its
`list archives <https://lists.infradead.org/pipermail/ath10k/>`_. Sadly this and
quite a few other lists miss a way to search the archives. In those cases use a
regular internet search engine and add something like
'site:lists.infradead.org/pipermail/ath10k/' to your search terms, which limits
the results to the archives at that URL.
It's also wise to check the internet, LKML and maybe bugzilla.kernel.org again
at this point. If your report needs to be filed in a bug tracker, you may want
to check the mailing list archives for the subsystem as well, as someone might
have reported it only there.
For details how to search and what to do if you find matching reports see
"Search for existing reports, first run" above.
Do not hurry with this step of the reporting process: spending 30 to 60 minutes
or even more time can save you and others quite a lot of time and trouble.
Install a fresh kernel for testing
----------------------------------
*Unless you are already running the latest 'mainline' Linux kernel, better
go and install it for the reporting process. Testing and reporting with
the latest 'stable' Linux can be an acceptable alternative in some
situations; during the merge window that actually might be even the best
approach, but in that development phase it can be an even better idea to
suspend your efforts for a few days anyway. Whatever version you choose,
ideally use a 'vanilla' build. Ignoring this advice will dramatically
increase the risk that your report will be rejected or ignored.*
As mentioned in the detailed explanation for the first step already: Like most
programmers, Linux kernel developers don't like to spend time dealing with
reports for issues that don't even happen with the current code. It's just a
waste of everybody's time, especially yours. That's why it's in everybody's
interest that you confirm the issue still exists with the latest upstream code
before reporting it. You are free to ignore this advice, but as outlined
earlier: doing so dramatically increases the risk that your issue report might
get rejected or simply ignored.
In the scope of the kernel "latest upstream" normally means:
* Install a mainline kernel; the latest stable kernel can be an option, but
most of the time it is better avoided. Longterm kernels (sometimes called 'LTS
kernels') are unsuitable at this point of the process. The next subsection
explains all of this in more detail.
* The subsection after that describes ways to obtain and install such a kernel.
It also outlines why using a pre-compiled kernel is fine, but a vanilla build is
better, which means: it was built using Linux sources taken straight `from
kernel.org <https://kernel.org/>`_ and not modified or enhanced in any way.
Choosing the right version for testing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Head over to `kernel.org <https://kernel.org/>`_ to find out which version you
want to use for testing. Ignore the big yellow button that says 'Latest release'
and look a little lower at the table. At its top you'll see a line starting with
mainline, which most of the time will point to a pre-release with a version
number like '5.8-rc2'. If that's the case, you'll want to use this mainline
kernel for testing, as that's where all fixes have to be applied first. Do not let
that 'rc' scare you, these 'development kernels' are pretty reliable — and you
made a backup, as you were instructed above, didn't you?
In about two out of every nine to ten weeks, mainline might point you to a
proper release with a version number like '5.7'. If that happens, consider
suspending the reporting process until the first pre-release of the next
version (5.8-rc1) shows up on kernel.org. That's because the Linux development
cycle then is in its two-week long 'merge window'. The bulk of the changes and
all intrusive ones get merged for the next release during this time. It's a bit
more risky to use mainline during this period. Kernel developers are also often
quite busy then and might have no spare time to deal with issue reports. It's
also quite possible that one of the many changes applied during the merge
window fixes the issue you face; that's why you soon would have to retest with
a newer kernel version anyway, as outlined below in the section 'Duties after
the report went out'.
That's why it might make sense to wait till the merge window is over. But don't
do that if you're dealing with something that shouldn't wait. In that case
consider obtaining the latest mainline kernel via git (see below) or use the
latest stable version offered on kernel.org. Using that is also acceptable in
case mainline for some reason does not currently work for you. And in general:
using it for reproducing the issue is also better than not reporting the issue
at all.
Better avoid using the latest stable kernel outside merge windows, as all fixes
must be applied to mainline first. That's why checking the latest mainline
kernel is so important: any issue you want to see fixed in older version lines
needs to be fixed in mainline first before it can get backported, which can
take a few days or weeks. Another reason: the fix you hope for might be too
hard or risky for backporting; reporting the issue again hence is unlikely to
change anything.
These aspects are also why longterm kernels (sometimes called "LTS kernels")
are unsuitable for this part of the reporting process: they are too distant from
the current code. Hence go and test mainline first and follow the process
further: if the issue doesn't occur with mainline it will guide you how to get
it fixed in older version lines, if that's in the cards for the fix in question.
How to obtain a fresh Linux kernel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
**Using a pre-compiled kernel**: This is often the quickest, easiest, and safest
way for testing — especially if you are unfamiliar with the Linux kernel. The
problem: most of those shipped by distributors or add-on repositories are built
from modified Linux sources. They are thus not vanilla and therefore often
unsuitable for testing and issue reporting: the changes might cause the issue
you face or influence it somehow.
But you are in luck if you are using a popular Linux distribution: for quite a
few of them you'll find repositories on the net that contain packages with the
latest mainline or stable Linux built as vanilla kernel. It's totally okay to
use these, just make sure from the repository's description they are vanilla or
at least close to it. Additionally ensure the packages contain the latest
versions as offered on kernel.org. The packages are likely unsuitable if they
are older than a week, as new mainline and stable kernels typically get released
at least once a week.
Please note that you might need to build your own kernel manually later: that's
sometimes needed for debugging or testing fixes, as described later in this
document. Also be aware that pre-compiled kernels might lack debug symbols that
are needed to decode messages the kernel prints when a panic, Oops, warning, or
BUG occurs; if you plan to decode those, you might be better off compiling a
kernel yourself (see the end of this subsection and the section titled 'Decode
failure messages' for details).
**Using git**: Developers and experienced Linux users familiar with git are
often best served by obtaining the latest Linux kernel sources straight from the
`official development repository on kernel.org
<https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/>`_.
Those are likely a bit ahead of the latest mainline pre-release. Don't worry
about it: they are as reliable as a proper pre-release, unless the kernel's
development cycle is currently in the middle of a merge window. But even then
they are quite reliable.
**Conventional**: People unfamiliar with git are often best served by
downloading the sources as tarball from `kernel.org <https://kernel.org/>`_.
How to actually build a kernel is not described here, as many websites explain
the necessary steps already. If you are new to it, consider following one of
those how-to's that suggest to use ``make localmodconfig``, as that tries to
pick up the configuration of your current kernel and then tries to adjust it
somewhat for your system. That does not make the resulting kernel any better,
but quicker to compile.
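For orientation only, the conventional route those how-to's describe roughly
looks like this (a minimal sketch; details like boot loader handling and the
exact version differ per distribution)::

    $ tar xf linux-5.10.5.tar.xz && cd linux-5.10.5
    $ make localmodconfig        # start from the running kernel's configuration
    $ make -j $(nproc)
    $ sudo make modules_install install
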
Note: If you are dealing with a panic, Oops, warning, or BUG from the kernel,
please try to enable CONFIG_KALLSYMS when configuring your kernel.
Additionally, enable CONFIG_DEBUG_KERNEL and CONFIG_DEBUG_INFO, too; the
latter is the relevant one of those two, but can only be reached if you enable
the former. Be aware CONFIG_DEBUG_INFO increases the storage space required to
build a kernel by quite a bit. But that's worth it, as these options will allow
you later to pinpoint the exact line of code that triggers your issue. The
section 'Decode failure messages' below explains this in more detail.
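If you configure on the command line, the kernel's own helper script can
enable these options before building (a sketch; the exact set of debug info
options can differ a bit between kernel versions)::

    $ ./scripts/config -e KALLSYMS -e DEBUG_KERNEL -e DEBUG_INFO
    $ make olddefconfig          # let kconfig resolve any dependencies
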
But keep in mind: Always keep a record of the issue encountered in case it is
hard to reproduce. Sending an undecoded report is better than not reporting
the issue at all.
Check 'taint' flag
------------------
*Ensure the kernel you just installed does not 'taint' itself when
running.*
As outlined above in more detail already: the kernel sets a 'taint' flag when
something happens that can lead to follow-up errors that look totally
unrelated. That's why you need to check if the kernel you just installed does
not set this flag. And if it does, you in almost all cases need to
eliminate the reason for it before reporting issues that occur with it. See
the section above for details how to do that.
Reproduce issue with the fresh kernel
-------------------------------------
*Reproduce the issue with the kernel you just installed. If it doesn't show
up there, scroll down to the instructions for issues only happening with
stable and longterm kernels.*
Check if the issue occurs with the fresh Linux kernel version you just
installed. If it was fixed there already, consider sticking with this version
line and abandoning your plan to report the issue. But keep in mind that other
users might still be plagued by it, as long as it's not fixed in the stable
or longterm versions from kernel.org (and thus the vendor kernels derived from
those). If you prefer to use one of those or just want to help their users,
head over to the section "Details about reporting issues only occurring in
older kernel version lines" below.
Optimize description to reproduce issue
---------------------------------------
*Optimize your notes: try to find and write the most straightforward way to
reproduce your issue. Make sure the end result has all the important
details, and at the same time is easy to read and understand for others
that hear about it for the first time. And if you learned something in this
process, consider searching again for existing reports about the issue.*
An unnecessarily complex description will make it hard for others to understand your
report. Thus try to find a reproducer that's straightforward to describe and
thus easy to understand in written form. Include all important details, but at
the same time try to keep it as short as possible.
In this and the previous steps you likely have learned a thing or two about the
issue you face. Use this knowledge and search again for existing reports you
could join instead.
Decode failure messages
-----------------------
*If your failure involves a 'panic', 'Oops', 'warning', or 'BUG', consider
decoding the kernel log to find the line of code that triggered the error.*
When the kernel detects an internal problem, it will log some information about
the executed code. This makes it possible to pinpoint the exact line in the
source code that triggered the issue and shows how it was called. But that only
works if you enabled CONFIG_DEBUG_INFO and CONFIG_KALLSYMS when configuring
your kernel. If you did so, consider to decode the information from the
kernel's log. That will make it a lot easier to understand what led to the
'panic', 'Oops', 'warning', or 'BUG', which increases the chances that someone
can provide a fix.
Decoding can be done with a script you find in the Linux source tree. If you
are running a kernel you compiled yourself earlier, call it like this::
[user@something ~]$ sudo dmesg | ./linux-5.10.5/scripts/decode_stacktrace.sh ./linux-5.10.5/vmlinux
If you are running a packaged vanilla kernel, you will likely have to install
the corresponding packages with debug symbols. Then call the script (which you
might need to get from the Linux sources if your distro does not package it)
like this::
[user@something ~]$ sudo dmesg | ./linux-5.10.5/scripts/decode_stacktrace.sh \
/usr/lib/debug/lib/modules/5.10.10-4.1.x86_64/vmlinux /usr/src/kernels/5.10.10-4.1.x86_64/
The script will work on log lines like the following, which show the address of
the code the kernel was executing when the error occurred::
[ 68.387301] RIP: 0010:test_module_init+0x5/0xffa [test_module]
Once decoded, these lines will look like this::
[ 68.387301] RIP: 0010:test_module_init (/home/username/linux-5.10.5/test-module/test-module.c:16) test_module
In this case the executed code was built from the file
'~/linux-5.10.5/test-module/test-module.c' and the error was triggered by the
instructions found in line '16'.
The script will similarly decode the addresses mentioned in the section
starting with 'Call trace', which show the path to the function where the
problem occurred. Additionally, the script will show the assembler output for
the code section the kernel was executing.
Note, if you can't get this to work, simply skip this step and mention the
reason for it in the report. If you're lucky, it might not be needed. And if it
is, someone might help you to get things going. Also be aware this is just one
of several ways to decode kernel stack traces. Sometimes different steps will
be required to retrieve the relevant details. Don't worry about that, if that's
needed in your case, developers will tell you what to do.
Special care for regressions
----------------------------
*If your problem is a regression, try to narrow down when the issue was
introduced as much as possible.*
Linux lead developer Linus Torvalds insists that the Linux kernel never
worsens, that's why he deems regressions as unacceptable and wants to see them
fixed quickly. That's why changes that introduced a regression are often
promptly reverted if the issue they cause can't get solved quickly any other
way. Reporting a regression is thus a bit like playing a kind of trump card to
get something quickly fixed. But for that to happen the change that's causing
the regression needs to be known. Normally it's up to the reporter to track
down the culprit, as maintainers often won't have the time or setup at hand to
reproduce it themselves.
To find the change there is a process called 'bisection' which the document
Documentation/admin-guide/bug-bisect.rst describes in detail. That process
will often require you to build about ten to twenty kernel images, trying to
reproduce the issue with each of them before building the next. Yes, that takes
some time, but don't worry, it works a lot quicker than most people assume.
Thanks to a 'binary search' this will lead you to the one commit in the source
code management system that's causing the regression. Once you find it, search
the net for the subject of the change, its commit id and the shortened commit id
(the first 12 characters of the commit id). This will lead you to existing
reports about it, if there are any.
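For orientation, the outer loop of such a bisection roughly looks like this
(the version tags are just examples; Documentation/admin-guide/bug-bisect.rst
covers the details)::

    $ git bisect start
    $ git bisect good v5.7      # last version known to work
    $ git bisect bad v5.8       # first version known to be broken
    # now build, install and boot the kernel git checked out, test it, then run
    $ git bisect good           # or 'git bisect bad'; repeat until git names the culprit
    $ git bisect reset          # return to the original state once done
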
Note, a bisection needs a bit of know-how, which not everyone has, and quite a
bit of effort, which not everyone is willing to invest. Nevertheless, it's
highly recommended that you perform a bisection yourself. If you really can't or
don't want to go down that route at least find out which mainline kernel
introduced the regression. If something for example breaks when switching from
5.5.15 to 5.8.4, then try at least all the mainline releases in that area (5.6,
5.7 and 5.8) to check when it first showed up. Unless you're trying to find a
regression in a stable or longterm kernel, avoid testing versions whose number
has three sections (5.6.12, 5.7.8), as that makes the outcome hard to
interpret, which might render your testing useless. Once you have found the major
version which introduced the regression, feel free to move on in the reporting
process. But keep in mind: it depends on the issue at hand if the developers
will be able to help without knowing the culprit. Sometimes they might
recognize from the report what went wrong and can fix it; other times they will
be unable to help unless you perform a bisection.
When dealing with regressions make sure the issue you face is really caused by
the kernel and not by something else, as outlined above already.
In the whole process keep in mind: an issue only qualifies as regression if the
older and the newer kernel got built with a similar configuration. This can be
achieved by using ``make olddefconfig``, as explained in more detail by
Documentation/admin-guide/reporting-regressions.rst; that document also
provides a good deal of other information about regressions you might want to be
aware of.
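Reusing the older kernel's configuration for the newer build might look
roughly like this (paths and version numbers are just examples; many
distributions ship the running kernel's configuration in '/boot')::

    $ cp /boot/config-5.10.4 ~/linux-5.10.5/.config
    $ cd ~/linux-5.10.5 && make olddefconfig
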
Write and send the report
-------------------------
*Start to compile the report by writing a detailed description about the
issue. Always mention a few things: the latest kernel version you installed
for reproducing, the Linux Distribution used, and your notes on how to
reproduce the issue. Ideally, make the kernel's build configuration
(.config) and the output from ``dmesg`` available somewhere on the net and
link to it. Include or upload all other information that might be relevant,
like the output/screenshot of an Oops or the output from ``lspci``. Once
you wrote this main part, insert a normal length paragraph on top of it
outlining the issue and the impact quickly. On top of this add one sentence
that briefly describes the problem and gets people to read on. Now give the
thing a descriptive title or subject that yet again is shorter. Then you're
ready to send or file the report like the MAINTAINERS file told you, unless
you are dealing with one of those 'issues of high priority': they need
special care which is explained in 'Special handling for high priority
issues' below.*
Now that you have prepared everything it's time to write your report. How to do
that is partly explained by the three documents linked to in the preface above.
That's why this text will only mention a few of the essentials as well as
things specific to the Linux kernel.
There is one thing that fits both categories: the most crucial parts of your
report are the title/subject, the first sentence, and the first paragraph.
Developers often get quite a lot of mail. They thus often just take a few
seconds to skim a mail before deciding to move on or look closer. Thus: the
better the top section of your report, the higher are the chances that someone
will look into it and help you. And that is why you should ignore them for now
and write the detailed report first. ;-)
Things each report should mention
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Describe in detail how your issue happens with the fresh vanilla kernel you
installed. Try to include the step-by-step instructions you wrote and optimized
earlier that outline how you and ideally others can reproduce the issue; in
those rare cases where that's impossible try to describe what you did to
trigger it.
Also include all the relevant information others might need to understand the
issue and its environment. What's actually needed depends a lot on the issue,
but there are some things you should include always:
* the output from ``cat /proc/version``, which contains the Linux kernel
version number and the compiler it was built with.
* the Linux distribution the machine is running (``hostnamectl | grep
"Operating System"``)
* the architecture of the CPU and the operating system (``uname -mi``)
* if you are dealing with a regression and performed a bisection, mention the
subject and the commit-id of the change that is causing it.
In a lot of cases it's also wise to make two more things available to those
that read your report:
* the configuration used for building your Linux kernel (the '.config' file)
* the kernel's messages that you get from ``dmesg`` written to a file. Make
sure that it starts with a line like 'Linux version 5.8-1
([email protected]) (gcc (GCC) 10.2.1, GNU ld version 2.34) #1 SMP Mon Aug
3 14:54:37 UTC 2020'. If it's missing, then important messages from the first
boot phase already got discarded. In this case instead consider using
``journalctl -b 0 -k``; alternatively you can also reboot, reproduce the
issue and call ``dmesg`` right afterwards (see the sketch after this list).
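Capturing the two files might look like this (a minimal sketch; the paths are
just examples and the ``journalctl`` variant is the fallback mentioned above)::

    [user@something ~]$ cp ~/linux-5.10.5/.config config.txt
    [user@something ~]$ sudo dmesg > dmesg.txt
    [user@something ~]$ sudo journalctl -b 0 -k > dmesg.txt    # alternative, if dmesg lost the early messages
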
These two files are big, that's why it's a bad idea to put them directly into
your report. If you are filing the issue in a bug tracker then attach them to
the ticket. If you report the issue by mail do not attach them, as that makes
the mail too large; instead do one of these things:
* Upload the files somewhere public (your website, a public file paste
service, a ticket created just for this purpose on `bugzilla.kernel.org
<https://bugzilla.kernel.org/>`_, ...) and include a link to them in your
report. Ideally use something where the files stay available for years, as
they could be useful to someone many years from now; this for example can
happen if five or ten years from now a developer works on some code that was
changed just to fix your issue.
* Put the files aside and mention you will send them later in individual
replies to your own mail. Just remember to actually do that once the report
went out. ;-)
Things that might be wise to provide
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Depending on the issue you might need to add more background data. Here are a
few suggestions on what is often good to provide:
* If you are dealing with a 'warning', an 'OOPS' or a 'panic' from the kernel,
include it. If you can't copy'n'paste it, try to capture a netconsole trace
or at least take a picture of the screen.
* If the issue might be related to your computer hardware, mention what kind
of system you use. If you for example have problems with your graphics card,
mention its manufacturer, the card's model, and what chip it uses. If it's a
laptop mention its name, but try to make sure it's meaningful. 'Dell XPS 13'
for example is not, because it might be the one from 2012; that one does not
look that different from the one sold today, but apart from that the two have
nothing in common. Hence, in such cases add the exact model number, which
for example is '9380' or '7390' for XPS 13 models introduced during 2019.
Names like 'Lenovo Thinkpad T590' are also somewhat ambiguous: there are
variants of this laptop with and without a dedicated graphics chip, so try
to find the exact model name or specify the main components.
* Mention the relevant software in use. If you have problems with loading
modules, you want to mention the versions of kmod, systemd, and udev in use.
If one of the DRM drivers misbehaves, you want to state the versions of
libdrm and Mesa; also specify your Wayland compositor or the X-Server and
its driver. If you have a filesystem issue, mention the version of
corresponding filesystem utilities (e2fsprogs, btrfs-progs, xfsprogs, ...).
* Gather additional information from the kernel that might be of interest. The
output from ``lspci -nn`` will for example help others to identify what
hardware you use. If you have a problem with hardware you even might want to
make the output from ``sudo lspci -vvv`` available, as that provides
insights how the components were configured. For some issues it might be
good to include the contents of files like ``/proc/cpuinfo``,
``/proc/ioports``, ``/proc/iomem``, ``/proc/modules``, or
``/proc/scsi/scsi``. Some subsystems also offer tools to collect relevant
information. One such tool is ``alsa-info.sh`` `which the audio/sound
subsystem developers provide <https://www.alsa-project.org/wiki/AlsaInfo>`_.
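
If you want to collect some of this hardware related data upfront, commands
along these lines will produce files you can attach or upload later; this is
just a sketch, so only gather what seems relevant for your issue::

    lspci -nn > lspci.txt                  # overview of PCI/PCIe devices
    sudo lspci -vvv > lspci-verbose.txt    # detailed device configuration
    cat /proc/cpuinfo > cpuinfo.txt
    cat /proc/modules > modules.txt
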
Those examples should give you some ideas of what data might be wise to
attach, but you have to think yourself what will be helpful for others to know.
Don't worry too much about forgetting something, as developers will ask for
additional details they need. But making everything important available from
the start increases the chance someone will take a closer look.
The important part: the head of your report
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Now that you have the detailed part of the report prepared let's get to the
most important section: the first few sentences. Thus go to the top, add
something like 'The detailed description:' before the part you just wrote and
insert two newlines at the top. Now write one normal length paragraph that
describes the issue roughly. Leave out all boring details and focus on the
crucial parts readers need to know to understand what this is all about; if you
think this bug affects a lot of users, mention this to get people interested.
Once you have done that, insert two more lines at the top and write a one-sentence
summary that explains quickly what the report is about. After that you have to
get even more abstract and write an even shorter subject/title for the report.
Now that you have written this part take some time to optimize it, as it is the
most important part of your report: a lot of people will only read this before
they decide if reading the rest is time well spent.
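
Put together, the top of a report might thus roughly look like the following
example; everything in it is made up and only meant to illustrate the
structure just described::

    Subject: [Possible regression] USB headset silent after resume from suspend

    USB audio devices stop working after the machine resumes from suspend.

    Since updating the kernel about two weeks ago my USB headset produces no
    sound after a suspend/resume cycle; replugging it is the only workaround I
    found. Earlier kernels worked fine and every USB audio device I tried is
    affected, so other users are likely hit by this as well.

    The detailed description:
    [the optimized step-by-step instructions and all the other details you
    compiled earlier]
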
Now send or file the report like the :ref:`MAINTAINERS <maintainers>` file told
you, unless it's one of those 'issues of high priority' outlined earlier: in
that case please read the next subsection first before sending the report on
its way.
Special handling for high priority issues
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Reports for high priority issues need special handling.
**Severe issues**: make sure the subject or ticket title as well as the first
paragraph makes the severity obvious.
**Regressions**: make the report's subject start with '[REGRESSION]'.
In case you performed a successful bisection, use the title of the change that
introduced the regression as the second part of your subject. Make the report
also mention the commit id of the culprit. In case of an unsuccessful bisection,
make your report mention the latest tested version that's working fine (say 5.7)
and the oldest where the issue occurs (say 5.8-rc1).
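
A subject and the corresponding remark in the report's text might thus look
like this; the culprit's title and the commit id in this example are invented
for illustration::

    Subject: [REGRESSION] usb: xhci: reduce power consumption during autosuspend

    [...] A bisection showed the problem is caused by commit 1234567890ab
    ("usb: xhci: reduce power consumption during autosuspend").
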
When sending the report by mail, CC the Linux regressions mailing list
(regressions@lists.linux.dev). In case the report needs to be filed to some web
tracker, proceed to do so. Once filed, forward the report by mail to the
regressions list; CC the maintainer and the mailing list for the subsystem in
question. Make sure to inline the forwarded report, hence do not attach it.
Also add a short note at the top where you mention the URL to the ticket.
When mailing or forwarding the report, in case of a successful bisection add the
author of the culprit to the recipients; also CC everyone in the signed-off-by
chain, which you find at the end of its commit message.
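
If you are unsure who is part of that chain, the commit itself will tell you.
In a clone of the Linux sources something like this shows the author and the
tags at the end of the commit message (the commit id is a placeholder you need
to replace)::

    git show -s <commit-id>                             # author and full message
    git show -s <commit-id> | grep -i 'signed-off-by'   # just the sign-off chain
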
**Security issues**: for these issues you will have to evaluate if a
short-term risk to other users would arise if details were publicly disclosed.
If that's not the case simply proceed with reporting the issue as described.
For issues that bear such a risk you will need to adjust the reporting process
slightly:
* If the MAINTAINERS file instructed you to report the issue by mail, do not
CC any public mailing lists.
* If you were supposed to file the issue in a bug tracker make sure to mark
the ticket as 'private' or 'security issue'. If the bug tracker does not
offer a way to keep reports private, forget about it and send your report as
a private mail to the maintainers instead.
In both cases make sure to also mail your report to the addresses the
MAINTAINERS file lists in the section 'security contact'. Ideally directly CC
them when sending the report by mail. If you filed it in a bug tracker, forward
the report's text to these addresses; but on top of it put a small note
mentioning that you filed it, with a link to the ticket.
See Documentation/process/security-bugs.rst for more information.
Duties after the report went out
--------------------------------
*Wait for reactions and keep the thing rolling until you can accept the
outcome in one way or the other. Thus react publicly and in a timely manner
to any inquiries. Test proposed fixes. Do proactive testing: retest with at
least every first release candidate (RC) of a new mainline version and
report your results. Send friendly reminders if things stall. And try to
help yourself, if you don't get any help or if it's unsatisfying.*
If your report was good and you are really lucky then one of the developers
might immediately spot what's causing the issue; they then might write a patch
to fix it, test it, and send it straight for integration in mainline while
tagging it for later backport to stable and longterm kernels that need it. Then
all you need to do is reply with a 'Thank you very much' and switch to a version
with the fix once it gets released.
But this ideal scenario rarely happens. That's why the job is only starting
once you got the report out. What you'll have to do depends on the situation,
but often it will be the things listed below. But before digging into the
details, here are a few important things you need to keep in mind for this part
of the process.
General advice for further interactions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
**Always reply in public**: When you filed the issue in a bug tracker, always
reply there and do not contact any of the developers privately about it. For
mailed reports always use the 'Reply-all' function when replying to any mails
you receive. That includes mails with any additional data you might want to add
to your report: go to your mail application's 'Sent' folder and use 'reply-all'
on your mail with the report. This approach will make sure the public mailing
list(s) and everyone else that gets involved over time stays in the loop; it
also keeps the mail thread intact, which among others is really important for
mailing lists to group all related mails together.
There are just two situations where a comment in a bug tracker or a 'Reply-all'
is unsuitable:
* Someone tells you to send something privately.
* You were told to send something, but noticed it contains sensitive
information that needs to be kept private. In that case it's okay to send it
in private to the developer that asked for it. But note in the ticket or a
mail that you did that, so everyone else knows you honored the request.
**Do research before asking for clarifications or help**: In this part of the
process someone might tell you to do something that requires a skill you might
not have mastered yet. For example, you might be asked to use some test tools
you never have heard of yet; or you might be asked to apply a patch to the
Linux kernel sources to test if it helps. In some cases it will be fine sending
a reply asking for instructions how to do that. But before going that route try
to find the answer on your own by searching the internet; alternatively
consider asking in other places for advice. For example ask a friend, or post
about it to a chatroom or forum you normally hang out in.
**Be patient**: If you are really lucky you might get a reply to your report
within a few hours. But most of the time it will take longer, as maintainers
are scattered around the globe and thus might be in a different time zone – one
where they already enjoy their night away from the keyboard.
In general, kernel developers will take one to five business days to respond to
reports. Sometimes it will take longer, as they might be busy with the merge
windows, other work, visiting developer conferences, or simply enjoying a long
summer holiday.
The 'issues of high priority' (see above for an explanation) are an exception
here: maintainers should address them as soon as possible; that's why you
should wait a week at maximum (or just two days if it's something urgent)
before sending a friendly reminder.
Sometimes the maintainer might not be responding in a timely manner; other
times there might be disagreements, for example about whether an issue qualifies
as a regression or not. In such cases raise your concerns on the mailing list and
ask others in public or private replies for advice on how to move on. If that fails, it
might be appropriate to get a higher authority involved. In case of a WiFi
driver that would be the wireless maintainers; if there are no higher level
maintainers or all else fails, it might be one of those rare situations where
it's okay to get Linus Torvalds involved.
**Proactive testing**: Every time the first pre-release (the 'rc1') of a new
mainline kernel version gets released, go and check if the issue is fixed there
or if anything of importance changed. Mention the outcome in the ticket or in a
mail you send as a reply to your report (make sure it CCs all those that
participated in the discussion up to that point). This will show your
commitment and that you are willing to help. It also tells developers if the
issue persists and makes sure they do not forget about it. A few other
occasional retests (for example with rc3, rc5 and the final) are also a good
idea, but only report your results if something relevant changed or if you are
writing something anyway.
With all these general things off the table let's get into the details of how
to help to get issues resolved once they were reported.
Inquiries and testing requests
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Here are your duties in case you got replies to your report:
**Check who you deal with**: Most of the time it will be the maintainer or a
developer of the particular code area that will respond to your report. But as
issues are normally reported in public it could be anyone that's replying —
including people that want to help, but in the end might guide you totally off
track with their questions or requests. That rarely happens, but it's one of
many reasons why it's wise to quickly run an internet search to see who you're
interacting with. By doing this you will also notice whether your report was heard by
the right people, as a reminder to the maintainer (see below) might be in order
later if discussion fades out without leading to a satisfying solution for the
issue.
**Inquiries for data**: Often you will be asked to test something or provide
additional details. Try to provide the requested information soon, as you have
the attention of someone that might help and you risk losing it the longer you
wait; that outcome is even likely if you do not provide the information within
a few business days.
**Requests for testing**: When you are asked to test a diagnostic patch or a
possible fix, try to test it in a timely manner, too. But do it properly and make
sure to not rush it: mixing things up can happen easily and can lead to a lot
of confusion for everyone involved. A common mistake for example is thinking a
proposed patch with a fix was applied, but in fact wasn't. Things like that
happen even to experienced testers occasionally, but most of the time they will
notice when the kernel with the fix behaves just like one without it.
What to do when nothing of substance happens
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Some reports will not get any reaction from the responsible Linux kernel
developers; or a discussion around the issue evolved, but faded out with
nothing of substance coming out of it.
In these cases wait two (better: three) weeks before sending a friendly
reminder: maybe the maintainer was just away from keyboard for a while when
your report arrived or had something more important to take care of. When
writing the reminder, kindly ask if anything else from your side is needed to
get the ball rolling somehow. If the report got out by mail, do that in the
first lines of a mail that is a reply to your initial mail (see above) which
includes a full quote of the original report below: that's one of those few
situations where such a 'TOFU' (Text Over, Fullquote Under) is the right
approach, as then all the recipients will have the details at hand immediately
in the proper order.
After the reminder wait three more weeks for replies. If you still don't get a
proper reaction, you first should reconsider your approach. Did you maybe try
to reach out to the wrong people? Was the report maybe offensive or so
confusing that people decided to completely stay away from it? The best way to
rule out such factors: show the report to one or two people familiar with FLOSS
issue reporting and ask for their opinion. Also ask them for advice on how
to move forward. That might mean: prepare a better report and have those people
review it before you send it out. Such an approach is totally fine; just
mention that this is the second and improved report on the issue and include a
link to the first report.
If the report was proper you can send a second reminder; in it ask for advice
why the report did not get any replies. A good moment for this second reminder
mail is shortly after the first pre-release (the 'rc1') of a new Linux kernel
version got published, as you should retest and provide a status update at that
point anyway (see above).
If the second reminder again results in no reaction within a week, try to
contact a higher-level maintainer asking for advice: even busy maintainers by
then should at least have sent some kind of acknowledgment.
Remember to prepare yourself for a disappointment: maintainers ideally should
react somehow to every issue report, but they are only obliged to fix those
'issues of high priority' outlined earlier. So don't be too devastated if you
get a reply along the lines of 'thanks for the report, I have more important
issues to deal with currently and won't have time to look into this for the
foreseeable future'.
It's also possible that after some discussion in the bug tracker or on a list
nothing happens anymore and reminders don't help to motivate anyone to work out
a fix. Such situations can be devastating, but that's in the cards when it
comes to Linux kernel development. This and several other reasons for not
getting help are explained in 'Why some issues won't get any reaction or remain
unfixed after being reported' near the end of this document.
Don't get devastated if you don't find any help or if the issue in the end does
not get solved: the Linux kernel is FLOSS and thus you can still help yourself.
You for example could try to find others that are affected and team up with
them to get the issue resolved. Such a team could prepare a fresh report
together that mentions how many you are and why this is something that in your
opinion should get fixed. Maybe together you can also narrow down the root cause
or the change that introduced a regression, which often makes developing a fix
easier. And with a bit of luck there might be someone in the team that knows a
bit about programming and might be able to write a fix.
Reference for "Reporting regressions within a stable and longterm kernel line"
------------------------------------------------------------------------------
This subsection provides details for the steps you need to perform if you face
a regression within a stable and longterm kernel line.
Make sure the particular version line still gets support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Check if the kernel developers still maintain the Linux kernel version
line you care about: go to the front page of kernel.org and make sure it
mentions the latest release of the particular version line without an
'[EOL]' tag.*
Most kernel version lines only get supported for about three months, as
maintaining them longer is quite a lot of work. Hence, only one per year is
chosen and gets supported for at least two years (often six). That's why you
need to check if the kernel developers still support the version line you care
for.
Note, if kernel.org lists two stable version lines on the front page, you
should consider switching to the newer one and forget about the older one:
support for it is likely to be abandoned soon. Then it will get an 'end-of-life'
(EOL) stamp. Version lines that reached that point still get mentioned on the
kernel.org front page for a week or two, but are unsuitable for testing and
reporting.
Search stable mailing list
~~~~~~~~~~~~~~~~~~~~~~~~~~
*Check the archives of the Linux stable mailing list for existing reports.*
Maybe the issue you face is already known and was fixed or is about to be. Hence,
`search the archives of the Linux stable mailing list
<https://lore.kernel.org/stable/>`_ for reports about an issue like yours. If
you find any matches, consider joining the discussion, unless the fix is
already finished and scheduled to get applied soon.
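
The archive can also be queried straight from the address bar of your browser;
the search terms in this example are just placeholders::

    https://lore.kernel.org/stable/?q=amdgpu+resume+5.10
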
Reproduce issue with the newest release
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Install the latest release from the particular version line as a vanilla
kernel. Ensure this kernel is not tainted and still shows the problem, as
the issue might have already been fixed there. If you first noticed the
problem with a vendor kernel, check a vanilla build of the last version
known to work performs fine as well.*
Before investing any more time in this process you want to check if the issue
was already fixed in the latest release of the version line you're interested in.
This kernel needs to be vanilla and shouldn't be tainted before the issue
happens, as outlined in detail above in the section "Install a fresh
kernel for testing".
Did you first notice the regression with a vendor kernel? Then changes the
vendor applied might be interfering. You need to rule that out by performing
a recheck. Say something broke when you updated from 5.10.4-vendor.42 to
5.10.5-vendor.43. Then after testing the latest 5.10 release as outlined in
the previous paragraph check if a vanilla build of Linux 5.10.4 works fine as
well. If things are broken there, the issue does not qualify as upstream
regression and you need to switch back to the main step-by-step guide to report
the issue.
Report the regression
~~~~~~~~~~~~~~~~~~~~~
*Send a short problem report to the Linux stable mailing list
(stable@vger.kernel.org) and CC the Linux regressions mailing list
(regressions@lists.linux.dev); if you suspect the cause in a particular
subsystem, CC its maintainer and its mailing list. Roughly describe the
issue and ideally explain how to reproduce it. Mention the first version
that shows the problem and the last version that's working fine. Then
wait for further instructions.*
When reporting a regression that happens within a stable or longterm kernel
line (say when updating from 5.10.4 to 5.10.5) a brief report is enough for
a start to get the issue reported quickly. Hence a rough description sent to the
stable and regressions mailing lists is all it takes; but in case you suspect
the cause in a particular subsystem, CC its maintainers and its mailing list
as well, because that will speed things up.
And note, it helps developers a great deal if you can specify the exact version
that introduced the problem. Hence if possible within a reasonable time frame,
try to find that version using vanilla kernels. Let's assume something broke when
your distributor released an update from Linux kernel 5.10.5 to 5.10.8. Then as
instructed above go and check the latest kernel from that version line, say
5.10.9. If it shows the problem, try a vanilla 5.10.5 to ensure that no patches
the distributor applied interfere. If the issue doesn't manifest itself there,
try 5.10.7 and then (depending on the outcome) 5.10.8 or 5.10.6 to find the
first version where things broke. Mention it in the report and state that 5.10.9
is still broken.
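
In case you wonder how to get hold of those versions: the releases of a version
line are tagged in the stable Git tree, hence checking out the releases in
between boils down to something like the following sketch; build, install and
boot each of these kernels the way you normally do::

    git clone --branch linux-5.10.y \
        https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
    cd linux
    git checkout v5.10.7    # later v5.10.6 or v5.10.8, depending on the outcome
    # now build and install this kernel with your usual method and test it
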
What the previous paragraph outlines is basically a rough manual 'bisection'.
Once your report is out you might get asked to do a proper one, as it allows
developers to pinpoint the exact change that causes the issue (which then can
easily get reverted to fix the issue quickly). Hence consider doing a proper bisection
right away if time permits. See the section 'Special care for regressions' and
the document Documentation/admin-guide/bug-bisect.rst for details how to
perform one. In case of a successful bisection add the author of the culprit to
the recipients; also CC everyone in the signed-off-by chain, which you find at
the end of its commit message.
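
Within a stable series such a bisection is driven by commands roughly like
these; between the individual steps you have to build, install and test the
kernel Git checks out for you::

    cd linux                   # the clone of the stable tree used above
    git bisect start
    git bisect good v5.10.5    # newest version known to work
    git bisect bad v5.10.8     # oldest version known to be broken
    # build, install and test the checked out state, then run either
    # 'git bisect good' or 'git bisect bad'; repeat until Git names the
    # first bad commit
    git bisect reset           # once done, return to the original state
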
Reference for "Reporting issues only occurring in older kernel version lines"
-----------------------------------------------------------------------------
This section provides details for the steps you need to take if you could not
reproduce your issue with a mainline kernel, but want to see it fixed in older
version lines (aka stable and longterm kernels).
Some fixes are too complex
~~~~~~~~~~~~~~~~~~~~~~~~~~
*Prepare yourself for the possibility that going through the next few steps
might not get the issue solved in older releases: the fix might be too big
or risky to get backported there.*
Even small and seemingly obvious code-changes sometimes introduce new and
totally unexpected problems. The maintainers of the stable and longterm kernels
are very aware of that and thus only apply changes to these kernels that are
within the rules outlined in Documentation/process/stable-kernel-rules.rst.
Complex or risky changes for example do not qualify and thus only get applied
to mainline. Other fixes are easy to get backported to the newest stable and
longterm kernels, but too risky to integrate into older ones. So be aware the
fix you are hoping for might be one of those that won't be backported to the
version line you care about. In that case you'll have no other choice than to
live with the issue or switch to a newer Linux version, unless you want to
patch the fix into your kernels yourself.
Common preparations
~~~~~~~~~~~~~~~~~~~
*Perform the first three steps in the section "Reporting regressions within
a stable and longterm kernel line" above.*
You need to carry out a few steps already described in another section of this
guide. Those steps will let you:
* Check if the kernel developers still maintain the Linux kernel version line
you care about.
* Search the Linux stable mailing list for existing reports.
* Check with the latest release.
Check code history and search for existing discussions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Search the Linux kernel version control system for the change that fixed
the issue in mainline, as its commit message might tell you if the fix is
scheduled for backporting already. If you don't find anything that way,
search the appropriate mailing lists for posts that discuss such an issue
or peer-review possible fixes; then check the discussions if the fix was
deemed unsuitable for backporting. If backporting was not considered at
all, join the newest discussion, asking if it's in the cards.*
In a lot of cases the issue you deal with will have happened with mainline, but
got fixed there. The commit that fixed it would need to get backported as well
to get the issue solved. That's why you want to search for it or any
discussions about it.
* First try to find the fix in the Git repository that holds the Linux kernel
sources. You can do this with the web interfaces `on kernel.org
<https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/>`_
or its mirror `on GitHub <https://github.com/torvalds/linux>`_; if you have
a local clone you alternatively can search on the command line with ``git
log --grep=<pattern>``.
If you find the fix, check if the commit message near the end contains a
'stable tag' that looks like this::

    Cc: <stable@vger.kernel.org> # 5.4+

If that's the case the developer marked the fix as safe for backporting to
version line 5.4 and later. Most of the time it gets applied there within two
weeks, but sometimes it takes a bit longer.
* If the commit doesn't tell you anything or if you can't find the fix, look
again for discussions about the issue. Search the net with your favorite
internet search engine as well as the archives for the `Linux kernel
developers mailing list <https://lore.kernel.org/lkml/>`_. Also read the
section `Locate kernel area that causes the issue` above and follow the
instructions to find the subsystem in question: its bug tracker or mailing
list archive might have the answer you are looking for.
* If you see a proposed fix, search for it in the version control system as
outlined above, as the commit might tell you if a backport can be expected.
* Check the discussions for any indicators the fix might be too risky to get
backported to the version line you care about. If that's the case you have
to live with the issue or switch to the kernel version line where the fix
got applied.
* If the fix doesn't contain a stable tag and backporting was not discussed,
join the discussion: mention the version where you face the issue and that
you would like to see it fixed, if suitable.
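
For the first point of this list a search on the command line in a clone of the
mainline sources could look like the following; the search terms, the version
range, and the commit id are placeholders you need to adapt::

    # search everything that entered mainline after v5.10 for commits whose
    # message mentions both terms
    git log --oneline --grep='xhci' --grep='resume' --all-match v5.10..origin/master

    # once you found a candidate, check its message for a stable tag
    git log -1 <commit-id> | grep -i 'stable@'
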
Ask for advice
~~~~~~~~~~~~~~
*One of the former steps should lead to a solution. If that doesn't work
out, ask the maintainers for the subsystem that seems to be causing the
issue for advice; CC the mailing list for the particular subsystem as well
as the stable mailing list.*
If the previous three steps didn't get you closer to a solution there is only
one option left: ask for advice. Do that in a mail you send to the maintainers
for the subsystem where the issue seems to have its roots; CC the mailing list
for the subsystem as well as the stable mailing list (stable@vger.kernel.org).
Why some issues won't get any reaction or remain unfixed after being reported
=============================================================================
When reporting a problem to the Linux developers, be aware only 'issues of high
priority' (regressions, security issues, severe problems) are definitely going
to get resolved. The maintainers or if all else fails Linus Torvalds himself
will make sure of that. They and the other kernel developers will fix a lot of
other issues as well. But be aware that sometimes they can't or won't help; and
sometimes there isn't even anyone to send a report to.
This is best explained with kernel developers that contribute to the Linux
kernel in their spare time. Quite a few of the drivers in the kernel were
written by such programmers, often because they simply wanted to make their
hardware usable on their favorite operating system.
These programmers most of the time will happily fix problems other people
report. But nobody can force them to do so, as they are contributing voluntarily.
Then there are situations where such developers really want to fix an issue,
but can't: sometimes they lack hardware programming documentation to do so.
This often happens when the publicly available docs are superficial or the
driver was written with the help of reverse engineering.
Sooner or later spare time developers will also stop caring for the driver.
Maybe their test hardware broke, got replaced by something more fancy, or is so
old that it's something you don't find much outside of computer museums
anymore. Sometimes a developer stops caring for their code and Linux altogether,
as something else in their life became way more important. In some cases
nobody is willing to take over the job as maintainer – and nobody can be forced
to, as contributing to the Linux kernel is done on a voluntary basis. Abandoned
drivers nevertheless remain in the kernel: they are still useful for people and
removing them would be a regression.
The situation is not that different with developers that are paid for their
work on the Linux kernel. Those contribute most changes these days. But their
employers sooner or later also stop caring for their code or make its
programmer focus on other things. Hardware vendors for example earn their money
mainly by selling new hardware; quite a few of them hence are not investing
much time and energy in maintaining a Linux kernel driver for something they
stopped selling years ago. Enterprise Linux distributors often care for a
longer time period, but in new versions often leave support for old and rare
hardware aside to limit the scope. Often spare time contributors take over once
a company orphans some code, but as mentioned above: sooner or later they will
leave the code behind, too.
Priorities are another reason why some issues are not fixed: maintainers quite
often are forced to set them, as the time available to work on Linux is limited.
That's true for spare time as well as the time employers grant their developers to
spend on maintenance work on the upstream kernel. Sometimes maintainers also
get overwhelmed with reports, even if a driver is working nearly perfectly. To
not get completely stuck, the programmer thus might have no other choice than
to prioritize issue reports and reject some of them.
But don't worry too much about all of this: a lot of drivers have active
maintainers who are quite interested in fixing as many issues as possible.
Closing words
=============
Compared with other Free/Libre & Open Source Software it's hard to report
issues to the Linux kernel developers: the length and complexity of this
document and the implications between the lines illustrate that. But that's how
it is for now. The main author of this text hopes documenting the state of the
art will lay some groundwork to improve the situation over time.
..
end-of-content
..
This document is maintained by Thorsten Leemhuis <linux@leemhuis.info>. If
you spot a typo or small mistake, feel free to let him know directly and
he'll fix it. You are free to do the same in a mostly informal way if you
want to contribute changes to the text, but for copyright reasons please CC
linux-doc@vger.kernel.org and "sign-off" your contribution as
Documentation/process/submitting-patches.rst outlines in the section "Sign
your work - the Developer's Certificate of Origin".
..
This text is available under GPL-2.0+ or CC-BY-4.0, as stated at the top
of the file. If you want to distribute this text under CC-BY-4.0 only,
please use "The Linux kernel developers" for author attribution and link
this as source:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/plain/Documentation/admin-guide/reporting-issues.rst
..
Note: Only the content of this RST file as found in the Linux kernel sources
is available under CC-BY-4.0, as versions of this text that were processed
(for example by the kernel's build system) might contain content taken from
files which use a more restrictive license. | linux | SPDX License Identifier GPL 2 0 OR CC BY 4 0 See the bottom of this file for additional redistribution information Reporting issues The short guide aka TL DR Are you facing a regression with vanilla kernels from the same stable or longterm series One still supported Then search the LKML https lore kernel org lkml and the Linux stable mailing list https lore kernel org stable archives for matching reports to join If you don t find any install the latest release from that series https kernel org If it still shows the issue report it to the stable mailing list stable vger kernel org and CC the regressions list regressions lists linux dev ideally also CC the maintainer and the mailing list for the subsystem in question In all other cases try your best guess which kernel part might be causing the issue Check the ref MAINTAINERS maintainers file for how its developers expect to be told about problems which most of the time will be by email with a mailing list in CC Check the destination s archives for matching reports search the LKML https lore kernel org lkml and the web too If you don t find any to join install the latest mainline kernel https kernel org If the issue is present there send a report The issue was fixed there but you would like to see it resolved in a still supported stable or longterm series as well Then install its latest release If it shows the problem search for the change that fixed it in mainline and check if backporting is in the works or was discarded if it s neither ask those who handled the change for it General remarks When installing and testing a kernel as outlined above ensure it s vanilla IOW not patched and not using add on modules Also make sure it s built and running in a healthy environment and not already tainted before the issue occurs If you are facing multiple issues with the Linux kernel at once report each separately While writing your report include all information relevant to the issue like the kernel and the distro used In case of a regression CC the regressions mailing list regressions lists linux dev to your report Also try to pin point the culprit with a bisection if you succeed include its commit id and CC everyone in the sign off by chain Once the report is out answer any questions that come up and help where you can That includes keeping the ball rolling by occasionally retesting with newer releases and sending a status update afterwards Step by step guide how to report issues to the kernel maintainers The above TL DR outlines roughly how to report issues to the Linux kernel developers It might be all that s needed for people already familiar with reporting issues to Free Libre Open Source Software FLOSS projects For everyone else there is this section It is more detailed and uses a step by step approach It still tries to be brief for readability and leaves out a lot of details those are described below the step by step guide in a reference section which explains each of the steps in more detail Note this section covers a few more aspects than the TL DR and does things in a slightly different order That s in your interest to make sure you notice early if an issue that looks like a Linux kernel problem is actually caused by something else These steps thus help to ensure the time you invest in this process won t feel wasted in the end Are you facing an issue with a Linux kernel a hardware or software vendor provided Then in almost all cases you are better off to stop reading this document and 
reporting the issue to your vendor instead unless you are willing to install the latest Linux version yourself Be aware the latter will often be needed anyway to hunt down and fix issues Perform a rough search for existing reports with your favorite internet search engine additionally check the archives of the Linux Kernel Mailing List LKML https lore kernel org lkml If you find matching reports join the discussion instead of sending a new one See if the issue you are dealing with qualifies as regression security issue or a really severe problem those are issues of high priority that need special handling in some steps that are about to follow Make sure it s not the kernel s surroundings that are causing the issue you face Create a fresh backup and put system repair and restore tools at hand Ensure your system does not enhance its kernels by building additional kernel modules on the fly which solutions like DKMS might be doing locally without your knowledge Check if your kernel was tainted when the issue occurred as the event that made the kernel set this flag might be causing the issue you face Write down coarsely how to reproduce the issue If you deal with multiple issues at once create separate notes for each of them and make sure they work independently on a freshly booted system That s needed as each issue needs to get reported to the kernel developers separately unless they are strongly entangled If you are facing a regression within a stable or longterm version line say something broke when updating from 5 10 4 to 5 10 5 scroll down to Dealing with regressions within a stable and longterm kernel line Locate the driver or kernel subsystem that seems to be causing the issue Find out how and where its developers expect reports Note most of the time this won t be bugzilla kernel org as issues typically need to be sent by mail to a maintainer and a public mailing list Search the archives of the bug tracker or mailing list in question thoroughly for reports that might match your issue If you find anything join the discussion instead of sending a new report After these preparations you ll now enter the main part Unless you are already running the latest mainline Linux kernel better go and install it for the reporting process Testing and reporting with the latest stable Linux can be an acceptable alternative in some situations during the merge window that actually might be even the best approach but in that development phase it can be an even better idea to suspend your efforts for a few days anyway Whatever version you choose ideally use a vanilla build Ignoring these advices will dramatically increase the risk your report will be rejected or ignored Ensure the kernel you just installed does not taint itself when running Reproduce the issue with the kernel you just installed If it doesn t show up there scroll down to the instructions for issues only happening with stable and longterm kernels Optimize your notes try to find and write the most straightforward way to reproduce your issue Make sure the end result has all the important details and at the same time is easy to read and understand for others that hear about it for the first time And if you learned something in this process consider searching again for existing reports about the issue If your failure involves a panic Oops warning or BUG consider decoding the kernel log to find the line of code that triggered the error If your problem is a regression try to narrow down when the issue was introduced as much as possible Start to 
compile the report by writing a detailed description about the issue Always mention a few things the latest kernel version you installed for reproducing the Linux Distribution used and your notes on how to reproduce the issue Ideally make the kernel s build configuration config and the output from dmesg available somewhere on the net and link to it Include or upload all other information that might be relevant like the output screenshot of an Oops or the output from lspci Once you wrote this main part insert a normal length paragraph on top of it outlining the issue and the impact quickly On top of this add one sentence that briefly describes the problem and gets people to read on Now give the thing a descriptive title or subject that yet again is shorter Then you re ready to send or file the report like the MAINTAINERS file told you unless you are dealing with one of those issues of high priority they need special care which is explained in Special handling for high priority issues below Wait for reactions and keep the thing rolling until you can accept the outcome in one way or the other Thus react publicly and in a timely manner to any inquiries Test proposed fixes Do proactive testing retest with at least every first release candidate RC of a new mainline version and report your results Send friendly reminders if things stall And try to help yourself if you don t get any help or if it s unsatisfying Reporting regressions within a stable and longterm kernel line This subsection is for you if you followed above process and got sent here at the point about regression within a stable or longterm kernel version line You face one of those if something breaks when updating from 5 10 4 to 5 10 5 a switch from 5 9 15 to 5 10 5 does not qualify The developers want to fix such regressions as quickly as possible hence there is a streamlined process to report them Check if the kernel developers still maintain the Linux kernel version line you care about go to the front page of kernel org https kernel org and make sure it mentions the latest release of the particular version line without an EOL tag Check the archives of the Linux stable mailing list https lore kernel org stable for existing reports Install the latest release from the particular version line as a vanilla kernel Ensure this kernel is not tainted and still shows the problem as the issue might have already been fixed there If you first noticed the problem with a vendor kernel check a vanilla build of the last version known to work performs fine as well Send a short problem report to the Linux stable mailing list stable vger kernel org and CC the Linux regressions mailing list regressions lists linux dev if you suspect the cause in a particular subsystem CC its maintainer and its mailing list Roughly describe the issue and ideally explain how to reproduce it Mention the first version that shows the problem and the last version that s working fine Then wait for further instructions The reference section below explains each of these steps in more detail Reporting issues only occurring in older kernel version lines This subsection is for you if you tried the latest mainline kernel as outlined above but failed to reproduce your issue there at the same time you want to see the issue fixed in a still supported stable or longterm series or vendor kernels regularly rebased on those If that the case follow these steps Prepare yourself for the possibility that going through the next few steps might not get the issue solved in older releases the 
fix might be too big or risky to get backported there Perform the first three steps in the section Dealing with regressions within a stable and longterm kernel line above Search the Linux kernel version control system for the change that fixed the issue in mainline as its commit message might tell you if the fix is scheduled for backporting already If you don t find anything that way search the appropriate mailing lists for posts that discuss such an issue or peer review possible fixes then check the discussions if the fix was deemed unsuitable for backporting If backporting was not considered at all join the newest discussion asking if it s in the cards One of the former steps should lead to a solution If that doesn t work out ask the maintainers for the subsystem that seems to be causing the issue for advice CC the mailing list for the particular subsystem as well as the stable mailing list The reference section below explains each of these steps in more detail Reference section Reporting issues to the kernel maintainers The detailed guides above outline all the major steps in brief fashion which should be enough for most people But sometimes there are situations where even experienced users might wonder how to actually do one of those steps That s what this section is for as it will provide a lot more details on each of the above steps Consider this as reference documentation it s possible to read it from top to bottom But it s mainly meant to skim over and a place to look up details how to actually perform those steps A few words of general advice before digging into the details The Linux kernel developers are well aware this process is complicated and demands more than other FLOSS projects We d love to make it simpler But that would require work in various places as well as some infrastructure which would need constant maintenance nobody has stepped up to do that work so that s just how things are for now A warranty or support contract with some vendor doesn t entitle you to request fixes from developers in the upstream Linux kernel community such contracts are completely outside the scope of the Linux kernel its development community and this document That s why you can t demand anything such a contract guarantees in this context not even if the developer handling the issue works for the vendor in question If you want to claim your rights use the vendor s support channel instead When doing so you might want to mention you d like to see the issue fixed in the upstream Linux kernel motivate them by saying it s the only way to ensure the fix in the end will get incorporated in all Linux distributions If you never reported an issue to a FLOSS project before you should consider reading How to Report Bugs Effectively https www chiark greenend org uk sgtatham bugs html How To Ask Questions The Smart Way http www catb org esr faqs smart questions html and How to ask good questions https jvns ca blog good questions With that off the table find below the details on how to properly report issues to the Linux kernel developers Make sure you re using the upstream Linux kernel Are you facing an issue with a Linux kernel a hardware or software vendor provided Then in almost all cases you are better off to stop reading this document and reporting the issue to your vendor instead unless you are willing to install the latest Linux version yourself Be aware the latter will often be needed anyway to hunt down and fix issues Like most programmers Linux kernel developers don t like to spend time dealing 
with reports for issues that don t even happen with their current code It s just a waste everybody s time especially yours Unfortunately such situations easily happen when it comes to the kernel and often leads to frustration on both sides That s because almost all Linux based kernels pre installed on devices Computers Laptops Smartphones Routers and most shipped by Linux distributors are quite distant from the official Linux kernel as distributed by kernel org these kernels from these vendors are often ancient from the point of Linux development or heavily modified often both Most of these vendor kernels are quite unsuitable for reporting issues to the Linux kernel developers an issue you face with one of them might have been fixed by the Linux kernel developers months or years ago already additionally the modifications and enhancements by the vendor might be causing the issue you face even if they look small or totally unrelated That s why you should report issues with these kernels to the vendor Its developers should look into the report and in case it turns out to be an upstream issue fix it directly upstream or forward the report there In practice that often does not work out or might not what you want You thus might want to consider circumventing the vendor by installing the very latest Linux kernel core yourself If that s an option for you move ahead in this process as a later step in this guide will explain how to do that once it rules out other potential causes for your issue Note the previous paragraph is starting with the word most as sometimes developers in fact are willing to handle reports about issues occurring with vendor kernels If they do in the end highly depends on the developers and the issue in question Your chances are quite good if the distributor applied only small modifications to a kernel based on a recent Linux version that for example often holds true for the mainline kernels shipped by Debian GNU Linux Sid or Fedora Rawhide Some developers will also accept reports about issues with kernels from distributions shipping the latest stable kernel as long as its only slightly modified that for example is often the case for Arch Linux regular Fedora releases and openSUSE Tumbleweed But keep in mind you better want to use a mainline Linux and avoid using a stable kernel for this process as outlined in the section Install a fresh kernel for testing in more detail Obviously you are free to ignore all this advice and report problems with an old or heavily modified vendor kernel to the upstream Linux developers But note those often get rejected or ignored so consider yourself warned But it s still better than not reporting the issue at all sometimes such reports directly or indirectly will help to get the issue fixed over time Search for existing reports first run Perform a rough search for existing reports with your favorite internet search engine additionally check the archives of the Linux Kernel Mailing List LKML If you find matching reports join the discussion instead of sending a new one Reporting an issue that someone else already brought forward is often a waste of time for everyone involved especially you as the reporter So it s in your own interest to thoroughly check if somebody reported the issue already At this step of the process it s okay to just perform a rough search a later step will tell you to perform a more detailed search once you know where your issue needs to be reported to Nevertheless do not hurry with this step of the reporting process it can 
save you time and trouble Simply search the internet with your favorite search engine first Afterwards search the Linux Kernel Mailing List LKML archives https lore kernel org lkml If you get flooded with results consider telling your search engine to limit search timeframe to the past month or year And wherever you search make sure to use good search terms vary them a few times too While doing so try to look at the issue from the perspective of someone else that will help you to come up with other words to use as search terms Also make sure not to use too many search terms at once Remember to search with and without information like the name of the kernel driver or the name of the affected hardware component But its exact brand name say ASUS Red Devil Radeon RX 5700 XT Gaming OC often is not much helpful as it is too specific Instead try search terms like the model line Radeon 5700 or Radeon 5000 and the code name of the main chip Navi or Navi10 with and without its manufacturer AMD In case you find an existing report about your issue join the discussion as you might be able to provide valuable additional information That can be important even when a fix is prepared or in its final stages already as developers might look for people that can provide additional information or test a proposed fix Jump to the section Duties after the report went out for details on how to get properly involved Note searching bugzilla kernel org https bugzilla kernel org might also be a good idea as that might provide valuable insights or turn up matching reports If you find the latter just keep in mind most subsystems expect reports in different places as described below in the section Check where you need to report your issue The developers that should take care of the issue thus might not even be aware of the bugzilla ticket Hence check the ticket if the issue already got reported as outlined in this document and if not consider doing so Issue of high priority See if the issue you are dealing with qualifies as regression security issue or a really severe problem those are issues of high priority that need special handling in some steps that are about to follow Linus Torvalds and the leading Linux kernel developers want to see some issues fixed as soon as possible hence there are issues of high priority that get handled slightly differently in the reporting process Three type of cases qualify regressions security issues and really severe problems You deal with a regression if some application or practical use case running fine with one Linux kernel works worse or not at all with a newer version compiled using a similar configuration The document Documentation admin guide reporting regressions rst explains this in more detail It also provides a good deal of other information about regressions you might want to be aware of it for example explains how to add your issue to the list of tracked regressions to ensure it won t fall through the cracks What qualifies as security issue is left to your judgment Consider reading Documentation process security bugs rst before proceeding as it provides additional details how to best handle security issues An issue is a really severe problem when something totally unacceptably bad happens That s for example the case when a Linux kernel corrupts the data it s handling or damages hardware it s running on You re also dealing with a severe issue when the kernel suddenly stops working with an error message kernel panic or without any farewell note at all Note do not confuse a 
panic a fatal error where the kernel stop itself with a Oops a recoverable error as the kernel remains running after the latter Ensure a healthy environment Make sure it s not the kernel s surroundings that are causing the issue you face Problems that look a lot like a kernel issue are sometimes caused by build or runtime environment It s hard to rule out that problem completely but you should minimize it Use proven tools when building your kernel as bugs in the compiler or the binutils can cause the resulting kernel to misbehave Ensure your computer components run within their design specifications that s especially important for the main processor the main memory and the motherboard Therefore stop undervolting or overclocking when facing a potential kernel issue Try to make sure it s not faulty hardware that is causing your issue Bad main memory for example can result in a multitude of issues that will manifest itself in problems looking like kernel issues If you re dealing with a filesystem issue you might want to check the file system in question with fsck as it might be damaged in a way that leads to unexpected kernel behavior When dealing with a regression make sure it s not something else that changed in parallel to updating the kernel The problem for example might be caused by other software that was updated at the same time It can also happen that a hardware component coincidentally just broke when you rebooted into a new kernel for the first time Updating the systems BIOS or changing something in the BIOS Setup can also lead to problems that on look a lot like a kernel regression Prepare for emergencies Create a fresh backup and put system repair and restore tools at hand Reminder you are dealing with computers which sometimes do unexpected things especially if you fiddle with crucial parts like the kernel of its operating system That s what you are about to do in this process Thus make sure to create a fresh backup also ensure you have all tools at hand to repair or reinstall the operating system as well as everything you need to restore the backup Make sure your kernel doesn t get enhanced Ensure your system does not enhance its kernels by building additional kernel modules on the fly which solutions like DKMS might be doing locally without your knowledge The risk your issue report gets ignored or rejected dramatically increases if your kernel gets enhanced in any way That s why you should remove or disable mechanisms like akmods and DKMS those build add on kernel modules automatically for example when you install a new Linux kernel or boot it for the first time Also remove any modules they might have installed Then reboot before proceeding Note you might not be aware that your system is using one of these solutions they often get set up silently when you install Nvidia s proprietary graphics driver VirtualBox or other software that requires a some support from a module not part of the Linux kernel That why your might need to uninstall the packages with such software to get rid of any 3rd party kernel module Check taint flag Check if your kernel was tainted when the issue occurred as the event that made the kernel set this flag might be causing the issue you face The kernel marks itself with a taint flag when something happens that might lead to follow up errors that look totally unrelated The issue you face might be such an error if your kernel is tainted That s why it s in your interest to rule this out early before investing more time into this process This is the only 
reason why this step is here, as this process later will tell you to install the latest mainline kernel: you will need to check the taint flag again then, as that's when it matters, because it's the kernel the report will focus on.

On a running system it is easy to check if the kernel tainted itself: if 'cat /proc/sys/kernel/tainted' returns '0', then the kernel is not tainted and everything is fine (a short sketch with the relevant commands follows a little further below). Checking that file is impossible in some situations; that's why the kernel also mentions the taint status when it reports an internal problem (a 'kernel bug'), a recoverable error (a 'kernel Oops'), or a non-recoverable error before halting operation (a 'kernel panic'). Look near the top of the error messages printed when one of these occurs and search for a line starting with 'CPU:'. It should end with 'Not tainted' if the kernel was not tainted when it noticed the problem; it was tainted if you see 'Tainted:' followed by a few spaces and some letters.

If your kernel is tainted, study Documentation/admin-guide/tainted-kernels.rst to find out why. Try to eliminate the reason. Often it's caused by one of these three things:

 1. A recoverable error (a 'kernel Oops') occurred and the kernel tainted itself, as the kernel knows it might misbehave in strange ways after that point. In that case, check your kernel or system log and look for a section that starts with this:

        Oops: 0000 [#1] SMP

    That's the first Oops since boot-up, as the '1' between the brackets shows. Every Oops and any other problem that happens after that point might be a follow-up problem to that first Oops, even if both look totally unrelated. Rule this out by getting rid of the cause for the first Oops and reproducing the issue afterwards. Sometimes simply restarting will be enough, sometimes a change to the configuration followed by a reboot can eliminate the Oops. But don't invest too much time into this at this point of the process, as the cause for the Oops might already be fixed in the newer Linux kernel version you are going to install later in this process.

 2. Your system uses software that installs its own kernel modules, for example Nvidia's proprietary graphics driver or VirtualBox. The kernel taints itself when it loads such modules from external sources (even if they are Open Source): they sometimes cause errors in unrelated kernel areas and thus might be causing the issue you face. You therefore have to prevent those modules from loading when you want to report an issue to the Linux kernel developers. Most of the time the easiest way to do that is to temporarily uninstall such software, including any modules they might have installed. Afterwards reboot.

 3. The kernel also taints itself when it's loading a module that resides in the staging tree of the Linux kernel source. That's a special area for code (mostly drivers) that does not yet fulfill the normal Linux kernel quality standards. When you report an issue with such a module it's obviously okay if the kernel is tainted; just make sure the module in question is the only reason for the taint. If the issue happens in an unrelated area, reboot and temporarily block the module from being loaded by specifying 'foo.blacklist=1' as kernel parameter (replace 'foo' with the name of the module in question).

Document how to reproduce issue

Write down coarsely how to reproduce the issue. If you deal with multiple issues at once, create separate notes for each of them, and make sure they work independently on a freshly booted system. That's needed, as each issue needs to get reported to the kernel developers separately, unless they are strongly entangled.
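Before moving on, here is the small sketch promised in the previous step: a quick way to check, for example on each freshly booted test system, whether the kernel has tainted itself and why. It is only an illustration; it assumes you have the Linux sources checked out at ~/linux, where the helper script tools/debugging/kernel-chktaint ships with recent kernels.

    # returns 0 if the kernel never tainted itself since boot
    cat /proc/sys/kernel/tainted
    # decode a non-zero value; the path to the kernel sources is an assumption
    sh ~/linux/tools/debugging/kernel-chktaint $(cat /proc/sys/kernel/tainted)

If you deal with multiple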
issues at once you ll have to report each of them separately as they might be handled by different developers Describing various issues in one report also makes it quite difficult for others to tear it apart Hence only combine issues in one report if they are very strongly entangled Additionally during the reporting process you will have to test if the issue happens with other kernel versions Therefore it will make your work easier if you know exactly how to reproduce an issue quickly on a freshly booted system Note it s often fruitless to report issues that only happened once as they might be caused by a bit flip due to cosmic radiation That s why you should try to rule that out by reproducing the issue before going further Feel free to ignore this advice if you are experienced enough to tell a one time error due to faulty hardware apart from a kernel issue that rarely happens and thus is hard to reproduce Regression in stable or longterm kernel If you are facing a regression within a stable or longterm version line say something broke when updating from 5 10 4 to 5 10 5 scroll down to Dealing with regressions within a stable and longterm kernel line Regression within a stable and longterm kernel version line are something the Linux developers want to fix badly as such issues are even more unwanted than regression in the main development branch as they can quickly affect a lot of people The developers thus want to learn about such issues as quickly as possible hence there is a streamlined process to report them Note regressions with newer kernel version line say something broke when switching from 5 9 15 to 5 10 5 do not qualify Check where you need to report your issue Locate the driver or kernel subsystem that seems to be causing the issue Find out how and where its developers expect reports Note most of the time this won t be bugzilla kernel org as issues typically need to be sent by mail to a maintainer and a public mailing list It s crucial to send your report to the right people as the Linux kernel is a big project and most of its developers are only familiar with a small subset of it Quite a few programmers for example only care for just one driver for example one for a WiFi chip its developer likely will only have small or no knowledge about the internals of remote or unrelated subsystems like the TCP stack the PCIe PCI subsystem memory management or file systems Problem is the Linux kernel lacks a central bug tracker where you can simply file your issue and make it reach the developers that need to know about it That s why you have to find the right place and way to report issues yourself You can do that with the help of a script see below but it mainly targets kernel developers and experts For everybody else the MAINTAINERS file is the better place How to read the MAINTAINERS file To illustrate how to use the ref MAINTAINERS maintainers file lets assume the WiFi in your Laptop suddenly misbehaves after updating the kernel In that case it s likely an issue in the WiFi driver Obviously it could also be some code it builds upon but unless you suspect something like that stick to the driver If it s really something else the driver s developers will get the right people involved Sadly there is no way to check which code is driving a particular hardware component that is both universal and easy In case of a problem with the WiFi driver you for example might want to look at the output of lspci k as it lists devices on the PCI PCIe bus and the kernel module driving it user something 
    lspci -k
    3a:00.0 Network controller: Qualcomm Atheros QCA6174 802.11ac Wireless Network Adapter (rev 32)
      Subsystem: Bigfoot Networks, Inc. Device 1535
      Kernel driver in use: ath10k_pci
      Kernel modules: ath10k_pci

But this approach won't work if your WiFi chip is connected over USB or some other internal bus. In those cases you might want to check your WiFi manager or the output of 'ip link'. Look for the name of the problematic network interface, which might be something like 'wlp58s0'. This name can be used like this to find the module driving it:

    realpath --relative-to=/sys/module/ /sys/class/net/wlp58s0/device/driver/module
    ath10k_pci

In case tricks like these don't bring you any further, try to search the internet on how to narrow down the driver or subsystem in question. And if you are unsure which it is: just try your best guess, somebody will help you if you guessed poorly.

Once you know the driver or subsystem, you want to search for it in the MAINTAINERS file. In the case of 'ath10k_pci' you won't find anything, as the name is too specific. Sometimes you will need to search on the net for help; but before doing so, try a somewhat shortened or modified name when searching the MAINTAINERS file, as then you might find something like this:

    QUALCOMM ATHEROS ATH10K WIRELESS DRIVER
    Mail:          A. Some Human <shuman@example.com>
    Mailing list:  ath10k@lists.infradead.org
    Status:        Supported
    Web-page:      https://wireless.wiki.kernel.org/en/users/Drivers/ath10k
    SCM:           git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/ath.git
    Files:         drivers/net/wireless/ath/ath10k/

Note: the line descriptions will be abbreviated if you read the plain MAINTAINERS file found in the root of the Linux source tree. 'Mail:' for example will be 'M:', 'Mailing list:' will be 'L:', and 'Status:' will be 'S:'. A section near the top of the file explains these and other abbreviations.

First look at the line 'Status'. Ideally it should be 'Supported' or 'Maintained'. If it states 'Obsolete', then you are using some outdated approach that was replaced by a newer solution you need to switch to. Sometimes the code only has someone who provides 'Odd Fixes' when feeling motivated. And with 'Orphan' you are totally out of luck, as nobody takes care of the code anymore. That only leaves these options: arrange yourself to live with the issue, fix it yourself, or find a programmer somewhere willing to fix it.

After checking the status, look for a line starting with 'bugs:': it will tell you where to find a subsystem-specific bug tracker to file your issue. The example above does not have such a line. That is the case for most sections, as Linux kernel development is completely driven by mail. Very few subsystems use a bug tracker, and only some of those rely on bugzilla.kernel.org.

In this and many other cases you thus have to look for lines starting with 'Mail:' instead. Those mention the name and the email addresses for the maintainers of the particular code. Also look for a line starting with 'Mailing list:', which tells you the public mailing list where the code is developed. Your report later needs to go by mail to those addresses. Additionally, for all issue reports sent by email, make sure to add the Linux Kernel Mailing List (LKML) <linux-kernel@vger.kernel.org> to CC. Don't omit either of the mailing lists when sending your issue report by mail later! Maintainers are busy people and might leave some work for other developers on the subsystem-specific list; and LKML is important to have one place where all issue reports can be found.
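If you prefer the command line over scrolling through the file, a plain grep over MAINTAINERS is usually enough to locate such a section. This is just a sketch: the search term 'ath10k' is the example used above and the number of context lines is arbitrary, so adjust both to your case.

    # run from the root of a Linux source tree; 'ath10k' is only the example
    # driver name from above, substitute the term you determined yourself
    grep -i -A 12 'ath10k' MAINTAINERS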
Finding the maintainers with the help of a script

For people that have the Linux sources at hand there is a second option to find the proper place to report: the script scripts/get_maintainer.pl, which tries to find all people to contact. It queries the MAINTAINERS file and needs to be called with a path to the source code in question. For drivers compiled as a module, that path often can be found with a command like this:

    modinfo ath10k_pci | grep filename | sed 's!/lib/modules/.*/kernel/!!; s!filename: *!!; s!\.ko.*$!!'
    drivers/net/wireless/ath/ath10k/ath10k_pci

Pass parts of this to the script:

    ./scripts/get_maintainer.pl -f drivers/net/wireless/ath/ath10k
    Some Human <shuman@example.com> (supporter:QUALCOMM ATHEROS ATH10K WIRELESS DRIVER)
    Another S. Human <asomehuman@example.com> (maintainer:NETWORKING DRIVERS)
    ath10k@lists.infradead.org (open list:QUALCOMM ATHEROS ATH10K WIRELESS DRIVER)
    linux-wireless@vger.kernel.org (open list:NETWORKING DRIVERS (WIRELESS))
    netdev@vger.kernel.org (open list:NETWORKING DRIVERS)
    linux-kernel@vger.kernel.org (open list)

Don't send your report to all of them. Send it to the maintainers, which the script calls 'supporter:'; additionally CC the most specific mailing list for the code as well as the Linux Kernel Mailing List (LKML). In this case you thus would need to send the report to 'Some Human <shuman@example.com>' with 'ath10k@lists.infradead.org' and 'linux-kernel@vger.kernel.org' in CC.

Note: in case you cloned the Linux sources with git, you might want to call get_maintainer.pl a second time with '--git'. The script then will look at the commit history to find which people recently worked on the code in question, as they might be able to help. But use these results with care, as they can easily send you in a wrong direction. That for example happens quickly in areas rarely changed (like old or unmaintained drivers): sometimes such code is modified during tree-wide cleanups by developers that do not care about the particular driver at all.

Search for existing reports (second run)

Search the archives of the bug tracker or mailing list in question thoroughly for reports that might match your issue. If you find anything, join the discussion instead of sending a new report.

As mentioned earlier already: reporting an issue that someone else already brought forward is often a waste of time for everyone involved, especially you as the reporter. That's why you should search for existing reports again, now that you know where they need to be reported to. If it's a mailing list, you will often find its archives on lore.kernel.org (https://lore.kernel.org/). But some lists are hosted in different places; that for example is the case for the ath10k WiFi driver used as example in the previous step. But you'll often find the archives for these lists easily on the net. Searching for 'archive ath10k@lists.infradead.org' for example will lead you to the info page for the ath10k mailing list (https://lists.infradead.org/mailman/listinfo/ath10k), which at the top links to its list archives (https://lists.infradead.org/pipermail/ath10k/). Sadly, this and quite a few other lists miss a way to search the archives. In those cases use a regular internet search engine and add something like 'site:lists.infradead.org/pipermail/ath10k' to your search terms, which limits the results to the archives at that URL.

It's also wise to check the internet, LKML, and maybe bugzilla.kernel.org again at this point. If your report needs to be filed in a bug tracker, you may want to check the mailing list archives for the subsystem as well, as someone might have reported it only there.

For details how to search and what to do if you find matching reports, see 'Search for existing reports (first run)' above.
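For archives hosted on lore.kernel.org you can also query them directly; this is only an illustration, with placeholder search terms borrowed from the ath10k example above:

    # lore.kernel.org supports search queries embedded in the URL; adjust the terms
    https://lore.kernel.org/all/?q=ath10k+firmware+crash

Do not hurry with this step of the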
reporting process spending 30 to 60 minutes or even more time can save you and others quite a lot of time and trouble Install a fresh kernel for testing Unless you are already running the latest mainline Linux kernel better go and install it for the reporting process Testing and reporting with the latest stable Linux can be an acceptable alternative in some situations during the merge window that actually might be even the best approach but in that development phase it can be an even better idea to suspend your efforts for a few days anyway Whatever version you choose ideally use a vanilla built Ignoring these advices will dramatically increase the risk your report will be rejected or ignored As mentioned in the detailed explanation for the first step already Like most programmers Linux kernel developers don t like to spend time dealing with reports for issues that don t even happen with the current code It s just a waste everybody s time especially yours That s why it s in everybody s interest that you confirm the issue still exists with the latest upstream code before reporting it You are free to ignore this advice but as outlined earlier doing so dramatically increases the risk that your issue report might get rejected or simply ignored In the scope of the kernel latest upstream normally means Install a mainline kernel the latest stable kernel can be an option but most of the time is better avoided Longterm kernels sometimes called LTS kernels are unsuitable at this point of the process The next subsection explains all of this in more detail The over next subsection describes way to obtain and install such a kernel It also outlines that using a pre compiled kernel are fine but better are vanilla which means it was built using Linux sources taken straight from kernel org https kernel org and not modified or enhanced in any way Choosing the right version for testing Head over to kernel org https kernel org to find out which version you want to use for testing Ignore the big yellow button that says Latest release and look a little lower at the table At its top you ll see a line starting with mainline which most of the time will point to a pre release with a version number like 5 8 rc2 If that s the case you ll want to use this mainline kernel for testing as that where all fixes have to be applied first Do not let that rc scare you these development kernels are pretty reliable and you made a backup as you were instructed above didn t you In about two out of every nine to ten weeks mainline might point you to a proper release with a version number like 5 7 If that happens consider suspending the reporting process until the first pre release of the next version 5 8 rc1 shows up on kernel org That s because the Linux development cycle then is in its two week long merge window The bulk of the changes and all intrusive ones get merged for the next release during this time It s a bit more risky to use mainline during this period Kernel developers are also often quite busy then and might have no spare time to deal with issue reports It s also quite possible that one of the many changes applied during the merge window fixes the issue you face that s why you soon would have to retest with a newer kernel version anyway as outlined below in the section Duties after the report went out That s why it might make sense to wait till the merge window is over But don t to that if you re dealing with something that shouldn t wait In that case consider obtaining the latest mainline kernel via git see below or 
use the latest stable version offered on kernel org Using that is also acceptable in case mainline for some reason does currently not work for you An in general using it for reproducing the issue is also better than not reporting it issue at all Better avoid using the latest stable kernel outside merge windows as all fixes must be applied to mainline first That s why checking the latest mainline kernel is so important any issue you want to see fixed in older version lines needs to be fixed in mainline first before it can get backported which can take a few days or weeks Another reason the fix you hope for might be too hard or risky for backporting reporting the issue again hence is unlikely to change anything These aspects are also why longterm kernels sometimes called LTS kernels are unsuitable for this part of the reporting process they are to distant from the current code Hence go and test mainline first and follow the process further if the issue doesn t occur with mainline it will guide you how to get it fixed in older version lines if that s in the cards for the fix in question How to obtain a fresh Linux kernel Using a pre compiled kernel This is often the quickest easiest and safest way for testing especially is you are unfamiliar with the Linux kernel The problem most of those shipped by distributors or add on repositories are build from modified Linux sources They are thus not vanilla and therefore often unsuitable for testing and issue reporting the changes might cause the issue you face or influence it somehow But you are in luck if you are using a popular Linux distribution for quite a few of them you ll find repositories on the net that contain packages with the latest mainline or stable Linux built as vanilla kernel It s totally okay to use these just make sure from the repository s description they are vanilla or at least close to it Additionally ensure the packages contain the latest versions as offered on kernel org The packages are likely unsuitable if they are older than a week as new mainline and stable kernels typically get released at least once a week Please note that you might need to build your own kernel manually later that s sometimes needed for debugging or testing fixes as described later in this document Also be aware that pre compiled kernels might lack debug symbols that are needed to decode messages the kernel prints when a panic Oops warning or BUG occurs if you plan to decode those you might be better off compiling a kernel yourself see the end of this subsection and the section titled Decode failure messages for details Using git Developers and experienced Linux users familiar with git are often best served by obtaining the latest Linux kernel sources straight from the official development repository on kernel org https git kernel org pub scm linux kernel git torvalds linux git tree Those are likely a bit ahead of the latest mainline pre release Don t worry about it they are as reliable as a proper pre release unless the kernel s development cycle is currently in the middle of a merge window But even then they are quite reliable Conventional People unfamiliar with git are often best served by downloading the sources as tarball from kernel org https kernel org How to actually build a kernel is not described here as many websites explain the necessary steps already If you are new to it consider following one of those how to s that suggest to use make localmodconfig as that tries to pick up the configuration of your current kernel and then tries to adjust 
it somewhat for your system That does not make the resulting kernel any better but quicker to compile Note If you are dealing with a panic Oops warning or BUG from the kernel please try to enable CONFIG KALLSYMS when configuring your kernel Additionally enable CONFIG DEBUG KERNEL and CONFIG DEBUG INFO too the latter is the relevant one of those two but can only be reached if you enable the former Be aware CONFIG DEBUG INFO increases the storage space required to build a kernel by quite a bit But that s worth it as these options will allow you later to pinpoint the exact line of code that triggers your issue The section Decode failure messages below explains this in more detail But keep in mind Always keep a record of the issue encountered in case it is hard to reproduce Sending an undecoded report is better than not reporting the issue at all Check taint flag Ensure the kernel you just installed does not taint itself when running As outlined above in more detail already the kernel sets a taint flag when something happens that can lead to follow up errors that look totally unrelated That s why you need to check if the kernel you just installed does not set this flag And if it does you in almost all the cases needs to eliminate the reason for it before you reporting issues that occur with it See the section above for details how to do that Reproduce issue with the fresh kernel Reproduce the issue with the kernel you just installed If it doesn t show up there scroll down to the instructions for issues only happening with stable and longterm kernels Check if the issue occurs with the fresh Linux kernel version you just installed If it was fixed there already consider sticking with this version line and abandoning your plan to report the issue But keep in mind that other users might still be plagued by it as long as it s not fixed in either stable and longterm version from kernel org and thus vendor kernels derived from those If you prefer to use one of those or just want to help their users head over to the section Details about reporting issues only occurring in older kernel version lines below Optimize description to reproduce issue Optimize your notes try to find and write the most straightforward way to reproduce your issue Make sure the end result has all the important details and at the same time is easy to read and understand for others that hear about it for the first time And if you learned something in this process consider searching again for existing reports about the issue An unnecessarily complex report will make it hard for others to understand your report Thus try to find a reproducer that s straight forward to describe and thus easy to understand in written form Include all important details but at the same time try to keep it as short as possible In this in the previous steps you likely have learned a thing or two about the issue you face Use this knowledge and search again for existing reports instead you can join Decode failure messages If your failure involves a panic Oops warning or BUG consider decoding the kernel log to find the line of code that triggered the error When the kernel detects an internal problem it will log some information about the executed code This makes it possible to pinpoint the exact line in the source code that triggered the issue and shows how it was called But that only works if you enabled CONFIG DEBUG INFO and CONFIG KALLSYMS when configuring your kernel If you did so consider to decode the information from the kernel s log That will make it a 
lot easier to understand what led to the panic, Oops, warning, or BUG, which increases the chances that someone can provide a fix.

Decoding can be done with a script you find in the Linux source tree. If you are running a kernel you compiled yourself earlier, call it like this:

    sudo dmesg | ./linux-5.10.5/scripts/decode_stacktrace.sh ./linux-5.10.5/vmlinux

If you are running a packaged vanilla kernel, you will likely have to install the corresponding packages with debug symbols. Then call the script (which you might need to get from the Linux sources if your distro does not package it) like this:

    sudo dmesg | ./linux-5.10.5/scripts/decode_stacktrace.sh \
      /usr/lib/debug/lib/modules/5.10.10-4.1.x86_64/vmlinux /usr/src/kernels/5.10.10-4.1.x86_64/

The script will work on log lines like the following, which show the address of the code the kernel was executing when the error occurred:

    [   68.387301] RIP: 0010:test_module_init+0x5/0xffa [test_module]

Once decoded, these lines will look like this:

    [   68.387301] RIP: 0010:test_module_init (/home/username/linux-5.10.5/test-module/test-module.c:16) test_module

In this case the executed code was built from the file '~/linux-5.10.5/test-module/test-module.c', and the error was caused by the instructions found in line 16.

The script will similarly decode the addresses mentioned in the section starting with 'Call trace', which show the path to the function where the problem occurred. Additionally, the script will show the assembler output for the code section the kernel was executing.

Note: if you can't get this to work, simply skip this step and mention the reason for it in the report. If you're lucky, it might not be needed; and if it is, someone might help you to get things going. Also be aware this is just one of several ways to decode kernel stack traces. Sometimes different steps will be required to retrieve the relevant details. Don't worry about that: if that's needed in your case, developers will tell you what to do.

Special care for regressions

If your problem is a regression, try to narrow down when the issue was introduced as much as possible.

Linux lead developer Linus Torvalds insists that the Linux kernel never worsens; that's why he deems regressions as unacceptable and wants to see them fixed quickly. That's also why changes that introduced a regression are often promptly reverted if the issue they cause can't get solved quickly any other way. Reporting a regression is thus a bit like playing a kind of trump card to get something quickly fixed. But for that to happen, the change that's causing the regression needs to be known. Normally it's up to the reporter to track down the culprit, as maintainers often won't have the time or setup at hand to reproduce it themselves.

To find the change there is a process called bisection, which the document Documentation/admin-guide/bug-bisect.rst describes in detail. That process will often require you to build about ten to twenty kernel images, trying to reproduce the issue with each of them before building the next. Yes, that takes some time, but don't worry, it works a lot quicker than most people assume. Thanks to a binary search, this will lead you to the one commit in the source code management system that's causing the regression. Once you find it, search the net for the subject of the change, its commit-id, and the shortened commit-id (the first 12 characters of the commit-id). This will lead you to existing reports about it, if there are any.
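The git commands driving such a bisection are compact. The following is only a rough sketch, assuming v5.7 is the newest release known to be good and v5.8 the oldest known to be bad, run in a clone of the mainline repository; Documentation/admin-guide/bug-bisect.rst remains the authoritative description.

    # start a bisection between a known-good and a known-bad release;
    # v5.7 and v5.8 are placeholders for the versions you determined yourself
    git bisect start
    git bisect good v5.7
    git bisect bad v5.8
    # now build, install, and boot the kernel git checked out, try to reproduce
    # the issue, and tell git the result with one of these:
    #   git bisect good
    #   git bisect bad
    # repeat until git names the 'first bad commit', then clean up:
    git bisect reset

Note: a bisection needs a bit of know-how, which not everyone has, and quite a bit of effort, which not everyone is willing to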
invest Nevertheless it s highly recommended performing a bisection yourself If you really can t or don t want to go down that route at least find out which mainline kernel introduced the regression If something for example breaks when switching from 5 5 15 to 5 8 4 then try at least all the mainline releases in that area 5 6 5 7 and 5 8 to check when it first showed up Unless you re trying to find a regression in a stable or longterm kernel avoid testing versions which number has three sections 5 6 12 5 7 8 as that makes the outcome hard to interpret which might render your testing useless Once you found the major version which introduced the regression feel free to move on in the reporting process But keep in mind it depends on the issue at hand if the developers will be able to help without knowing the culprit Sometimes they might recognize from the report want went wrong and can fix it other times they will be unable to help unless you perform a bisection When dealing with regressions make sure the issue you face is really caused by the kernel and not by something else as outlined above already In the whole process keep in mind an issue only qualifies as regression if the older and the newer kernel got built with a similar configuration This can be achieved by using make olddefconfig as explained in more detail by Documentation admin guide reporting regressions rst that document also provides a good deal of other information about regressions you might want to be aware of Write and send the report Start to compile the report by writing a detailed description about the issue Always mention a few things the latest kernel version you installed for reproducing the Linux Distribution used and your notes on how to reproduce the issue Ideally make the kernel s build configuration config and the output from dmesg available somewhere on the net and link to it Include or upload all other information that might be relevant like the output screenshot of an Oops or the output from lspci Once you wrote this main part insert a normal length paragraph on top of it outlining the issue and the impact quickly On top of this add one sentence that briefly describes the problem and gets people to read on Now give the thing a descriptive title or subject that yet again is shorter Then you re ready to send or file the report like the MAINTAINERS file told you unless you are dealing with one of those issues of high priority they need special care which is explained in Special handling for high priority issues below Now that you have prepared everything it s time to write your report How to do that is partly explained by the three documents linked to in the preface above That s why this text will only mention a few of the essentials as well as things specific to the Linux kernel There is one thing that fits both categories the most crucial parts of your report are the title subject the first sentence and the first paragraph Developers often get quite a lot of mail They thus often just take a few seconds to skim a mail before deciding to move on or look closer Thus the better the top section of your report the higher are the chances that someone will look into it and help you And that is why you should ignore them for now and write the detailed report first Things each report should mention Describe in detail how your issue happens with the fresh vanilla kernel you installed Try to include the step by step instructions you wrote and optimized earlier that outline how you and ideally others can reproduce the issue 
(in those rare cases where that's impossible, try to describe what you did to trigger it).

Also include all the relevant information others might need to understand the issue and its environment. What's actually needed depends a lot on the issue, but there are some things you should always include:

 * the output from 'cat /proc/version', which contains the Linux kernel version number and the compiler it was built with
 * the Linux distribution the machine is running ('hostnamectl | grep "Operating System"')
 * the architecture of the CPU and the operating system ('uname -mi')
 * if you are dealing with a regression and performed a bisection, mention the subject and the commit-id of the change that is causing it

In a lot of cases it's also wise to make two more things available to those that read your report:

 * the configuration used for building your Linux kernel (the '.config' file)
 * the kernel's messages that you get from 'dmesg', written to a file. Make sure that it starts with a line like 'Linux version 5.8.1-foobar (user@example.com) (gcc (GCC) 10.2.1, GNU ld version 2.34) #1 SMP Mon Aug 3 14:54:37 UTC 2020'. If it's missing, then important messages from the first boot phase already got discarded. In this case instead consider using 'journalctl -b 0 -k'; alternatively, you can also reboot, reproduce the issue, and call 'dmesg' right afterwards.

These two files are big, that's why it's a bad idea to put them directly into your report. If you are filing the issue in a bug tracker, then attach them to the ticket. If you report the issue by mail, do not attach them, as that makes the mail too large; instead do one of these things:

 * Upload the files somewhere public (your website, a public file-paste service, a ticket created just for this purpose on bugzilla.kernel.org (https://bugzilla.kernel.org/), ...) and include a link to them in your report. Ideally use something where the files stay available for years, as they could be useful to someone many years from now; this for example can happen if five or ten years from now a developer works on some code that was changed just to fix your issue.
 * Put the files aside and mention you will send them later in individual replies to your own mail. Just remember to actually do that once the report went out.

Things that might be wise to provide

Depending on the issue you might need to add more background data. Here are a few suggestions what often is good to provide:

 * If you are dealing with a warning, an OOPS, or a panic from the kernel, include it. If you can't copy'n'paste it, try to capture a netconsole trace or at least take a picture of the screen.
 * If the issue might be related to your computer hardware, mention what kind of system you use. If you for example have problems with your graphics card, mention its manufacturer, the card's model, and what chip it uses. If it's a laptop, mention its name, but try to make sure it's meaningful. 'Dell XPS 13' for example is not, because it might be the one from 2012; that one looks not that different from the one sold today, but apart from that the two have nothing in common. Hence, in such cases add the exact model number, which for example are '9380' or '7390' for XPS 13 models introduced during 2019. Names like 'Lenovo Thinkpad T590' are also somewhat ambiguous: there are variants of this laptop with and without a dedicated graphics chip, so try to find the exact model name or specify the main components.
 * Mention the relevant software in use. If you have problems with loading modules, you want to mention the versions of kmod, systemd, and udev in use. If one of the DRM drivers misbehaves, you want to state the versions of libdrm and Mesa; also
specify your Wayland compositor or the X Server and its driver If you have a filesystem issue mention the version of corresponding filesystem utilities e2fsprogs btrfs progs xfsprogs Gather additional information from the kernel that might be of interest The output from lspci nn will for example help others to identify what hardware you use If you have a problem with hardware you even might want to make the output from sudo lspci vvv available as that provides insights how the components were configured For some issues it might be good to include the contents of files like proc cpuinfo proc ioports proc iomem proc modules or proc scsi scsi Some subsystem also offer tools to collect relevant information One such tool is alsa info sh which the audio sound subsystem developers provide https www alsa project org wiki AlsaInfo Those examples should give your some ideas of what data might be wise to attach but you have to think yourself what will be helpful for others to know Don t worry too much about forgetting something as developers will ask for additional details they need But making everything important available from the start increases the chance someone will take a closer look The important part the head of your report Now that you have the detailed part of the report prepared let s get to the most important section the first few sentences Thus go to the top add something like The detailed description before the part you just wrote and insert two newlines at the top Now write one normal length paragraph that describes the issue roughly Leave out all boring details and focus on the crucial parts readers need to know to understand what this is all about if you think this bug affects a lot of users mention this to get people interested Once you did that insert two more lines at the top and write a one sentence summary that explains quickly what the report is about After that you have to get even more abstract and write an even shorter subject title for the report Now that you have written this part take some time to optimize it as it is the most important parts of your report a lot of people will only read this before they decide if reading the rest is time well spent Now send or file the report like the ref MAINTAINERS maintainers file told you unless it s one of those issues of high priority outlined earlier in that case please read the next subsection first before sending the report on its way Special handling for high priority issues Reports for high priority issues need special handling Severe issues make sure the subject or ticket title as well as the first paragraph makes the severeness obvious Regressions make the report s subject start with REGRESSION In case you performed a successful bisection use the title of the change that introduced the regression as the second part of your subject Make the report also mention the commit id of the culprit In case of an unsuccessful bisection make your report mention the latest tested version that s working fine say 5 7 and the oldest where the issue occurs say 5 8 rc1 When sending the report by mail CC the Linux regressions mailing list regressions lists linux dev In case the report needs to be filed to some web tracker proceed to do so Once filed forward the report by mail to the regressions list CC the maintainer and the mailing list for the subsystem in question Make sure to inline the forwarded report hence do not attach it Also add a short note at the top where you mention the URL to the ticket When mailing or forwarding the report in 
case of a successful bisection add the author of the culprit to the recipients also CC everyone in the signed off by chain which you find at the end of its commit message Security issues for these issues your will have to evaluate if a short term risk to other users would arise if details were publicly disclosed If that s not the case simply proceed with reporting the issue as described For issues that bear such a risk you will need to adjust the reporting process slightly If the MAINTAINERS file instructed you to report the issue by mail do not CC any public mailing lists If you were supposed to file the issue in a bug tracker make sure to mark the ticket as private or security issue If the bug tracker does not offer a way to keep reports private forget about it and send your report as a private mail to the maintainers instead In both cases make sure to also mail your report to the addresses the MAINTAINERS file lists in the section security contact Ideally directly CC them when sending the report by mail If you filed it in a bug tracker forward the report s text to these addresses but on top of it put a small note where you mention that you filed it with a link to the ticket See Documentation process security bugs rst for more information Duties after the report went out Wait for reactions and keep the thing rolling until you can accept the outcome in one way or the other Thus react publicly and in a timely manner to any inquiries Test proposed fixes Do proactive testing retest with at least every first release candidate RC of a new mainline version and report your results Send friendly reminders if things stall And try to help yourself if you don t get any help or if it s unsatisfying If your report was good and you are really lucky then one of the developers might immediately spot what s causing the issue they then might write a patch to fix it test it and send it straight for integration in mainline while tagging it for later backport to stable and longterm kernels that need it Then all you need to do is reply with a Thank you very much and switch to a version with the fix once it gets released But this ideal scenario rarely happens That s why the job is only starting once you got the report out What you ll have to do depends on the situations but often it will be the things listed below But before digging into the details here are a few important things you need to keep in mind for this part of the process General advice for further interactions Always reply in public When you filed the issue in a bug tracker always reply there and do not contact any of the developers privately about it For mailed reports always use the Reply all function when replying to any mails you receive That includes mails with any additional data you might want to add to your report go to your mail applications Sent folder and use reply all on your mail with the report This approach will make sure the public mailing list s and everyone else that gets involved over time stays in the loop it also keeps the mail thread intact which among others is really important for mailing lists to group all related mails together There are just two situations where a comment in a bug tracker or a Reply all is unsuitable Someone tells you to send something privately You were told to send something but noticed it contains sensitive information that needs to be kept private In that case it s okay to send it in private to the developer that asked for it But note in the ticket or a mail that you did that so everyone else knows 
you honored the request Do research before asking for clarifications or help In this part of the process someone might tell you to do something that requires a skill you might not have mastered yet For example you might be asked to use some test tools you never have heard of yet or you might be asked to apply a patch to the Linux kernel sources to test if it helps In some cases it will be fine sending a reply asking for instructions how to do that But before going that route try to find the answer own your own by searching the internet alternatively consider asking in other places for advice For example ask a friend or post about it to a chatroom or forum you normally hang out Be patient If you are really lucky you might get a reply to your report within a few hours But most of the time it will take longer as maintainers are scattered around the globe and thus might be in a different time zone one where they already enjoy their night away from keyboard In general kernel developers will take one to five business days to respond to reports Sometimes it will take longer as they might be busy with the merge windows other work visiting developer conferences or simply enjoying a long summer holiday The issues of high priority see above for an explanation are an exception here maintainers should address them as soon as possible that s why you should wait a week at maximum or just two days if it s something urgent before sending a friendly reminder Sometimes the maintainer might not be responding in a timely manner other times there might be disagreements for example if an issue qualifies as regression or not In such cases raise your concerns on the mailing list and ask others for public or private replies how to move on If that fails it might be appropriate to get a higher authority involved In case of a WiFi driver that would be the wireless maintainers if there are no higher level maintainers or all else fails it might be one of those rare situations where it s okay to get Linus Torvalds involved Proactive testing Every time the first pre release the rc1 of a new mainline kernel version gets released go and check if the issue is fixed there or if anything of importance changed Mention the outcome in the ticket or in a mail you sent as reply to your report make sure it has all those in the CC that up to that point participated in the discussion This will show your commitment and that you are willing to help It also tells developers if the issue persists and makes sure they do not forget about it A few other occasional retests for example with rc3 rc5 and the final are also a good idea but only report your results if something relevant changed or if you are writing something anyway With all these general things off the table let s get into the details of how to help to get issues resolved once they were reported Inquires and testing request Here are your duties in case you got replies to your report Check who you deal with Most of the time it will be the maintainer or a developer of the particular code area that will respond to your report But as issues are normally reported in public it could be anyone that s replying including people that want to help but in the end might guide you totally off track with their questions or requests That rarely happens but it s one of many reasons why it s wise to quickly run an internet search to see who you re interacting with By doing this you also get aware if your report was heard by the right people as a reminder to the maintainer see below might be in 
order later if discussion fades out without leading to a satisfying solution for the issue Inquiries for data Often you will be asked to test something or provide additional details Try to provide the requested information soon as you have the attention of someone that might help and risk losing it the longer you wait that outcome is even likely if you do not provide the information within a few business days Requests for testing When you are asked to test a diagnostic patch or a possible fix try to test it in timely manner too But do it properly and make sure to not rush it mixing things up can happen easily and can lead to a lot of confusion for everyone involved A common mistake for example is thinking a proposed patch with a fix was applied but in fact wasn t Things like that happen even to experienced testers occasionally but they most of the time will notice when the kernel with the fix behaves just as one without it What to do when nothing of substance happens Some reports will not get any reaction from the responsible Linux kernel developers or a discussion around the issue evolved but faded out with nothing of substance coming out of it In these cases wait two better three weeks before sending a friendly reminder maybe the maintainer was just away from keyboard for a while when your report arrived or had something more important to take care of When writing the reminder kindly ask if anything else from your side is needed to get the ball running somehow If the report got out by mail do that in the first lines of a mail that is a reply to your initial mail see above which includes a full quote of the original report below that s on of those few situations where such a TOFU Text Over Fullquote Under is the right approach as then all the recipients will have the details at hand immediately in the proper order After the reminder wait three more weeks for replies If you still don t get a proper reaction you first should reconsider your approach Did you maybe try to reach out to the wrong people Was the report maybe offensive or so confusing that people decided to completely stay away from it The best way to rule out such factors show the report to one or two people familiar with FLOSS issue reporting and ask for their opinion Also ask them for their advice how to move forward That might mean prepare a better report and make those people review it before you send it out Such an approach is totally fine just mention that this is the second and improved report on the issue and include a link to the first report If the report was proper you can send a second reminder in it ask for advice why the report did not get any replies A good moment for this second reminder mail is shortly after the first pre release the rc1 of a new Linux kernel version got published as you should retest and provide a status update at that point anyway see above If the second reminder again results in no reaction within a week try to contact a higher level maintainer asking for advice even busy maintainers by then should at least have sent some kind of acknowledgment Remember to prepare yourself for a disappointment maintainers ideally should react somehow to every issue report but they are only obliged to fix those issues of high priority outlined earlier So don t be too devastating if you get a reply along the lines of thanks for the report I have more important issues to deal with currently and won t have time to look into this for the foreseeable future It s also possible that after some discussion in the bug 
tracker or on a list nothing happens anymore and reminders don t help to motivate anyone to work out a fix Such situations can be devastating but is within the cards when it comes to Linux kernel development This and several other reasons for not getting help are explained in Why some issues won t get any reaction or remain unfixed after being reported near the end of this document Don t get devastated if you don t find any help or if the issue in the end does not get solved the Linux kernel is FLOSS and thus you can still help yourself You for example could try to find others that are affected and team up with them to get the issue resolved Such a team could prepare a fresh report together that mentions how many you are and why this is something that in your option should get fixed Maybe together you can also narrow down the root cause or the change that introduced a regression which often makes developing a fix easier And with a bit of luck there might be someone in the team that knows a bit about programming and might be able to write a fix Reference for Reporting regressions within a stable and longterm kernel line This subsection provides details for the steps you need to perform if you face a regression within a stable and longterm kernel line Make sure the particular version line still gets support Check if the kernel developers still maintain the Linux kernel version line you care about go to the front page of kernel org and make sure it mentions the latest release of the particular version line without an EOL tag Most kernel version lines only get supported for about three months as maintaining them longer is quite a lot of work Hence only one per year is chosen and gets supported for at least two years often six That s why you need to check if the kernel developers still support the version line you care for Note if kernel org lists two stable version lines on the front page you should consider switching to the newer one and forget about the older one support for it is likely to be abandoned soon Then it will get a end of life EOL stamp Version lines that reached that point still get mentioned on the kernel org front page for a week or two but are unsuitable for testing and reporting Search stable mailing list Check the archives of the Linux stable mailing list for existing reports Maybe the issue you face is already known and was fixed or is about to Hence search the archives of the Linux stable mailing list https lore kernel org stable for reports about an issue like yours If you find any matches consider joining the discussion unless the fix is already finished and scheduled to get applied soon Reproduce issue with the newest release Install the latest release from the particular version line as a vanilla kernel Ensure this kernel is not tainted and still shows the problem as the issue might have already been fixed there If you first noticed the problem with a vendor kernel check a vanilla build of the last version known to work performs fine as well Before investing any more time in this process you want to check if the issue was already fixed in the latest release of version line you re interested in This kernel needs to be vanilla and shouldn t be tainted before the issue happens as detailed outlined already above in the section Install a fresh kernel for testing Did you first notice the regression with a vendor kernel Then changes the vendor applied might be interfering You need to rule that out by performing a recheck Say something broke when you updated from 5 10 4 
vendor 42 to 5 10 5 vendor 43 Then after testing the latest 5 10 release as outlined in the previous paragraph check if a vanilla build of Linux 5 10 4 works fine as well If things are broken there the issue does not qualify as upstream regression and you need switch back to the main step by step guide to report the issue Report the regression Send a short problem report to the Linux stable mailing list stable vger kernel org and CC the Linux regressions mailing list regressions lists linux dev if you suspect the cause in a particular subsystem CC its maintainer and its mailing list Roughly describe the issue and ideally explain how to reproduce it Mention the first version that shows the problem and the last version that s working fine Then wait for further instructions When reporting a regression that happens within a stable or longterm kernel line say when updating from 5 10 4 to 5 10 5 a brief report is enough for the start to get the issue reported quickly Hence a rough description to the stable and regressions mailing list is all it takes but in case you suspect the cause in a particular subsystem CC its maintainers and its mailing list as well because that will speed things up And note it helps developers a great deal if you can specify the exact version that introduced the problem Hence if possible within a reasonable time frame try to find that version using vanilla kernels Lets assume something broke when your distributor released a update from Linux kernel 5 10 5 to 5 10 8 Then as instructed above go and check the latest kernel from that version line say 5 10 9 If it shows the problem try a vanilla 5 10 5 to ensure that no patches the distributor applied interfere If the issue doesn t manifest itself there try 5 10 7 and then depending on the outcome 5 10 8 or 5 10 6 to find the first version where things broke Mention it in the report and state that 5 10 9 is still broken What the previous paragraph outlines is basically a rough manual bisection Once your report is out your might get asked to do a proper one as it allows to pinpoint the exact change that causes the issue which then can easily get reverted to fix the issue quickly Hence consider to do a proper bisection right away if time permits See the section Special care for regressions and the document Documentation admin guide bug bisect rst for details how to perform one In case of a successful bisection add the author of the culprit to the recipients also CC everyone in the signed off by chain which you find at the end of its commit message Reference for Reporting issues only occurring in older kernel version lines This section provides details for the steps you need to take if you could not reproduce your issue with a mainline kernel but want to see it fixed in older version lines aka stable and longterm kernels Some fixes are too complex Prepare yourself for the possibility that going through the next few steps might not get the issue solved in older releases the fix might be too big or risky to get backported there Even small and seemingly obvious code changes sometimes introduce new and totally unexpected problems The maintainers of the stable and longterm kernels are very aware of that and thus only apply changes to these kernels that are within rules outlined in Documentation process stable kernel rules rst Complex or risky changes for example do not qualify and thus only get applied to mainline Other fixes are easy to get backported to the newest stable and longterm kernels but too risky to integrate into older 
.. SPDX-License-Identifier: GPL-2.0
======================
The SGI XFS Filesystem
======================
XFS is a high performance journaling filesystem which originated
on the SGI IRIX platform. It is completely multi-threaded, can
support large files and large filesystems, extended attributes,
variable block sizes, is extent based, and makes extensive use of
Btrees (directories, extents, free space) to aid both performance
and scalability.
Refer to the documentation at https://xfs.wiki.kernel.org/
for further details. This implementation is on-disk compatible
with the IRIX version of XFS.
Mount Options
=============
When mounting an XFS filesystem, the following options are accepted.
allocsize=size
Sets the buffered I/O end-of-file preallocation size when
doing delayed allocation writeout (default size is 64KiB).
Valid values for this option are page size (typically 4KiB)
through to 1GiB, inclusive, in power-of-2 increments.
The default behaviour is for dynamic end-of-file
preallocation size, which uses a set of heuristics to
optimise the preallocation size based on the current
allocation patterns within the file and the access patterns
to the file. Specifying a fixed ``allocsize`` value turns off
the dynamic behaviour.
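As a minimal illustration (the device name and mount point are only
examples, and the size suffix follows the usual mount option
conventions), a fixed preallocation size can be chosen at mount time::

    # use a fixed 64MiB end-of-file preallocation size instead of the
    # dynamic default heuristics
    mount -o allocsize=64m /dev/sdb1 /mnt/scratch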
attr2 or noattr2
The options enable/disable an "opportunistic" improvement to
be made in the way inline extended attributes are stored
on-disk. When the new form is used for the first time when
``attr2`` is selected (either when setting or removing extended
attributes) the on-disk superblock feature bit field will be
updated to reflect this format being in use.
The default behaviour is determined by the on-disk feature
bit indicating that ``attr2`` behaviour is active. If either
mount option is set, then that becomes the new default used
by the filesystem.
CRC enabled filesystems always use the ``attr2`` format, and so
will reject the ``noattr2`` mount option if it is set.
discard or nodiscard (default)
Enable/disable the issuing of commands to let the block
device reclaim space freed by the filesystem. This is
useful for SSD devices, thinly provisioned LUNs and virtual
machine images, but may have a performance impact.
Note: It is currently recommended that you use the ``fstrim``
application to ``discard`` unused blocks rather than the ``discard``
mount option because the performance impact of this option
is quite severe.
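For example, space can instead be trimmed periodically from userspace
(the mount point below is illustrative)::

    # trim all unused blocks on the filesystem and report how much was
    # discarded; see fstrim(8)
    fstrim -v /mnt/data

Running this from a cron job or a systemd timer (many distributions ship
an ``fstrim.timer`` unit) usually gives the space reclamation benefit
without the per-I/O latency cost of the ``discard`` mount option.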
grpid/bsdgroups or nogrpid/sysvgroups (default)
These options define what group ID a newly created file
gets. When ``grpid`` is set, it takes the group ID of the
directory in which it is created; otherwise it takes the
``fsgid`` of the current process, unless the directory has the
``setgid`` bit set, in which case it takes the ``gid`` from the
parent directory, and also gets the ``setgid`` bit set if it is
a directory itself.
filestreams
Make the data allocator use the filestreams allocation mode
across the entire filesystem rather than just on directories
configured to use it.
ikeep or noikeep (default)
When ``ikeep`` is specified, XFS does not delete empty inode
clusters and keeps them around on disk. When ``noikeep`` is
specified, empty inode clusters are returned to the free
space pool.
inode32 or inode64 (default)
When ``inode32`` is specified, it indicates that XFS limits
inode creation to locations which will not result in inode
numbers with more than 32 bits of significance.
When ``inode64`` is specified, it indicates that XFS is allowed
to create inodes at any location in the filesystem,
including those which will result in inode numbers occupying
more than 32 bits of significance.
``inode32`` is provided for backwards compatibility with older
systems and applications, since 64-bit inode numbers might
cause problems for some applications that cannot handle
large inode numbers. If applications are in use which do
not handle inode numbers bigger than 32 bits, the ``inode32``
option should be specified.
largeio or nolargeio (default)
If ``nolargeio`` is specified, the optimal I/O reported in
``st_blksize`` by **stat(2)** will be as small as possible to allow
user applications to avoid inefficient read/modify/write
I/O. This is typically the page size of the machine, as
this is the granularity of the page cache.
If ``largeio`` is specified, a filesystem that was created with a
``swidth`` specified will return the ``swidth`` value (in bytes)
in ``st_blksize``. If the filesystem does not have a ``swidth``
specified but does specify an ``allocsize`` then ``allocsize``
(in bytes) will be returned instead. Otherwise the behaviour
is the same as if ``nolargeio`` was specified.
logbufs=value
Set the number of in-memory log buffers. Valid numbers
range from 2-8 inclusive.
The default value is 8 buffers.
If the memory cost of 8 log buffers is too high on small
systems, then it may be reduced at some cost to performance
on metadata intensive workloads. The ``logbsize`` option below
controls the size of each buffer and so is also relevant to
this case.
logbsize=value
Set the size of each in-memory log buffer. The size may be
specified in bytes, or in kilobytes with a "k" suffix.
Valid sizes for version 1 and version 2 logs are 16384 (16k)
and 32768 (32k). Valid sizes for version 2 logs also
include 65536 (64k), 131072 (128k) and 262144 (256k). The
logbsize must be an integer multiple of the log
stripe unit configured at **mkfs(8)** time.
The default value for version 1 logs is 32768, while the
default value for version 2 logs is MAX(32768, log_sunit).
logdev=device and rtdev=device
Use an external log (metadata journal) and/or real-time device.
An XFS filesystem has up to three parts: a data section, a log
section, and a real-time section. The real-time section is
optional, and the log section can be separate from the data
section or contained within it.
noalign
Data allocations will not be aligned at stripe unit
boundaries. This is only relevant to filesystems created
with non-zero data alignment parameters (``sunit``, ``swidth``) by
**mkfs(8)**.
norecovery
The filesystem will be mounted without running log recovery.
If the filesystem was not cleanly unmounted, it is likely to
be inconsistent when mounted in ``norecovery`` mode.
Some files or directories may not be accessible because of this.
Filesystems mounted ``norecovery`` must be mounted read-only or
the mount will fail.
nouuid
Don't check for double mounted file systems using the file
system ``uuid``. This is useful to mount LVM snapshot volumes,
and often used in combination with ``norecovery`` for mounting
read-only snapshots.
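A common combination is mounting an LVM snapshot of a filesystem that is
still mounted elsewhere (device and mount point names are illustrative)::

    # mount a read-only snapshot without replaying the log and without
    # tripping over the duplicate filesystem UUID
    mount -o ro,norecovery,nouuid /dev/vg0/data-snap /mnt/snapshot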
noquota
Forcibly turns off all quota accounting and enforcement
within the filesystem.
uquota/usrquota/uqnoenforce/quota
User disk quota accounting enabled, and limits (optionally)
enforced. Refer to **xfs_quota(8)** for further details.
gquota/grpquota/gqnoenforce
Group disk quota accounting enabled and limits (optionally)
enforced. Refer to **xfs_quota(8)** for further details.
pquota/prjquota/pqnoenforce
Project disk quota accounting enabled and limits (optionally)
enforced. Refer to **xfs_quota(8)** for further details.
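As a sketch, enabling user and group quota accounting and then checking
usage might look like this (the device, mount point and report contents
depend on your system)::

    # mount with user and group quota accounting and enforcement
    mount -o uquota,gquota /dev/sdb1 /home

    # print a human-readable usage and limits report; see xfs_quota(8)
    xfs_quota -x -c 'report -h' /home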
sunit=value and swidth=value
Used to specify the stripe unit and width for a RAID device
or a stripe volume. "value" must be specified in 512-byte
block units. These options are only relevant to filesystems
that were created with non-zero data alignment parameters.
The ``sunit`` and ``swidth`` parameters specified must be compatible
with the existing filesystem alignment characteristics. In
general, that means the only valid changes to ``sunit`` are
increasing it by a power-of-2 multiple. Valid ``swidth`` values
are any integer multiple of a valid ``sunit`` value.
Typically the only time these mount options are necessary is
after an underlying RAID device has had its geometry
modified, such as adding a new disk to a RAID5 lun and
reshaping it.
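Because the values are expressed in 512-byte units, they have to be
derived from the RAID geometry by hand. For example, for a hypothetical
RAID5 array with a 64KiB chunk size and four data disks::

    # sunit  = 64KiB chunk size / 512 bytes = 128
    # swidth = sunit * 4 data disks         = 512
    mount -o sunit=128,swidth=512 /dev/md0 /mnt/raid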
swalloc
Data allocations will be rounded up to stripe width boundaries
when the current end of file is being extended and the file
size is larger than the stripe width size.
wsync
When specified, all filesystem namespace operations are
executed synchronously. This ensures that when the namespace
operation (create, unlink, etc) completes, the change to the
namespace is on stable storage. This is useful in HA setups
where failover must not result in clients seeing
inconsistent namespace presentation during or after a
failover event.
Deprecation of V4 Format
========================
The V4 filesystem format lacks certain features that are supported by
the V5 format, such as metadata checksumming, strengthened metadata
verification, and the ability to store timestamps past the year 2038.
Because of this, the V4 format is deprecated. All users should upgrade
by backing up their files, reformatting, and restoring from the backup.
Administrators and users can detect a V4 filesystem by running xfs_info
against a filesystem mountpoint and checking for a string containing
"crc=". If no such string is found, please upgrade xfsprogs to the
latest version and try again.
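For example (the mount point is illustrative and the exact output layout
may differ between xfsprogs versions)::

    # a V5 filesystem reports crc=1, a V4 filesystem reports crc=0
    xfs_info /mnt/data | grep -o 'crc=.'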
The deprecation will take place in two parts. Support for mounting V4
filesystems can now be disabled at kernel build time via Kconfig option.
The option will default to yes until September 2025, at which time it
will be changed to default to no. In September 2030, support will be
removed from the codebase entirely.
Note: Distributors may choose to withdraw V4 format support earlier than
the dates listed above.
Deprecated Mount Options
========================
============================ ================
Name Removal Schedule
============================ ================
Mounting with V4 filesystem September 2030
Mounting ascii-ci filesystem September 2030
ikeep/noikeep September 2025
attr2/noattr2 September 2025
============================ ================
Removed Mount Options
=====================
=========================== =======
Name Removed
=========================== =======
delaylog/nodelaylog v4.0
ihashsize v4.0
irixsgid v4.0
osyncisdsync/osyncisosync v4.0
barrier v4.19
nobarrier v4.19
=========================== =======
sysctls
=======
The following sysctls are available for the XFS filesystem:
fs.xfs.stats_clear (Min: 0 Default: 0 Max: 1)
Setting this to "1" clears accumulated XFS statistics
in /proc/fs/xfs/stat. It then immediately resets to "0".
fs.xfs.xfssyncd_centisecs (Min: 100 Default: 3000 Max: 720000)
The interval at which the filesystem flushes metadata
out to disk and runs internal cache cleanup routines.
fs.xfs.filestream_centisecs (Min: 1 Default: 3000 Max: 360000)
The interval at which the filesystem ages filestreams cache
references and returns timed-out AGs back to the free stream
pool.
fs.xfs.speculative_prealloc_lifetime
(Units: seconds Min: 1 Default: 300 Max: 86400)
The interval at which the background scanning for inodes
with unused speculative preallocation runs. The scan
removes unused preallocation from clean inodes and releases
the unused space back to the free pool.
fs.xfs.speculative_cow_prealloc_lifetime
This is an alias for speculative_prealloc_lifetime.
fs.xfs.error_level (Min: 0 Default: 3 Max: 11)
A volume knob for error reporting when internal errors occur.
This will generate detailed messages & backtraces for filesystem
shutdowns, for example. Current threshold values are:
XFS_ERRLEVEL_OFF: 0
XFS_ERRLEVEL_LOW: 1
XFS_ERRLEVEL_HIGH: 5
fs.xfs.panic_mask (Min: 0 Default: 0 Max: 511)
Causes certain error conditions to call BUG(). Value is a bitmask;
OR together the tags which represent errors which should cause panics:
XFS_NO_PTAG 0
XFS_PTAG_IFLUSH 0x00000001
XFS_PTAG_LOGRES 0x00000002
XFS_PTAG_AILDELETE 0x00000004
XFS_PTAG_ERROR_REPORT 0x00000008
XFS_PTAG_SHUTDOWN_CORRUPT 0x00000010
XFS_PTAG_SHUTDOWN_IOERROR 0x00000020
XFS_PTAG_SHUTDOWN_LOGERROR 0x00000040
XFS_PTAG_FSBLOCK_ZERO 0x00000080
XFS_PTAG_VERIFIER_ERROR 0x00000100
This option is intended for debugging only.
fs.xfs.irix_symlink_mode (Min: 0 Default: 0 Max: 1)
Controls whether symlinks are created with mode 0777 (default)
or whether their mode is affected by the umask (irix mode).
fs.xfs.irix_sgid_inherit (Min: 0 Default: 0 Max: 1)
Controls files created in SGID directories.
If the group ID of the new file does not match the effective group
ID or one of the supplementary group IDs of the parent dir, the
ISGID bit is cleared if the irix_sgid_inherit compatibility sysctl
is set.
fs.xfs.inherit_sync (Min: 0 Default: 1 Max: 1)
Setting this to "1" will cause the "sync" flag set
by the **xfs_io(8)** chattr command on a directory to be
inherited by files in that directory.
fs.xfs.inherit_nodump (Min: 0 Default: 1 Max: 1)
Setting this to "1" will cause the "nodump" flag set
by the **xfs_io(8)** chattr command on a directory to be
inherited by files in that directory.
fs.xfs.inherit_noatime (Min: 0 Default: 1 Max: 1)
Setting this to "1" will cause the "noatime" flag set
by the **xfs_io(8)** chattr command on a directory to be
inherited by files in that directory.
fs.xfs.inherit_nosymlinks (Min: 0 Default: 1 Max: 1)
Setting this to "1" will cause the "nosymlinks" flag set
by the **xfs_io(8)** chattr command on a directory to be
inherited by files in that directory.
fs.xfs.inherit_nodefrag (Min: 0 Default: 1 Max: 1)
Setting this to "1" will cause the "nodefrag" flag set
by the **xfs_io(8)** chattr command on a directory to be
inherited by files in that directory.
fs.xfs.rotorstep (Min: 1 Default: 1 Max: 256)
In "inode32" allocation mode, this option determines how many
files the allocator attempts to allocate in the same allocation
group before moving to the next allocation group. The intent
is to control the rate at which the allocator moves between
allocation groups when allocating extents for new files.
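These sysctls live under ``/proc/sys/fs/xfs/`` and can be inspected or
changed at runtime with **sysctl(8)**; the values below are purely an
illustration::

    # read the current error reporting level
    sysctl fs.xfs.error_level

    # raise it temporarily to get more detailed shutdown reports
    sysctl -w fs.xfs.error_level=11

    # the equivalent direct write through procfs
    echo 11 > /proc/sys/fs/xfs/error_level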
Deprecated Sysctls
==================
=========================================== ================
Name Removal Schedule
=========================================== ================
fs.xfs.irix_sgid_inherit September 2025
fs.xfs.irix_symlink_mode September 2025
fs.xfs.speculative_cow_prealloc_lifetime September 2025
=========================================== ================
Removed Sysctls
===============
============================= =======
Name Removed
============================= =======
fs.xfs.xfsbufd_centisec v4.0
fs.xfs.age_buffer_centisecs v4.0
============================= =======
Error handling
==============
XFS can act differently according to the type of error found during its
operation. The implementation introduces the following concepts to the error
handler:
- failure speed:
Defines how fast XFS should propagate an error upwards when a specific
error is found during the filesystem operation. It can propagate
immediately, after a defined number of retries, after a set time period,
or simply retry forever.
- error classes:
Specifies the subsystem the error configuration will apply to, such as
metadata IO or memory allocation. Different subsystems will have
different error handlers for which behaviour can be configured.
- error handlers:
Defines the behavior for a specific error.
The filesystem behavior during an error can be set via ``sysfs`` files. Each
error handler works independently - the first condition met by an error handler
for a specific class will cause the error to be propagated rather than reset and
retried.
The action taken by the filesystem when the error is propagated is context
dependent - it may cause a shut down in the case of an unrecoverable error,
it may be reported back to userspace, or it may even be ignored because
there's nothing useful we can do with the error or anyone we can report it to (e.g.
during unmount).
The configuration files are organized into the following hierarchy for each
mounted filesystem:
/sys/fs/xfs/<dev>/error/<class>/<error>/
Where:
<dev>
The short device name of the mounted filesystem. This is the same device
name that shows up in XFS kernel error messages as "XFS(<dev>): ..."
<class>
The subsystem the error configuration belongs to. As of 4.9, the defined
classes are:
- "metadata": applies to metadata buffer write IO
<error>
The individual error handler configurations.
Each filesystem has "global" error configuration options defined in their top
level directory:
/sys/fs/xfs/<dev>/error/
fail_at_unmount (Min: 0 Default: 1 Max: 1)
Defines the filesystem error behavior at unmount time.
If set to a value of 1, XFS will override all other error configurations
during unmount and replace them with "immediate fail" characteristics.
i.e. no retries, no retry timeout. This will always allow unmount to
succeed when there are persistent errors present.
If set to 0, the configured retry behaviour will continue until all
retries and/or timeouts have been exhausted. This will delay unmount
completion when there are persistent errors, and it may prevent the
filesystem from ever unmounting fully in the case of "retry forever"
handler configurations.
Note: there is no guarantee that fail_at_unmount can be set while an
unmount is in progress. It is possible that the ``sysfs`` entries are
removed by the unmounting filesystem before a "retry forever" error
handler configuration causes unmount to hang, and hence the filesystem
must be configured appropriately before unmount begins to prevent
unmount hangs.
Each filesystem has specific error class handlers that define the error
propagation behaviour for specific errors. There is also a "default" error
handler defined, which defines the behaviour for all errors that don't have
specific handlers defined. Where multiple retry constraints are configured for
a single error, the first retry configuration that expires will cause the error
to be propagated. The handler configurations are found in the directory:
/sys/fs/xfs/<dev>/error/<class>/<error>/
max_retries (Min: -1 Default: Varies Max: INTMAX)
Defines the allowed number of retries of a specific error before
the filesystem will propagate the error. The retry count for a given
error context (e.g. a specific metadata buffer) is reset every time
there is a successful completion of the operation.
Setting the value to "-1" will cause XFS to retry forever for this
specific error.
Setting the value to "0" will cause XFS to fail immediately when the
specific error is reported.
Setting the value to "N" (where 0 < N < Max) will make XFS retry the
operation "N" times before propagating the error.
retry_timeout_seconds (Min: -1 Default: Varies Max: 1 day)
Define the amount of time (in seconds) that the filesystem is
allowed to retry its operations when the specific error is
found.
Setting the value to "-1" will allow XFS to retry forever for this
specific error.
Setting the value to "0" will cause XFS to fail immediately when the
specific error is reported.
Setting the value to "N" (where 0 < N < Max) will allow XFS to retry the
operation for up to "N" seconds before propagating the error.
**Note:** The default behaviour for a specific error handler is dependent on both
the class and error context. For example, the default values for
"metadata/ENODEV" are "0" rather than "-1" so that this error handler defaults
to "fail immediately" behaviour. This is done because ENODEV is a fatal,
unrecoverable error no matter how many times the metadata IO is retried.
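Putting the pieces together, tuning these files from a shell might look
like the sketch below. The device name ``sda1`` and the choice of the
"default" metadata handler are purely illustrative; which error
directories exist depends on the kernel version::

    # always fail pending metadata I/O at unmount instead of retrying
    echo 1 > /sys/fs/xfs/sda1/error/fail_at_unmount

    # retry metadata writes hitting an otherwise unhandled error at most
    # five times ...
    echo 5 > /sys/fs/xfs/sda1/error/metadata/default/max_retries

    # ... but give up after 30 seconds even if the retry count is not
    # yet exhausted
    echo 30 > /sys/fs/xfs/sda1/error/metadata/default/retry_timeout_seconds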
Workqueue Concurrency
=====================
XFS uses kernel workqueues to parallelize metadata update processes. This
enables it to take advantage of storage hardware that can service many IO
operations simultaneously. This interface exposes internal implementation
details of XFS, and as such is explicitly not part of any userspace API/ABI
guarantee the kernel may give userspace. These are undocumented features of
the generic workqueue implementation XFS uses for concurrency, and they are
provided here purely for diagnostic and tuning purposes and may change at any
time in the future.
The control knobs for a filesystem's workqueues are organized by task at hand
and the short name of the data device. They all can be found in:
/sys/bus/workqueue/devices/${task}!${device}
================ ===========
Task Description
================ ===========
xfs_iwalk-$pid Inode scans of the entire filesystem. Currently limited to
mount time quotacheck.
xfs-gc Background garbage collection of disk space that has been
speculatively allocated beyond EOF or for staging copy on
write operations.
================ ===========
For example, the knobs for the quotacheck workqueue for /dev/nvme0n1 would be
found in /sys/bus/workqueue/devices/xfs_iwalk-1111!nvme0n1/.
The interesting knobs for XFS workqueues are as follows:
============ ===========
Knob Description
============ ===========
max_active Maximum number of background threads that can be started to
run the work.
cpumask CPUs upon which the threads are allowed to run.
nice Relative priority of scheduling the threads. These are the
same nice levels that can be applied to userspace processes.
============ ===========
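For example, constraining the background garbage collection workqueue of
a filesystem on ``sda1`` might look like this (the device name is
illustrative, and the ``!`` in the path is quoted to keep the shell from
interpreting it)::

    # allow at most four concurrent background workers
    echo 4 > '/sys/bus/workqueue/devices/xfs-gc!sda1/max_active'

    # pin the workers to CPUs 0 and 1 (the mask is given in hex)
    echo 3 > '/sys/bus/workqueue/devices/xfs-gc!sda1/cpumask'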
---
title: Console templates
sort_rank: 3
---
# Console templates
Console templates allow for creation of arbitrary consoles using the [Go
templating language](http://golang.org/pkg/text/template/). These are served
from the Prometheus server.
Console templates are the most powerful way to create templates that can be
easily managed in source control. There is a learning curve though, so users new
to this style of monitoring should try out
[Grafana](/docs/visualization/grafana/) first.
## Getting started
Prometheus comes with an example set of consoles to get you going. These can be
found at `/consoles/index.html.example` on a running Prometheus and will
display Node Exporter consoles if Prometheus is scraping Node Exporters with a
`job="node"` label.
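As a minimal sketch, a Prometheus server unpacked from a release tarball can be
pointed at the shipped consoles like this (the paths and the default listen
address `localhost:9090` may differ in your setup):

```
# serve the example consoles and their template library
./prometheus \
  --config.file=prometheus.yml \
  --web.console.templates=consoles \
  --web.console.libraries=console_libraries

# the rendered console is served by Prometheus itself
curl -s http://localhost:9090/consoles/index.html.example | head
```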
The example consoles have 5 parts:
1. A navigation bar on top
1. A menu on the left
1. Time controls on the bottom
1. The main content in the center, usually graphs
1. A table on the right
The navigation bar is for links to other systems, such as other Prometheis
<sup>[1](/docs/introduction/faq/#what-is-the-plural-of-prometheus)</sup>,
documentation, and whatever else makes sense to you. The menu is for navigation
inside the same Prometheus server, which is very useful to be able to quickly
open a console in another tab to correlate information. Both are configured in
`console_libraries/menu.lib`.
The time controls allow changing of the duration and range of the graphs.
Console URLs can be shared and will show the same graphs for others.
The main content is usually graphs. There is a configurable JavaScript graphing
library provided that will handle requesting data from Prometheus, and rendering
it via [Rickshaw](https://shutterstock.github.io/rickshaw/).
Finally, the table on the right can be used to display statistics in a more
compact form than graphs.
## Example Console
This is a basic console. It shows the number of tasks, how many of them are up,
the average CPU usage, and the average memory usage in the right-hand-side
table. The main content has a queries-per-second graph.
```
{{template "head" .}}

{{template "prom_right_table_head"}}
<tr>
<th>MyJob</th>
<th>{{ template "prom_query_drilldown" (args "sum(up{job='myjob'})") }}
/ {{ template "prom_query_drilldown" (args "count(up{job='myjob'})") }}
</th>
</tr>
<tr>
<td>CPU</td>
<td>{{ template "prom_query_drilldown" (args
"avg by(job)(rate(process_cpu_seconds_total{job='myjob'}[5m]))"
"s/s" "humanizeNoSmallPrefix") }}
</td>
</tr>
<tr>
<td>Memory</td>
<td>{{ template "prom_query_drilldown" (args
"avg by(job)(process_resident_memory_bytes{job='myjob'})"
"B" "humanize1024") }}
</td>
</tr>
{{template "prom_right_table_tail"}}

<h1>MyJob</h1>
<h3>Queries</h3>
<div id="queryGraph"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#queryGraph"),
expr: "sum(rate(http_query_count{job='myjob'}[5m]))",
name: "Queries",
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "/s",
yTitle: "Queries"
})
</script>
{{template "tail"}}
```
The `prom_right_table_head` and `prom_right_table_tail` templates contain the
right-hand-side table. This is optional.
`prom_query_drilldown` is a template that will evaluate the expression passed to it, format it,
and link to the expression in the [expression browser](/docs/visualization/browser/). The first
argument is the expression. The second argument is the unit to use. The third
argument is how to format the output. Only the first argument is required.
Valid output formats for the third argument to `prom_query_drilldown`:
* Not specified: Default Go display output.
* `humanize`: Display the result using [metric prefixes](http://en.wikipedia.org/wiki/Metric_prefix).
* `humanizeNoSmallPrefix`: For absolute values greater than 1, display the
result using [metric prefixes](http://en.wikipedia.org/wiki/Metric_prefix). For
absolute values less than 1, display 3 significant digits. This is useful
to avoid units such as milliqueries per second that can be produced by
`humanize`.
* `humanize1024`: Display the humanized result using a base of 1024 rather than 1000.
This is usually used with `B` as the second argument to produce units such as `KiB` and `MiB`.
* `printf.3g`: Display 3 significant digits.
Custom formats can be defined. See
[prom.lib](https://github.com/prometheus/prometheus/blob/main/console_libraries/prom.lib) for examples.
## Graph Library
The graph library is invoked as:
```
<div id="queryGraph"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#queryGraph"),
expr: "sum(rate(http_query_count{job='myjob'}[5m]))"
})
</script>
```
The `head` template loads the required Javascript and CSS.
Parameters to the graph library:
| Name | Description
| ------------- | -------------
| expr | Required. Expression to graph. Can be a list.
| node | Required. DOM node to render into.
| duration | Optional. Duration of the graph. Defaults to 1 hour.
| endTime | Optional. Unixtime the graph ends at. Defaults to now.
| width | Optional. Width of the graph, excluding titles. Defaults to auto-detection.
| height | Optional. Height of the graph, excluding titles and legends. Defaults to 200 pixels.
| min | Optional. Minimum y-axis value. Defaults to lowest data value.
| max | Optional. Maximum y-axis value. Defaults to highest data value.
| renderer | Optional. Type of graph. Options are `line` and `area` (stacked graph). Defaults to `line`.
| name | Optional. Title of plots in legend and hover detail. If passed a string, `[[ label ]]` will be substituted with the label value. If passed a function, it will be passed a map of labels and should return the name as a string. Can be a list.
| xTitle | Optional. Title of the x-axis. Defaults to `Time`.
| yUnits | Optional. Units of the y-axis. Defaults to empty.
| yTitle | Optional. Title of the y-axis. Defaults to empty.
| yAxisFormatter | Optional. Number formatter for the y-axis. Defaults to `PromConsole.NumberFormatter.humanize`.
| yHoverFormatter | Optional. Number formatter for the hover detail. Defaults to `PromConsole.NumberFormatter.humanizeExact`.
| colorScheme | Optional. Color scheme to be used by the plots. Can be either a list of hex color codes or one of the [color scheme names](https://github.com/shutterstock/rickshaw/blob/master/src/js/Rickshaw.Fixtures.Color.js) supported by Rickshaw. Defaults to `'colorwheel'`.
If both `expr` and `name` are lists, they must be of the same length. The name
will be applied to the plots for the corresponding expression.
Valid options for the `yAxisFormatter` and `yHoverFormatter`:
* `PromConsole.NumberFormatter.humanize`: Format using [metric prefixes](http://en.wikipedia.org/wiki/Metric_prefix).
* `PromConsole.NumberFormatter.humanizeNoSmallPrefix`: For absolute values
greater than 1, format using [metric prefixes](http://en.wikipedia.org/wiki/Metric_prefix).
For absolute values less than 1, format with 3 significant digits. This is
useful to avoid units such as milliqueries per second that can be produced by
`PromConsole.NumberFormatter.humanize`.
* `PromConsole.NumberFormatter.humanize1024`: Format the humanized result using a base of 1024 rather than 1000.
---
title: Exporters and integrations
sort_rank: 4
---
# Exporters and integrations
There are a number of libraries and servers which help in exporting existing
metrics from third-party systems as Prometheus metrics. This is useful for
cases where it is not feasible to instrument a given system with Prometheus
metrics directly (for example, HAProxy or Linux system stats).
## Third-party exporters
Some of these exporters are maintained as part of the official [Prometheus GitHub organization](https://github.com/prometheus);
those are marked as *official*. Others are externally contributed and maintained.
We encourage the creation of more exporters but cannot vet all of them for
[best practices](/docs/instrumenting/writing_exporters/).
Commonly, those exporters are hosted outside of the Prometheus GitHub
organization.
The [exporter default
port](https://github.com/prometheus/prometheus/wiki/Default-port-allocations)
wiki page has become another catalog of exporters, and may include exporters
not listed here due to overlapping functionality or still being in development.
The [JMX exporter](https://github.com/prometheus/jmx_exporter) can export from a
wide variety of JVM-based applications, for example [Kafka](http://kafka.apache.org/) and
[Cassandra](http://cassandra.apache.org/).
### Databases
* [Aerospike exporter](https://github.com/aerospike/aerospike-prometheus-exporter)
* [AWS RDS exporter](https://github.com/qonto/prometheus-rds-exporter)
* [ClickHouse exporter](https://github.com/f1yegor/clickhouse_exporter)
* [Consul exporter](https://github.com/prometheus/consul_exporter) (**official**)
* [Couchbase exporter](https://github.com/couchbase/couchbase-exporter)
* [CouchDB exporter](https://github.com/gesellix/couchdb-exporter)
* [Druid Exporter](https://github.com/opstree/druid-exporter)
* [Elasticsearch exporter](https://github.com/prometheus-community/elasticsearch_exporter)
* [EventStore exporter](https://github.com/marcinbudny/eventstore_exporter)
* [IoTDB exporter](https://github.com/fagnercarvalho/prometheus-iotdb-exporter)
* [KDB+ exporter](https://github.com/KxSystems/prometheus-kdb-exporter)
* [Memcached exporter](https://github.com/prometheus/memcached_exporter) (**official**)
* [MongoDB exporter](https://github.com/percona/mongodb_exporter)
* [MongoDB query exporter](https://github.com/raffis/mongodb-query-exporter)
* [MongoDB Node.js Driver exporter](https://github.com/christiangalsterer/mongodb-driver-prometheus-exporter)
* [MSSQL server exporter](https://github.com/awaragi/prometheus-mssql-exporter)
* [MySQL router exporter](https://github.com/rluisr/mysqlrouter_exporter)
* [MySQL server exporter](https://github.com/prometheus/mysqld_exporter) (**official**)
* [OpenTSDB Exporter](https://github.com/cloudflare/opentsdb_exporter)
* [Oracle DB Exporter](https://github.com/iamseth/oracledb_exporter)
* [PgBouncer exporter](https://github.com/prometheus-community/pgbouncer_exporter)
* [PostgreSQL exporter](https://github.com/prometheus-community/postgres_exporter)
* [Presto exporter](https://github.com/yahoojapan/presto_exporter)
* [ProxySQL exporter](https://github.com/percona/proxysql_exporter)
* [RavenDB exporter](https://github.com/marcinbudny/ravendb_exporter)
* [Redis exporter](https://github.com/oliver006/redis_exporter)
* [RethinkDB exporter](https://github.com/oliver006/rethinkdb_exporter)
* [SQL exporter](https://github.com/burningalchemist/sql_exporter)
* [Tarantool metric library](https://github.com/tarantool/metrics)
* [Twemproxy](https://github.com/stuartnelson3/twemproxy_exporter)
### Hardware related
* [apcupsd exporter](https://github.com/mdlayher/apcupsd_exporter)
* [BIG-IP exporter](https://github.com/ExpressenAB/bigip_exporter)
* [Bosch Sensortec BMP/BME exporter](https://github.com/David-Igou/bsbmp-exporter)
* [Collins exporter](https://github.com/soundcloud/collins_exporter)
* [Dell Hardware OMSA exporter](https://github.com/galexrt/dellhw_exporter)
* [Disk usage exporter](https://github.com/dundee/disk_usage_exporter)
* [Fortigate exporter](https://github.com/bluecmd/fortigate_exporter)
* [IBM Z HMC exporter](https://github.com/zhmcclient/zhmc-prometheus-exporter)
* [IoT Edison exporter](https://github.com/roman-vynar/edison_exporter)
* [InfiniBand exporter](https://github.com/treydock/infiniband_exporter)
* [IPMI exporter](https://github.com/soundcloud/ipmi_exporter)
* [knxd exporter](https://github.com/RichiH/knxd_exporter)
* [Modbus exporter](https://github.com/RichiH/modbus_exporter)
* [Netgear Cable Modem Exporter](https://github.com/ickymettle/netgear_cm_exporter)
* [Netgear Router exporter](https://github.com/DRuggeri/netgear_exporter)
* [Network UPS Tools (NUT) exporter](https://github.com/DRuggeri/nut_exporter)
* [Node/system metrics exporter](https://github.com/prometheus/node_exporter) (**official**)
* [NVIDIA GPU exporter](https://github.com/mindprince/nvidia_gpu_prometheus_exporter)
* [ProSAFE exporter](https://github.com/dalance/prosafe_exporter)
* [SmartRAID exporter](https://gitlab.com/calestyo/prometheus-smartraid-exporter)
* [Waveplus Radon Sensor Exporter](https://github.com/jeremybz/waveplus_exporter)
* [Weathergoose Climate Monitor Exporter](https://github.com/branttaylor/watchdog-prometheus-exporter)
* [Windows exporter](https://github.com/prometheus-community/windows_exporter)
* [Intel® Optane™ Persistent Memory Controller Exporter](https://github.com/intel/ipmctl-exporter)
### Issue trackers and continuous integration
* [Bamboo exporter](https://github.com/AndreyVMarkelov/bamboo-prometheus-exporter)
* [Bitbucket exporter](https://github.com/AndreyVMarkelov/prom-bitbucket-exporter)
* [Confluence exporter](https://github.com/AndreyVMarkelov/prom-confluence-exporter)
* [Jenkins exporter](https://github.com/lovoo/jenkins_exporter)
* [JIRA exporter](https://github.com/AndreyVMarkelov/jira-prometheus-exporter)
### Messaging systems
* [Beanstalkd exporter](https://github.com/messagebird/beanstalkd_exporter)
* [EMQ exporter](https://github.com/nuvo/emq_exporter)
* [Gearman exporter](https://github.com/bakins/gearman-exporter)
* [IBM MQ exporter](https://github.com/ibm-messaging/mq-metric-samples/tree/master/cmd/mq_prometheus)
* [Kafka exporter](https://github.com/danielqsj/kafka_exporter)
* [NATS exporter](https://github.com/nats-io/prometheus-nats-exporter)
* [NSQ exporter](https://github.com/lovoo/nsq_exporter)
* [Mirth Connect exporter](https://github.com/vynca/mirth_exporter)
* [MQTT blackbox exporter](https://github.com/inovex/mqtt_blackbox_exporter)
* [MQTT2Prometheus](https://github.com/hikhvar/mqtt2prometheus)
* [RabbitMQ exporter](https://github.com/kbudde/rabbitmq_exporter)
* [RabbitMQ Management Plugin exporter](https://github.com/deadtrickster/prometheus_rabbitmq_exporter)
* [RocketMQ exporter](https://github.com/apache/rocketmq-exporter)
* [Solace exporter](https://github.com/solacecommunity/solace-prometheus-exporter)
### Storage
* [Ceph exporter](https://github.com/digitalocean/ceph_exporter)
* [Ceph RADOSGW exporter](https://github.com/blemmenes/radosgw_usage_exporter)
* [Gluster exporter](https://github.com/ofesseler/gluster_exporter)
* [GPFS exporter](https://github.com/treydock/gpfs_exporter)
* [Hadoop HDFS FSImage exporter](https://github.com/marcelmay/hadoop-hdfs-fsimage-exporter)
* [HPE CSI info metrics provider](https://scod.hpedev.io/csi_driver/metrics.html)
* [HPE storage array exporter](https://hpe-storage.github.io/array-exporter/)
* [Lustre exporter](https://github.com/HewlettPackard/lustre_exporter)
* [NetApp E-Series exporter](https://github.com/treydock/eseries_exporter)
* [Pure Storage exporter](https://github.com/PureStorage-OpenConnect/pure-exporter)
* [ScaleIO exporter](https://github.com/syepes/sio2prom)
* [Tivoli Storage Manager/IBM Spectrum Protect exporter](https://github.com/treydock/tsm_exporter)
### HTTP
* [Apache exporter](https://github.com/Lusitaniae/apache_exporter)
* [HAProxy exporter](https://github.com/prometheus/haproxy_exporter) (**official**)
* [Nginx metric library](https://github.com/knyar/nginx-lua-prometheus)
* [Nginx VTS exporter](https://github.com/sysulq/nginx-vts-exporter)
* [Passenger exporter](https://github.com/stuartnelson3/passenger_exporter)
* [Squid exporter](https://github.com/boynux/squid-exporter)
* [Tinyproxy exporter](https://github.com/gmm42/tinyproxy_exporter)
* [Varnish exporter](https://github.com/jonnenauha/prometheus_varnish_exporter)
* [WebDriver exporter](https://github.com/mattbostock/webdriver_exporter)
### APIs
* [AWS ECS exporter](https://github.com/slok/ecs-exporter)
* [AWS Health exporter](https://github.com/Jimdo/aws-health-exporter)
* [AWS SQS exporter](https://github.com/jmal98/sqs_exporter)
* [Azure Health exporter](https://github.com/FXinnovation/azure-health-exporter)
* [BigBlueButton](https://github.com/greenstatic/bigbluebutton-exporter)
* [Cloudflare exporter](https://gitlab.com/gitlab-org/cloudflare_exporter)
* [Cryptowat exporter](https://github.com/nbarrientos/cryptowat_exporter)
* [DigitalOcean exporter](https://github.com/metalmatze/digitalocean_exporter)
* [Docker Cloud exporter](https://github.com/infinityworks/docker-cloud-exporter)
* [Docker Hub exporter](https://github.com/infinityworks/docker-hub-exporter)
* [Fastly exporter](https://github.com/peterbourgon/fastly-exporter)
* [GitHub exporter](https://github.com/githubexporter/github-exporter)
* [Gmail exporter](https://github.com/jamesread/prometheus-gmail-exporter/)
* [GraphQL exporter](https://github.com/ricardbejarano/graphql_exporter)
* [InstaClustr exporter](https://github.com/fcgravalos/instaclustr_exporter)
* [Mozilla Observatory exporter](https://github.com/Jimdo/observatory-exporter)
* [OpenWeatherMap exporter](https://github.com/RichiH/openweathermap_exporter)
* [Pagespeed exporter](https://github.com/foomo/pagespeed_exporter)
* [Rancher exporter](https://github.com/infinityworks/prometheus-rancher-exporter)
* [Speedtest exporter](https://github.com/nlamirault/speedtest_exporter)
* [Tankerkönig API Exporter](https://github.com/lukasmalkmus/tankerkoenig_exporter)
### Logging
* [Fluentd exporter](https://github.com/V3ckt0r/fluentd_exporter)
* [Google's mtail log data extractor](https://github.com/google/mtail)
* [Grok exporter](https://github.com/fstab/grok_exporter)
### FinOps
* [AWS Cost Exporter](https://github.com/electrolux-oss/aws-cost-exporter)
* [Azure Cost Exporter](https://github.com/electrolux-oss/azure-cost-exporter)
* [Kubernetes Cost Exporter](https://github.com/electrolux-oss/kubernetes-cost-exporter)
### Other monitoring systems
* [Akamai Cloudmonitor exporter](https://github.com/ExpressenAB/cloudmonitor_exporter)
* [Alibaba Cloudmonitor exporter](https://github.com/aylei/aliyun-exporter)
* [AWS CloudWatch exporter](https://github.com/prometheus/cloudwatch_exporter) (**official**)
* [Azure Monitor exporter](https://github.com/RobustPerception/azure_metrics_exporter)
* [Cloud Foundry Firehose exporter](https://github.com/cloudfoundry-community/firehose_exporter)
* [Collectd exporter](https://github.com/prometheus/collectd_exporter) (**official**)
* [Google Stackdriver exporter](https://github.com/frodenas/stackdriver_exporter)
* [Graphite exporter](https://github.com/prometheus/graphite_exporter) (**official**)
* [Heka dashboard exporter](https://github.com/docker-infra/heka_exporter)
* [Heka exporter](https://github.com/imgix/heka_exporter)
* [Huawei Cloudeye exporter](https://github.com/huaweicloud/cloudeye-exporter)
* [InfluxDB exporter](https://github.com/prometheus/influxdb_exporter) (**official**)
* [ITM exporter](https://github.com/rafal-szypulka/itm_exporter)
* [Java GC exporter](https://github.com/loyispa/jgc_exporter)
* [JavaMelody exporter](https://github.com/fschlag/javamelody-prometheus-exporter)
* [JMX exporter](https://github.com/prometheus/jmx_exporter) (**official**)
* [Munin exporter](https://github.com/pvdh/munin_exporter)
* [Nagios / Naemon exporter](https://github.com/Griesbacher/Iapetos)
* [Neptune Apex exporter](https://github.com/dl-romero/neptune_exporter)
* [New Relic exporter](https://github.com/mrf/newrelic_exporter)
* [NRPE exporter](https://github.com/robustperception/nrpe_exporter)
* [Osquery exporter](https://github.com/zwopir/osquery_exporter)
* [OTC CloudEye exporter](https://github.com/tiagoReichert/otc-cloudeye-prometheus-exporter)
* [Pingdom exporter](https://github.com/giantswarm/prometheus-pingdom-exporter)
* [Promitor (Azure Monitor)](https://promitor.io)
* [scollector exporter](https://github.com/tgulacsi/prometheus_scollector)
* [Sensu exporter](https://github.com/reachlin/sensu_exporter)
* [site24x7_exporter](https://github.com/svenstaro/site24x7_exporter)
* [SNMP exporter](https://github.com/prometheus/snmp_exporter) (**official**)
* [StatsD exporter](https://github.com/prometheus/statsd_exporter) (**official**)
* [TencentCloud monitor exporter](https://github.com/tencentyun/tencentcloud-exporter)
* [ThousandEyes exporter](https://github.com/sapcc/1000eyes_exporter)
* [StatusPage exporter](https://github.com/sergeyshevch/statuspage-exporter)
### Miscellaneous
* [ACT Fibernet Exporter](https://git.captnemo.in/nemo/prometheus-act-exporter)
* [BIND exporter](https://github.com/prometheus-community/bind_exporter)
* [BIND query exporter](https://github.com/DRuggeri/bind_query_exporter)
* [Bitcoind exporter](https://github.com/LePetitBloc/bitcoind-exporter)
* [Blackbox exporter](https://github.com/prometheus/blackbox_exporter) (**official**)
* [Bungeecord exporter](https://github.com/weihao/bungeecord-prometheus-exporter)
* [BOSH exporter](https://github.com/cloudfoundry-community/bosh_exporter)
* [cAdvisor](https://github.com/google/cadvisor)
* [Cachet exporter](https://github.com/ContaAzul/cachet_exporter)
* [ccache exporter](https://github.com/virtualtam/ccache_exporter)
* [c-lightning exporter](https://github.com/lightningd/plugins/tree/master/prometheus)
* [DHCPD leases exporter](https://github.com/DRuggeri/dhcpd_leases_exporter)
* [Dovecot exporter](https://github.com/kumina/dovecot_exporter)
* [Dnsmasq exporter](https://github.com/google/dnsmasq_exporter)
* [eBPF exporter](https://github.com/cloudflare/ebpf_exporter)
* [Ethereum Client exporter](https://github.com/31z4/ethereum-prometheus-exporter)
* [File statistics exporter](https://github.com/michael-doubez/filestat_exporter)
* [JFrog Artifactory Exporter](https://github.com/peimanja/artifactory_exporter)
* [Hostapd Exporter](https://github.com/Fundacio-i2CAT/hostapd_prometheus_exporter)
* [IBM Security Verify Access / Security Access Manager Exporter](https://gitlab.com/zeblawson/isva-prometheus-exporter)
* [IPsec exporter](https://github.com/torilabs/ipsec-prometheus-exporter)
* [IRCd exporter](https://github.com/dgl/ircd_exporter)
* [Linux HA ClusterLabs exporter](https://github.com/ClusterLabs/ha_cluster_exporter)
* [JMeter plugin](https://github.com/johrstrom/jmeter-prometheus-plugin)
* [JSON exporter](https://github.com/prometheus-community/json_exporter)
* [Kannel exporter](https://github.com/apostvav/kannel_exporter)
* [Kemp LoadBalancer exporter](https://github.com/giantswarm/prometheus-kemp-exporter)
* [Kibana Exporter](https://github.com/pjhampton/kibana-prometheus-exporter)
* [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics)
* [Locust Exporter](https://github.com/ContainerSolutions/locust_exporter)
* [Meteor JS web framework exporter](https://atmospherejs.com/sevki/prometheus-exporter)
* [Minecraft exporter module](https://github.com/Baughn/PrometheusIntegration)
* [Minecraft exporter](https://github.com/dirien/minecraft-prometheus-exporter)
* [Nomad exporter](https://gitlab.com/yakshaving.art/nomad-exporter)
* [nftables exporter](https://github.com/Intrinsec/nftables_exporter)
* [OpenStack exporter](https://github.com/openstack-exporter/openstack-exporter)
* [OpenStack blackbox exporter](https://github.com/infraly/openstack_client_exporter)
* [oVirt exporter](https://github.com/czerwonk/ovirt_exporter)
* [Pact Broker exporter](https://github.com/ContainerSolutions/pactbroker_exporter)
* [PHP-FPM exporter](https://github.com/bakins/php-fpm-exporter)
* [PowerDNS exporter](https://github.com/ledgr/powerdns_exporter)
* [Podman exporter](https://github.com/containers/prometheus-podman-exporter)
* [Prefect2 exporter](https://github.com/pathfinder177/prefect2-prometheus-exporter)
* [Process exporter](https://github.com/ncabatoff/process-exporter)
* [rTorrent exporter](https://github.com/mdlayher/rtorrent_exporter)
* [Rundeck exporter](https://github.com/phsmith/rundeck_exporter)
* [SABnzbd exporter](https://github.com/msroest/sabnzbd_exporter)
* [SAML exporter](https://github.com/DoodleScheduling/saml-exporter)
* [Script exporter](https://github.com/adhocteam/script_exporter)
* [Shield exporter](https://github.com/cloudfoundry-community/shield_exporter)
* [Smokeping prober](https://github.com/SuperQ/smokeping_prober)
* [SMTP/Maildir MDA blackbox prober](https://github.com/cherti/mailexporter)
* [SoftEther exporter](https://github.com/dalance/softether_exporter)
* [SSH exporter](https://github.com/treydock/ssh_exporter)
* [Teamspeak3 exporter](https://github.com/hikhvar/ts3exporter)
* [Transmission exporter](https://github.com/metalmatze/transmission-exporter)
* [Unbound exporter](https://github.com/kumina/unbound_exporter)
* [WireGuard exporter](https://github.com/MindFlavor/prometheus_wireguard_exporter)
* [Xen exporter](https://github.com/lovoo/xenstats_exporter)
When implementing a new Prometheus exporter, please follow the
[guidelines on writing exporters](/docs/instrumenting/writing_exporters).
Please also consider consulting the [development mailing
list](https://groups.google.com/forum/#!forum/prometheus-developers). We are
happy to give advice on how to make your exporter as useful and consistent as
possible.
## Software exposing Prometheus metrics
Some third-party software exposes metrics in the Prometheus format, so no
separate exporters are needed:
* [Ansible Automation Platform Automation Controller (AWX)](https://docs.ansible.com/automation-controller/latest/html/administration/metrics.html)
* [App Connect Enterprise](https://github.com/ot4i/ace-docker)
* [Ballerina](https://ballerina.io/)
* [BFE](https://github.com/baidu/bfe)
* [Caddy](https://caddyserver.com/docs/metrics) (**direct**)
* [Ceph](https://docs.ceph.com/en/latest/mgr/prometheus/)
* [CockroachDB](https://www.cockroachlabs.com/docs/stable/monitoring-and-alerting.html#prometheus-endpoint)
* [Collectd](https://collectd.org/wiki/index.php/Plugin:Write_Prometheus)
* [Concourse](https://concourse-ci.org/)
* [CRG Roller Derby Scoreboard](https://github.com/rollerderby/scoreboard) (**direct**)
* [Diffusion](https://docs.pushtechnology.com/docs/latest/manual/html/administratorguide/systemmanagement/r_statistics.html)
* [Docker Daemon](https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-metrics)
* [Doorman](https://github.com/youtube/doorman) (**direct**)
* [Dovecot](https://doc.dovecot.org/configuration_manual/stats/openmetrics/)
* [Envoy](https://www.envoyproxy.io/docs/envoy/latest/operations/admin.html#get--stats?format=prometheus)
* [Etcd](https://github.com/coreos/etcd) (**direct**)
* [Flink](https://github.com/apache/flink)
* [FreeBSD Kernel](https://www.freebsd.org/cgi/man.cgi?query=prometheus_sysctl_exporter&apropos=0&sektion=8&manpath=FreeBSD+12-current&arch=default&format=html)
* [GitLab](https://docs.gitlab.com/ee/administration/monitoring/prometheus/gitlab_metrics.html)
* [Grafana](https://grafana.com/docs/grafana/latest/administration/view-server/internal-metrics/)
* [JavaMelody](https://github.com/javamelody/javamelody/wiki/UserGuideAdvanced#exposing-metrics-to-prometheus)
* [Kong](https://github.com/Kong/kong-plugin-prometheus)
* [Kubernetes](https://github.com/kubernetes/kubernetes) (**direct**)
* [LavinMQ](https://lavinmq.com/)
* [Linkerd](https://github.com/BuoyantIO/linkerd)
* [mgmt](https://github.com/purpleidea/mgmt/blob/master/docs/prometheus.md)
* [MidoNet](https://github.com/midonet/midonet)
* [midonet-kubernetes](https://github.com/midonet/midonet-kubernetes) (**direct**)
* [MinIO](https://docs.minio.io/docs/how-to-monitor-minio-using-prometheus.html)
* [PATROL with Monitoring Studio X](https://www.sentrysoftware.com/library/swsyx/prometheus/exposing-patrol-parameters-in-prometheus.html)
* [Netdata](https://github.com/firehol/netdata)
* [OpenZiti](https://openziti.github.io)
* [Pomerium](https://pomerium.com/reference/#metrics-address)
* [Pretix](https://pretix.eu/)
* [Quobyte](https://www.quobyte.com/) (**direct**)
* [RabbitMQ](https://rabbitmq.com/prometheus.html)
* [RobustIRC](http://robustirc.net/)
* [ScyllaDB](http://github.com/scylladb/scylla)
* [Skipper](https://github.com/zalando/skipper)
* [SkyDNS](https://github.com/skynetservices/skydns) (**direct**)
* [Telegraf](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/prometheus_client)
* [Traefik](https://github.com/containous/traefik)
* [Vector](https://vector.dev)
* [VerneMQ](https://github.com/vernemq/vernemq)
* [Flux](https://github.com/fluxcd/flux2)
* [Xandikos](https://www.xandikos.org/) (**direct**)
* [Zipkin](https://github.com/openzipkin/zipkin/tree/master/zipkin-server#metrics)
The software marked *direct* is also directly instrumented with a Prometheus client library.
## Other third-party utilities
This section lists libraries and other utilities that help you instrument code
in a certain language. They are not Prometheus client libraries themselves but
make use of one of the normal Prometheus client libraries under the hood. As
for all independently maintained software, we cannot vet all of them for best
practices.
* Clojure: [iapetos](https://github.com/clj-commons/iapetos)
* Go: [go-metrics instrumentation library](https://github.com/armon/go-metrics)
* Go: [gokit](https://github.com/peterbourgon/gokit)
* Go: [prombolt](https://github.com/mdlayher/prombolt)
* Java/JVM: [EclipseLink metrics collector](https://github.com/VitaNuova/eclipselinkexporter)
* Java/JVM: [Hystrix metrics publisher](https://github.com/ahus1/prometheus-hystrix)
* Java/JVM: [Jersey metrics collector](https://github.com/VitaNuova/jerseyexporter)
* Java/JVM: [Micrometer Prometheus Registry](https://micrometer.io/docs/registry/prometheus)
* Python-Django: [django-prometheus](https://github.com/korfuri/django-prometheus)
* Node.js: [swagger-stats](https://github.com/slanatech/swagger-stats)
---
title: Writing client libraries
sort_rank: 2
---
# Writing client libraries
This document covers what functionality and API Prometheus client libraries
should offer, with the aim of consistency across libraries, making the easy use
cases easy and avoiding offering functionality that may lead users down the
wrong path.
There are [10 languages already supported](/docs/instrumenting/clientlibs) at
the time of writing, so we’ve gotten a good sense by now of how to write a
client. These guidelines aim to help authors of new client libraries produce
good libraries.
## Conventions
MUST/MUST NOT/SHOULD/SHOULD NOT/MAY have the meanings given in
[https://www.ietf.org/rfc/rfc2119.txt](https://www.ietf.org/rfc/rfc2119.txt)
In addition ENCOURAGED means that a feature is desirable for a library to have,
but it’s okay if it’s not present. In other words, a nice to have.
Things to keep in mind:
* Take advantage of each language’s features.
* The common use cases should be easy.
* The correct way to do something should be the easy way.
* More complex use cases should be possible.
The common use cases are (in order):
* Counters without labels spread liberally around libraries/applications.
* Timing functions/blocks of code in Summaries/Histograms.
* Gauges to track current states of things (and their limits).
* Monitoring of batch jobs.
## Overall structure
Clients MUST be written to be callback based internally. Clients SHOULD
generally follow the structure described here.
The key class is the Collector. This has a method (typically called ‘collect’)
that returns zero or more metrics and their samples. Collectors get registered
with a CollectorRegistry. Data is exposed by passing a CollectorRegistry to a
class/method/function "bridge", which returns the metrics in a format
Prometheus supports. Every time the CollectorRegistry is scraped it must
callback to each of the Collectors’ collect method.
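As a rough sketch of how these pieces fit together (all names here are illustrative only, not a required API, and a real client additionally needs thread safety, proper metric objects, label handling, and so on):

```python
# Deliberately simplified sketch of the callback-based structure described above.

class CollectorRegistry:
    def __init__(self):
        self._collectors = []

    def register(self, collector):
        self._collectors.append(collector)

    def unregister(self, collector):
        self._collectors.remove(collector)

    def collect(self):
        # Called on every scrape; calls back into each Collector's collect().
        for collector in self._collectors:
            yield from collector.collect()


class UpCollector:
    """A Collector returns zero or more metrics with their samples."""
    def collect(self):
        # (name, help text, value) stands in for a real metric object.
        yield ("up", "Whether the application is up.", 1.0)


def text_bridge(registry):
    """A "bridge" renders a registry into a format Prometheus supports."""
    lines = []
    for name, help_text, value in registry.collect():
        lines.append(f"# HELP {name} {help_text}")
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"


registry = CollectorRegistry()
registry.register(UpCollector())
print(text_bridge(registry))
```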
The interfaces most users interact with are the Counter, Gauge, Summary, and
Histogram Collectors. These represent a single metric, and should cover the
vast majority of use cases where a user is instrumenting their own code.
More advanced use cases (such as proxying from another
monitoring/instrumentation system) require writing a custom Collector. Someone
may also want to write a "bridge" that takes a CollectorRegistry and produces
data in a format a different monitoring/instrumentation system understands,
allowing users to only have to think about one instrumentation system.
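For example, in the existing Python client a custom Collector is roughly any object with a `collect()` method that yields metric families; a minimal sketch of proxying from another system, with the proxied value faked by a stub:

```python
from prometheus_client import REGISTRY
from prometheus_client.core import GaugeMetricFamily


def fetch_queue_length():
    """Stand-in for a call to the other monitoring system's API."""
    return 7.0


class OtherSystemCollector:
    """Proxies a value from another system on every scrape."""

    def collect(self):
        yield GaugeMetricFamily(
            'other_system_queue_length',
            'Queue length proxied from another monitoring system.',
            value=fetch_queue_length())


REGISTRY.register(OtherSystemCollector())
```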
CollectorRegistry SHOULD offer `register()`/`unregister()` functions, and a
Collector SHOULD be allowed to be registered to multiple CollectorRegistries.
Client libraries MUST be thread safe.
For non-OO languages such as C, client libraries should follow the spirit of
this structure as much as is practical.
### Naming
Client libraries SHOULD follow function/method/class names mentioned in this
document, keeping in mind the naming conventions of the language they’re
working in. For example, `set_to_current_time()` is good for a method name in
Python, but `SetToCurrentTime()` is better in Go and `setToCurrentTime()` is
the convention in Java. Where names differ for technical reasons (e.g. not
allowing function overloading), documentation/help strings SHOULD point users
towards the other names.
Libraries MUST NOT offer functions/methods/classes with the same or similar
names to ones given here, but with different semantics.
## Metrics
The Counter, Gauge, Summary and Histogram [metric
types](/docs/concepts/metric_types/) are the primary interface used by users.
Counter and Gauge MUST be part of the client library. At least one of Summary
and Histogram MUST be offered.
These should be primarily used as file-static variables, that is, global
variables defined in the same file as the code they’re instrumenting. The
client library SHOULD enable this. The common use case is instrumenting a piece
of code overall, not a piece of code in the context of one instance of an
object. Users shouldn’t have to worry about plumbing their metrics throughout
their code; the client library should do that for them (and if it doesn’t,
users will write a wrapper around the library to make it "easier", which
rarely tends to go well).
There MUST be a default CollectorRegistry; the standard metrics MUST by default
implicitly register into it with no special work required by the user. There
MUST be a way to have metrics not register to the default CollectorRegistry,
for use in batch jobs and unittests. Custom collectors SHOULD also follow this.
Exactly how the metrics should be created varies by language. For some (Java,
Go) a builder approach is best, whereas for others (Python) function arguments
are rich enough to do it in one call.
For example in the Java Simpleclient we have:
```java
class YourClass {
static final Counter requests = Counter.build()
.name("requests_total")
.help("Requests.").register();
}
```
This will register `requests` with the default CollectorRegistry. By calling
`build()` rather than `register()` the metric won’t be registered (handy for
unittests); you can also pass in a CollectorRegistry to `register()` (handy for
batch jobs).
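For comparison, the existing Python client expresses the same choices through function arguments; roughly:

```python
from prometheus_client import Counter, CollectorRegistry

# Implicitly registers with the default CollectorRegistry.
REQUESTS = Counter('requests_total', 'Requests.')

# Passing an explicit registry keeps the metric out of the default one
# (handy for unit tests and batch jobs).
registry = CollectorRegistry()
BATCH_RUNS = Counter('batch_runs_total', 'Batch runs.', registry=registry)
```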
### Counter
[Counter](/docs/concepts/metric_types/#counter) is a monotonically increasing
counter. It MUST NOT allow the value to decrease; however, it MAY be reset to 0
(such as by server restart).
A counter MUST have the following methods:
* `inc()`: Increment the counter by 1
* `inc(double v)`: Increment the counter by the given amount. MUST check that v >= 0.
A counter is ENCOURAGED to have:
A way to count exceptions thrown/raised in a given piece of code, and optionally
only certain types of exceptions. This is `count_exceptions` in Python.
Counters MUST start at 0.
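For illustration, this is how these methods look in the existing Python client (`do_work` is a made-up stand-in for the code being instrumented):

```python
from prometheus_client import Counter

FAILURES = Counter('failures_total', 'Total failures.')

def do_work():
    pass  # stand-in for the code being instrumented

FAILURES.inc()    # increment by 1
FAILURES.inc(3)   # increment by a given non-negative amount

# ENCOURAGED: count exceptions raised in a block, optionally only some types.
with FAILURES.count_exceptions(ValueError):
    do_work()
```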
### Gauge
[Gauge](/docs/concepts/metric_types/#gauge) represents a value that can go up
and down.
A gauge MUST have the following methods:
* `inc()`: Increment the gauge by 1
* `inc(double v)`: Increment the gauge by the given amount
* `dec()`: Decrement the gauge by 1
* `dec(double v)`: Decrement the gauge by the given amount
* `set(double v)`: Set the gauge to the given value
Gauges MUST start at 0; you MAY offer a way for a given gauge to start at a
different number.
A gauge SHOULD have the following methods:
* `set_to_current_time()`: Set the gauge to the current unixtime in seconds.
A gauge is ENCOURAGED to have:
A way to track in-progress requests in some piece of code/function. This is
`track_inprogress` in Python.
A way to time a piece of code and set the gauge to its duration in seconds.
This is useful for batch jobs. This is `startTimer`/`setDuration` in Java and the
`time()` decorator/context manager in Python. This SHOULD match the pattern in
Summary/Histogram (though `set()` rather than `observe()`).
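For illustration, a sketch of these in the existing Python client (the metric names are arbitrary):

```python
import time
from prometheus_client import Gauge

IN_PROGRESS = Gauge('inprogress_requests', 'Requests currently in progress.')
LAST_RUN = Gauge('last_run_time_seconds', 'Unixtime the batch job last ran.')
LAST_DURATION = Gauge('last_run_duration_seconds', 'How long the last run took.')

IN_PROGRESS.inc()               # also .dec() and .set(42)
LAST_RUN.set_to_current_time()  # the SHOULD-have convenience method

with IN_PROGRESS.track_inprogress():  # ENCOURAGED: in-progress tracking
    with LAST_DURATION.time():        # ENCOURAGED: set the gauge to a duration
        time.sleep(0.1)               # stand-in for the batch work
```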
### Summary
A [summary](/docs/concepts/metric_types/#summary) samples observations (usually
things like request durations) over sliding windows of time and provides
instantaneous insight into their distributions, frequencies, and sums.
A summary MUST NOT allow the user to set "quantile" as a label name, as this is
used internally to designate summary quantiles. A summary is ENCOURAGED to
offer quantiles as exports, though these can’t be aggregated and tend to be
slow. A summary MUST allow not having quantiles, as just `_count`/`_sum` is
quite useful and this MUST be the default.
A summary MUST have the following methods:
* `observe(double v)`: Observe the given amount
A summary SHOULD have the following methods:
Some way to time code for users in seconds. In Python this is the `time()`
decorator/context manager. In Java this is `startTimer`/`observeDuration`. Units
other than seconds MUST NOT be offered (if a user wants something else, they
can do it by hand). This should follow the same pattern as Gauge/Histogram.
Summary `_count`/`_sum` MUST start at 0.
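For example, a sketch of these methods in the existing Python client:

```python
from prometheus_client import Summary

LATENCY = Summary('request_latency_seconds', 'Request latency.')

LATENCY.observe(0.27)  # observe a value directly, in seconds

@LATENCY.time()        # the SHOULD-have way to time code for users, in seconds
def handle_request():
    pass  # stand-in for the code being timed
```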
### Histogram
[Histograms](/docs/concepts/metric_types/#histogram) allow aggregatable
distributions of events, such as request latencies. This is at its core a
counter per bucket.
A histogram MUST NOT allow `le` as a user-set label, as `le` is used internally
to designate buckets.
A histogram MUST offer a way to manually choose the buckets. Ways to set
buckets in a `linear(start, width, count)` and `exponential(start, factor,
count)` fashion SHOULD be offered. Count MUST include the `+Inf` bucket.
A histogram SHOULD have the same default buckets as other client libraries.
Buckets MUST NOT be changeable once the metric is created.
A histogram MUST have the following methods:
* `observe(double v)`: Observe the given amount
A histogram SHOULD have the following methods:
Some way to time code for users in seconds. In Python this is the `time()`
decorator/context manager. In Java this is `startTimer`/`observeDuration`.
Units other than seconds MUST NOT be offered (if a user wants something else,
they can do it by hand). This should follow the same pattern as Gauge/Summary.
Histogram `_count`/`_sum` and the buckets MUST start at 0.
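For example, a sketch with the existing Python client (the bucket boundaries below are arbitrary):

```python
from prometheus_client import Histogram

# Manually chosen buckets; the Python client appends the +Inf bucket itself.
LATENCY = Histogram('request_duration_seconds', 'Request duration.',
                    buckets=(0.005, 0.01, 0.05, 0.1, 0.5, 1, 5))

LATENCY.observe(0.42)  # observe a value in seconds

with LATENCY.time():   # the SHOULD-have way to time code for users, in seconds
    pass               # stand-in for the code being timed
```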
### Further metrics considerations
Providing additional functionality in metrics beyond what’s documented above, as
makes sense for a given language, is ENCOURAGED.
If there’s a common use case you can make simpler then go for it, as long as it
won’t encourage undesirable behaviours (such as suboptimal metric/label
layouts, or doing computation in the client).
### Labels
Labels are one of the [most powerful
aspects](/docs/practices/instrumentation/#use-labels) of Prometheus, but
[easily abused](/docs/practices/instrumentation/#do-not-overuse-labels).
Accordingly client libraries must be very careful in how labels are offered to
users.
Client libraries MUST NOT allow users to have different
label names for the same metric for Gauge/Counter/Summary/Histogram or any
other Collector offered by the library.
Metrics from custom collectors should almost always have consistent label
names. As there are still rare but valid use cases where this is not the case,
client libraries should not verify this.
While labels are powerful, the majority of metrics will not have labels.
Accordingly the API should allow for labels but not dominate it.
A client library MUST allow for optionally specifying a list of label names at
Gauge/Counter/Summary/Histogram creation time. A client library SHOULD support
any number of label names. A client library MUST validate that label names meet
the [documented
requirements](/docs/concepts/data_model/#metric-names-and-labels).
The general way to provide access to the labeled dimensions of a metric is via a
`labels()` method that takes either a list of the label values or a map from
label name to label value and returns a "Child". The usual
`.inc()`/`.dec()`/`.observe()` etc. methods can then be called on the Child.
The Child returned by `labels()` SHOULD be cacheable by the user, to avoid
having to look it up again - this matters in latency-critical code.
Metrics with labels SHOULD support a `remove()` method with the same signature
as `labels()` that will remove a Child from the metric, no longer exporting it,
and a `clear()` method that removes all Children from the metric. These
invalidate caching of Children.
There SHOULD be a way to initialize a given Child with the default value,
usually just calling `labels()`. Metrics without labels MUST always be
initialized to avoid [problems with missing
metrics](/docs/practices/instrumentation/#avoid-missing-metrics).
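Putting the labels API together, a sketch of how this looks in the existing Python client (the metric and label names are arbitrary):

```python
from prometheus_client import Counter

REQUESTS = Counter('http_requests_total', 'Total HTTP requests.',
                   ['method', 'path'])

# labels() returns a Child; cache it where latency matters.
GET_ROOT = REQUESTS.labels(method='GET', path='/')
GET_ROOT.inc()

REQUESTS.labels('POST', '/login').inc()  # positional label values also work

REQUESTS.remove('POST', '/login')  # stop exporting this Child
REQUESTS.clear()                   # remove all Children
```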
### Metric names
Metric names must follow the
[specification](/docs/concepts/data_model/#metric-names-and-labels). As with
label names, this MUST be met for uses of Gauge/Counter/Summary/Histogram and
in any other Collector offered with the library.
Many client libraries offer setting the name in three parts:
`namespace_subsystem_name` of which only the `name` is mandatory.
Dynamic/generated metric names or subparts of metric names MUST be discouraged,
except when a custom Collector is proxying from other
instrumentation/monitoring systems. Generated/dynamic metric names are a sign
that you should be using labels instead.
### Metric description and help
Gauge/Counter/Summary/Histogram MUST require metric descriptions/help to be
provided.
Any custom Collectors provided with the client libraries MUST have
descriptions/help on their metrics.
It is suggested to make it a mandatory argument, but not to check that it’s of
a certain length: if someone really doesn’t want to write docs, we’re not
going to convince them otherwise. Collectors offered with the library (and
indeed everywhere we can within the ecosystem) SHOULD have good metric
descriptions, to lead by example.
## Exposition
Clients MUST implement the text-based exposition format outlined in the
[exposition formats](/docs/instrumenting/exposition_formats) documentation.
Reproducible order of the exposed metrics is ENCOURAGED (especially for human
readable formats) if it can be implemented without a significant resource cost.
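For instance, this is roughly what exposition looks like from the user's side in the existing Python client, which offers both a text-format bridge and a simple HTTP endpoint:

```python
from prometheus_client import Counter, REGISTRY, generate_latest, start_http_server

REQUESTS = Counter('requests_total', 'Total requests.')
REQUESTS.inc()

# Render the default registry in the text exposition format...
print(generate_latest(REGISTRY).decode())

# ...or serve it over HTTP for Prometheus to scrape.
start_http_server(8000)
```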
## Standard and runtime collectors
Client libraries SHOULD offer what they can of the Standard exports, documented
below.
These SHOULD be implemented as custom Collectors, and registered by default on
the default CollectorRegistry. There SHOULD be a way to disable these, as there
are some very niche use cases where they get in the way.
### Process metrics
These metrics have the prefix `process_`. If obtaining a necessary value is
problematic or even impossible with the language or runtime in use, client
libraries SHOULD prefer leaving out the corresponding metric over exporting
bogus, inaccurate, or special values (like `NaN`). All memory values in bytes,
all times in unixtime/seconds.
| Metric name | Help string | Unit |
| ---------------------------------- | ------------------------------------------------------ | --------------- |
| `process_cpu_seconds_total` | Total user and system CPU time spent in seconds. | seconds |
| `process_open_fds` | Number of open file descriptors. | file descriptors |
| `process_max_fds` | Maximum number of open file descriptors. | file descriptors |
| `process_virtual_memory_bytes` | Virtual memory size in bytes. | bytes |
| `process_virtual_memory_max_bytes` | Maximum amount of virtual memory available in bytes. | bytes |
| `process_resident_memory_bytes` | Resident memory size in bytes. | bytes |
| `process_heap_bytes` | Process heap size in bytes. | bytes |
| `process_start_time_seconds` | Start time of the process since unix epoch in seconds. | seconds |
| `process_threads` | Number of OS threads in the process. | threads |
### Runtime metrics
In addition, client libraries are ENCOURAGED to also offer whatever makes sense
in terms of metrics for their language’s runtime (e.g. garbage collection
stats), with an appropriate prefix such as `go_`, `hotspot_` etc.
## Unit tests
Client libraries SHOULD have unit tests covering the core instrumentation
library and exposition.
Client libraries are ENCOURAGED to offer ways that make it easy for users to
unit-test their use of the instrumentation code. For example, the
`CollectorRegistry.get_sample_value` in Python.
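
A sketch of such a unit test with the Python client; the metric and the
`process()` function under test are invented for the example:

```python
from prometheus_client import REGISTRY, Counter

FAILURES = Counter('myapp_failures_total', 'Records that failed processing.', ['type'])

def process(record):
    if not record:
        FAILURES.labels('empty').inc()

def test_empty_record_is_counted():
    before = REGISTRY.get_sample_value('myapp_failures_total', {'type': 'empty'}) or 0.0
    process(None)
    after = REGISTRY.get_sample_value('myapp_failures_total', {'type': 'empty'})
    assert after - before == 1
```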
## Packaging and dependencies
Ideally, a client library can be included in any application to add some
instrumentation without breaking the application.
Accordingly, caution is advised when adding dependencies to the client library.
For example, if you add a library that uses a Prometheus client that requires
version x.y of a library but the application uses x.z elsewhere, will that have
an adverse impact on the application?
It is suggested that where this may arise, that the core instrumentation is
separated from the bridges/exposition of metrics in a given format. For
example, the Java simpleclient `simpleclient` module has no dependencies, and
the `simpleclient_servlet` has the HTTP bits.
## Performance considerations
As client libraries must be thread-safe, some form of concurrency control is
required and consideration must be given to performance on multi-core machines
and applications.
In our experience, the least performant approach is mutexes.
Processor atomic instructions tend to be in the middle, and generally
acceptable.
Approaches that avoid different CPUs mutating the same bit of RAM work best,
such as the DoubleAdder in Java’s simpleclient. There is a memory cost though.
As noted above, the result of `labels()` should be cacheable. The concurrent
maps that typically back metrics with labels tend to be relatively slow.
Special-casing metrics without labels to avoid `labels()`-like lookups can help
a lot.
Metrics SHOULD avoid blocking when they are being incremented/decremented/set
etc. as it’s undesirable for the whole application to be held up while a scrape
is ongoing.
Having benchmarks of the main instrumentation operations, including labels, is
ENCOURAGED.
Resource consumption, particularly RAM, should be kept in mind when performing
exposition. Consider reducing the memory footprint by streaming results, and
potentially having a limit on the number of concurrent scrapes.
---
title: Writing exporters
sort_rank: 5
---
# Writing exporters
If you are instrumenting your own code, the [general rules of how to
instrument code with a Prometheus client
library](/docs/practices/instrumentation/) should be followed. When
taking metrics from another monitoring or instrumentation system, things
tend not to be so black and white.
This document contains things you should consider when writing an
exporter or custom collector. The theory covered will also be of
interest to those doing direct instrumentation.
If you are writing an exporter and are unclear on anything here, please
contact us on IRC (#prometheus on libera) or the [mailing
list](/community).
## Maintainability and purity
The main decision you need to make when writing an exporter is how much
work you’re willing to put in to get perfect metrics out of it.
If the system in question has only a handful of metrics that rarely
change, then getting everything perfect is an easy choice, a good
example of this is the [HAProxy
exporter](https://github.com/prometheus/haproxy_exporter).
On the other hand, if you try to get things perfect when the system has
hundreds of metrics that change frequently with new versions, then
you’ve signed yourself up for a lot of ongoing work. The [MySQL
exporter](https://github.com/prometheus/mysqld_exporter) is on this end
of the spectrum.
The [node exporter](https://github.com/prometheus/node_exporter) is a
mix of these, with complexity varying by module. For example, the
`mdadm` collector hand-parses a file and exposes metrics created
specifically for that collector, so we may as well get the metrics
right. For the `meminfo` collector the results vary across kernel
versions so we end up doing just enough of a transform to create valid
metrics.
## Configuration
When working with applications, you should aim for an exporter that
requires no custom configuration by the user beyond telling it where the
application is. You may also need to offer the ability to filter out
certain metrics if they may be too granular and expensive on large
setups, for example the [HAProxy
exporter](https://github.com/prometheus/haproxy_exporter) allows
filtering of per-server stats. Similarly, there may be expensive metrics
that are disabled by default.
When working with other monitoring systems, frameworks and protocols you
will often need to provide additional configuration or customization to
generate metrics suitable for Prometheus. In the best case scenario, a
monitoring system has a similar enough data model to Prometheus that you
can automatically determine how to transform metrics. This is the case
for [Cloudwatch](https://github.com/prometheus/cloudwatch_exporter),
[SNMP](https://github.com/prometheus/snmp_exporter) and
[collectd](https://github.com/prometheus/collectd_exporter). At most, we
need the ability to let the user select which metrics they want to pull
out.
In other cases, metrics from the system are completely non-standard,
depending on the usage of the system and the underlying application. In
that case the user has to tell us how to transform the metrics. The [JMX
exporter](https://github.com/prometheus/jmx_exporter) is the worst
offender here, with the
[Graphite](https://github.com/prometheus/graphite_exporter) and
[StatsD](https://github.com/prometheus/statsd_exporter) exporters also
requiring configuration to extract labels.
Ensuring the exporter works out of the box without configuration, and
providing a selection of example configurations for transformation if
required, is advised.
YAML is the standard Prometheus configuration format, so all configuration
should use YAML by default.
## Metrics
### Naming
Follow the [best practices on metric naming](/docs/practices/naming).
Generally metric names should allow someone who is familiar with
Prometheus but not a particular system to make a good guess as to what a
metric means. A metric named `http_requests_total` is not extremely
useful - are these being measured as they come in, in some filter or
when they get to the user’s code? And `requests_total` is even worse,
what type of requests?
With direct instrumentation, a given metric should exist within exactly
one file. Accordingly, within exporters and collectors, a metric should
apply to exactly one subsystem and be named accordingly.
Metric names should never be procedurally generated, except when writing
a custom collector or exporter.
Metric names for applications should generally be prefixed by the
exporter name, e.g. `haproxy_up`.
Metrics must use base units (e.g. seconds, bytes) and leave converting
them to something more readable to graphing tools. No matter what units
you end up using, the units in the metric name must match the units in
use. Similarly, expose ratios, not percentages. Even better, specify a
counter for each of the two components of the ratio.
Metric names should not include the labels that they’re exported with,
e.g. `by_type`, as that won’t make sense if the label is aggregated
away.
The one exception is when you’re exporting the same data with different
labels via multiple metrics, in which case that’s usually the sanest way
to distinguish them. For direct instrumentation, this should only come
up when exporting a single metric with all the labels would have too
high a cardinality.
Prometheus metrics and label names are written in `snake_case`.
Converting `camelCase` to `snake_case` is desirable, though doing so
automatically doesn’t always produce nice results for things like
`myTCPExample` or `isNaN` so sometimes it’s best to leave them as-is.
Exposed metrics should not contain colons, these are reserved for user
defined recording rules to use when aggregating.
Only `[a-zA-Z0-9:_]` are valid in metric names.
The `_sum`, `_count`, `_bucket` and `_total` suffixes are used by
Summaries, Histograms and Counters. Unless you’re producing one of
those, avoid these suffixes.
`_total` is a convention for counters; you should use it if you’re using
the COUNTER type.
The `process_` and `scrape_` prefixes are reserved. It’s okay to add
your own prefix on to these if they follow matching semantics.
For example, Prometheus has `scrape_duration_seconds` for how long a
scrape took, it's good practice to also have an exporter-centric metric,
e.g. `jmx_scrape_duration_seconds`, saying how long the specific
exporter took to do its thing. For process stats where you have access
to the PID, both Go and Python offer collectors that’ll handle this for
you. A good example of this is the [HAProxy
exporter](https://github.com/prometheus/haproxy_exporter).
When you have a successful request count and a failed request count, the
best way to expose this is as one metric for total requests and another
metric for failed requests. This makes it easy to calculate the failure
ratio. Do not use one metric with a failed or success label. Similarly,
with hit or miss for caches, it’s better to have one metric for total and
another for hits.
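
For example (in Python, with invented names and a hypothetical `do_work()`
call), exposing the pair of counters looks like this:

```python
from prometheus_client import Counter

# One counter for all requests and one for failures, rather than a
# status="success|failure" label. The failure ratio is then simply
# rate(myapp_failures_total[5m]) / rate(myapp_requests_total[5m]) in PromQL.
REQUESTS = Counter('myapp_requests_total', 'Requests processed.')
FAILURES = Counter('myapp_failures_total', 'Requests that failed.')

def handle(request):
    REQUESTS.inc()
    try:
        do_work(request)  # hypothetical application logic
    except Exception:
        FAILURES.inc()
        raise
```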
Consider the likelihood that someone using monitoring will do a code or
web search for the metric name. If the names are very well-established
and unlikely to be used outside of the realm of people used to those
names, for example SNMP and network engineers, then leaving them as-is
may be a good idea. This logic doesn’t apply for all exporters, for
example the MySQL exporter metrics may be used by a variety of people,
not just DBAs. A `HELP` string with the original name can provide most
of the same benefits as using the original names.
### Labels
Read the [general
advice](/docs/practices/instrumentation/#things-to-watch-out-for) on
labels.
Avoid `type` as a label name, it’s too generic and often meaningless.
You should also try where possible to avoid names that are likely to
clash with target labels, such as `region`, `zone`, `cluster`,
`availability_zone`, `az`, `datacenter`, `dc`, `owner`, `customer`,
`stage`, `service`, `environment` and `env`. If, however, that’s what
the application calls some resource, it’s best not to cause confusion by
renaming it.
Avoid the temptation to put things into one metric just because they
share a prefix. Unless you’re sure something makes sense as one metric,
multiple metrics is safer.
The label `le` has special meaning for Histograms, and `quantile` for
Summaries. Avoid these labels generally.
Read/write and send/receive are best as separate metrics, rather than as
a label. This is usually because you care about only one of them at a
time, and it is easier to use them that way.
The rule of thumb is that one metric should make sense when summed or
averaged. There is one other case that comes up with exporters, and
that’s where the data is fundamentally tabular and doing otherwise would
require users to do regexes on metric names to be usable. Consider the
voltage sensors on your motherboard, while doing math across them is
meaningless, it makes sense to have them in one metric rather than
having one metric per sensor. All values within a metric should
(almost) always have the same unit, for example consider if fan speeds
were mixed in with the voltages, and you had no way to automatically
separate them.
Don’t do this:
<pre>
my_metric{label="a"} 1
my_metric{label="b"} 6
<b>my_metric{label="total"} 7</b>
</pre>
or this:
<pre>
my_metric{label="a"} 1
my_metric{label="b"} 6
<b>my_metric{} 7</b>
</pre>
The former breaks for people who do a `sum()` over your metric, and the
latter breaks sum and is quite difficult to work with. Some client
libraries, for example Go, will actively try to stop you doing the
latter in a custom collector, and all client libraries should stop you
from doing the latter with direct instrumentation. Never do either of
these, rely on Prometheus aggregation instead.
If your monitoring exposes a total like this, drop the total. If you
have to keep it around for some reason, for example the total includes
things not counted individually, use different metric names.
Instrumentation labels should be minimal, every extra label is one more
that users need to consider when writing their PromQL. Accordingly,
avoid having instrumentation labels which could be removed without
affecting the uniqueness of the time series. Additional information
around a metric can be added via an info metric, for an example see
below how to handle version numbers.
However, there are cases where it is expected that virtually all users of
a metric will want the additional information. If so, adding a
non-unique label, rather than an info metric, is the right solution. For
example the
[mysqld_exporter](https://github.com/prometheus/mysqld_exporter)'s
`mysqld_perf_schema_events_statements_total`'s `digest` label is a hash
of the full query pattern and is sufficient for uniqueness. However, it
is of little use without the human readable `digest_text` label, which
for long queries will contain only the start of the query pattern and is
thus not unique. Thus we end up with both the `digest_text` label for
humans and the `digest` label for uniqueness.
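
For illustration, an info metric in the Python client might look like the
following (the names and values are invented):

```python
from prometheus_client import Info

# Exported as: myapp_build_info{version="1.2.3",revision="abc123"} 1
BUILD_INFO = Info('myapp_build', 'Build information about this binary.')
BUILD_INFO.info({'version': '1.2.3', 'revision': 'abc123'})
```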
### Target labels, not static scraped labels
If you ever find yourself wanting to apply the same label to all of your
metrics, stop.
There are generally two cases where this comes up.
The first is for some label it would be useful to have on the metrics
such as the version number of the software. Instead, use the approach
described at
[https://www.robustperception.io/how-to-have-labels-for-machine-roles/](https://www.robustperception.io/how-to-have-labels-for-machine-roles/).
The second case is when a label is really a target label. These are
things like region, cluster names, and so on, that come from your
infrastructure setup rather than the application itself. It’s not for an
application to say where it fits in your label taxonomy, that’s for the
person running the Prometheus server to configure and different people
monitoring the same application may give it different names.
Accordingly, these labels belong up in the scrape configs of Prometheus
via whatever service discovery you’re using. It’s okay to apply the
concept of machine roles here as well, as it’s likely useful information
for at least some people scraping it.
### Types
You should try to match up the types of your metrics to Prometheus
types. This usually means counters and gauges. The `_count` and `_sum`
of summaries are also relatively common, and on occasion you’ll see
quantiles. Histograms are rare, if you come across one remember that the
exposition format exposes cumulative values.
Often it won’t be obvious what the type of metric is, especially if
you’re automatically processing a set of metrics. In general `UNTYPED`
is a safe default.
Counters can’t go down, so if you have a counter type coming from
another instrumentation system that can be decremented, for example
Dropwizard metrics, then it's not a counter, it's a gauge. `UNTYPED` is
probably the best type to use there, as `GAUGE` would be misleading if
it were being used as a counter.
### Help strings
When you’re transforming metrics it’s useful for users to be able to
track back to what the original was, and what rules were in play that
caused that transformation. Putting in the name of the
collector or exporter, the ID of any rule that was applied and the
name and details of the original metric into the help string will greatly
aid users.
Prometheus doesn’t like one metric having different help strings. If
you’re making one metric from many others, choose one of them to put in
the help string.
For examples of this, the SNMP exporter uses the OID and the JMX
exporter puts in a sample mBean name. The [HAProxy
exporter](https://github.com/prometheus/haproxy_exporter) has
hand-written strings. The [node
exporter](https://github.com/prometheus/node_exporter) also has a wide
variety of examples.
### Drop less useful statistics
Some instrumentation systems expose 1m, 5m, 15m rates, average rates since
application start (these are called `mean` in Dropwizard metrics for
example) in addition to minimums, maximums and standard deviations.
These should all be dropped, as they’re not very useful and add clutter.
Prometheus can calculate rates itself, and usually more accurately as
the averages exposed are usually exponentially decaying. You don’t know
what time the min or max were calculated over, and the standard deviation
is statistically useless; you can always expose sum of squares,
`_sum` and `_count` if you ever need to calculate it.
Quantiles have related issues, you may choose to drop them or put them
in a Summary.
### Dotted strings
Many monitoring systems don’t have labels, instead doing things like
`my.class.path.mymetric.labelvalue1.labelvalue2.labelvalue3`.
The [Graphite](https://github.com/prometheus/graphite_exporter) and
[StatsD](https://github.com/prometheus/statsd_exporter) exporters share
a way of transforming these with a small configuration language. Other
exporters should implement the same. The transformation is currently
implemented only in Go, and would benefit from being factored out into a
separate library.
## Collectors
When implementing the collector for your exporter, you should never use
the usual direct instrumentation approach and then update the metrics on
each scrape.
Rather create new metrics each time. In Go this is done with
[MustNewConstMetric](https://godoc.org/github.com/prometheus/client_golang/prometheus#MustNewConstMetric)
in your `Collect()` method. For Python see
[https://github.com/prometheus/client_python#custom-collectors](https://prometheus.github.io/client_python/collector/custom/)
and for Java generate a `List<MetricFamilySamples>` in your collect
method, see
[StandardExports.java](https://github.com/prometheus/client_java/blob/master/simpleclient_hotspot/src/main/java/io/prometheus/client/hotspot/StandardExports.java)
for an example.
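
A minimal sketch of this pattern in Python, assuming a hypothetical
`fetch_stats()` call into the system being exported:

```python
from prometheus_client import REGISTRY
from prometheus_client.core import CounterMetricFamily, GaugeMetricFamily

class MyAppCollector:
    """Builds fresh metric families on every scrape instead of mutating globals."""

    def collect(self):
        stats = fetch_stats()  # hypothetical call into the system being exported

        queries = CounterMetricFamily(
            'myapp_queries_total', 'Queries served.', labels=['handler'])
        for handler, count in stats['queries'].items():
            queries.add_metric([handler], count)
        yield queries

        yield GaugeMetricFamily(
            'myapp_queue_length', 'Requests currently queued.',
            value=stats['queue_length'])

REGISTRY.register(MyAppCollector())
```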
The reason for this is two-fold. Firstly, two scrapes could happen at
the same time, and direct instrumentation uses what are effectively
file-level global variables, so you’ll get race conditions. Secondly, if
a label value disappears, it’ll still be exported.
Instrumenting your exporter itself via direct instrumentation is fine,
e.g. total bytes transferred or calls performed by the exporter across
all scrapes. For exporters such as the [blackbox
exporter](https://github.com/prometheus/blackbox_exporter) and [SNMP
exporter](https://github.com/prometheus/snmp_exporter), which aren’t
tied to a single target, these should only be exposed on a vanilla
`/metrics` call, not on a scrape of a particular target.
### Metrics about the scrape itself
Sometimes you’d like to export metrics that are about the scrape, like
how long it took or how many records you processed.
These should be exposed as gauges as they’re about an event, the scrape,
and the metric name prefixed by the exporter name, for example
`jmx_scrape_duration_seconds`. Usually the `_exporter` is excluded and
if the exporter also makes sense to use as just a collector, then
definitely exclude it.
Other scrape "meta" metrics should be avoided. For example, a counter for
the number of scrapes, or a histogram of the scrape duration. Having the
exporter track these metrics duplicates the [automatically generated
metrics](/docs/concepts/jobs_instances/#automatically-generated-labels-and-time-series)
of Prometheus itself. This adds to the storage cost of every exporter instance.
### Machine and process metrics
Many systems, for example Elasticsearch, expose machine metrics such as
CPU, memory and filesystem information. As the [node
exporter](https://github.com/prometheus/node_exporter) provides these in
the Prometheus ecosystem, such metrics should be dropped.
In the Java world, many instrumentation frameworks expose process-level
and JVM-level stats such as CPU and GC. The Java client and JMX exporter
already include these in the preferred form via
[DefaultExports.java](https://github.com/prometheus/client_java/blob/master/simpleclient_hotspot/src/main/java/io/prometheus/client/hotspot/DefaultExports.java),
so these should also be dropped.
Similarly with other languages and frameworks.
## Deployment
Each exporter should monitor exactly one application instance,
preferably sitting right beside it on the same machine. That means for
every HAProxy you run, you run a `haproxy_exporter` process. For every
machine with a Mesos worker, you run the [Mesos
exporter](https://github.com/mesosphere/mesos_exporter) on it, and
another one for the master, if a machine has both.
The theory behind this is that for direct instrumentation this is what
you’d be doing, and we’re trying to get as close to that as we can in
other layouts. This means that all service discovery is done in
Prometheus, not in exporters. This also has the benefit that Prometheus
has the target information it needs to allow users to probe your service
with the [blackbox
exporter](https://github.com/prometheus/blackbox_exporter).
There are two exceptions:
The first is where running beside the application you are monitoring is
completely nonsensical. The SNMP, blackbox and IPMI exporters are the
main examples of this. The IPMI and SNMP exporters qualify because the devices are
often black boxes that it’s impossible to run code on (though if you
could run a node exporter on them instead that’d be better), and the
blackbox exporter qualifies because you’re monitoring something like a DNS name,
where there’s also nothing to run on. In this case, Prometheus should
still do service discovery, and pass on the target to be scraped. See
the blackbox and SNMP exporters for examples.
Note that it is only currently possible to write this type of exporter
with the Go, Python and Java client libraries.
The second exception is where you’re pulling some stats out of a random
instance of a system and don’t care which one you’re talking to.
Consider a set of MySQL replicas you want to run some business queries
against, with the results then exported. Having an exporter that uses your usual
load balancing approach to talk to one replica is the sanest approach.
This doesn’t apply when you’re monitoring a system with master-election,
in that case you should monitor each instance individually and deal with
the "masterness" in Prometheus. This is because there isn’t always exactly
one master, and changing what a target is underneath Prometheus’s feet
will cause oddities.
### Scheduling
Metrics should only be pulled from the application when Prometheus
scrapes them, exporters should not perform scrapes based on their own
timers. That is, all scrapes should be synchronous.
Accordingly, you should not set timestamps on the metrics you expose, let
Prometheus take care of that. If you think you need timestamps, then you
probably need the
[Pushgateway](https://prometheus.io/docs/instrumenting/pushing/)
instead.
If a metric is particularly expensive to retrieve, i.e. takes more than
a minute, it is acceptable to cache it. This should be noted in the
`HELP` string.
The default scrape timeout for Prometheus is 10 seconds. If your
exporter can be expected to exceed this, you should explicitly call this
out in your user documentation.
### Pushes
Some applications and monitoring systems only push metrics, for example
StatsD, Graphite and collectd.
There are two considerations here.
Firstly, when do you expire metrics? Collectd and things talking to
Graphite both export regularly, and when they stop we want to stop
exposing the metrics. Collectd includes an expiry time so we use that;
Graphite doesn’t, so expiry is a flag on the exporter.
StatsD is a bit different, as it is dealing with events rather than
metrics. The best model is to run one exporter beside each application
and restart them when the application restarts so that the state is
cleared.
Secondly, these sorts of systems tend to allow your users to send either
deltas or raw counters. You should rely on the raw counters as far as
possible, as that’s the general Prometheus model.
For service-level metrics, e.g. service-level batch jobs, you should
have your exporter push into the Pushgateway and exit after the event
rather than handling the state yourself. For instance-level batch
metrics, there is no clear pattern yet. The options are either to abuse
the node exporter’s textfile collector, rely on in-memory state
(probably best if you don’t need to persist over a reboot) or implement
similar functionality to the textfile collector.
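
A sketch of the service-level case with the Python client; the job name,
Pushgateway address, and `do_batch_work()` are placeholders:

```python
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

def run_batch_job():
    registry = CollectorRegistry()
    duration = Gauge('mybatch_duration_seconds',
                     'How long the batch job took.', registry=registry)
    last_success = Gauge('mybatch_last_success_timestamp_seconds',
                         'Unixtime the batch job last succeeded.', registry=registry)
    with duration.time():
        do_batch_work()  # hypothetical job logic
    last_success.set_to_current_time()
    push_to_gateway('pushgateway.example.org:9091', job='mybatch', registry=registry)
```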
### Failed scrapes
There are currently two patterns for failed scrapes where the
application you’re talking to doesn’t respond or has other problems.
The first is to return a 5xx error.
The second is to have a `myexporter_up`, e.g. `haproxy_up`, variable
that has a value of 0 or 1 depending on whether the scrape worked.
The latter is better where there’s still some useful metrics you can get
even with a failed scrape, such as the HAProxy exporter providing
process stats. The former is a tad easier for users to deal with, as
[`up` works in the usual way](/docs/concepts/jobs_instances/#automatically-generated-labels-and-time-series), although you can’t distinguish between the
exporter being down and the application being down.
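
A sketch of the second pattern in a Python custom collector, with a
hypothetical `scrape_application()` call standing in for the real scrape:

```python
from prometheus_client.core import GaugeMetricFamily

class MyExporterCollector:
    def collect(self):
        up = GaugeMetricFamily(
            'myexporter_up', 'Whether the last scrape of the application succeeded.')
        try:
            stats = scrape_application()  # hypothetical scrape of the target
        except Exception:
            up.add_metric([], 0)
            yield up
            return
        up.add_metric([], 1)
        yield up
        # ...yield the metric families built from `stats` here...
```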
### Landing page
It’s nicer for users if visiting `http://yourexporter/` has a simple
HTML page with the name of the exporter, and a link to the `/metrics`
page.
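
A minimal sketch of such a landing page using the Python client’s WSGI app;
the port and HTML are arbitrary:

```python
from wsgiref.simple_server import make_server
from prometheus_client import make_wsgi_app

metrics_app = make_wsgi_app()

INDEX = (b"<html><body><h1>My Exporter</h1>"
         b'<p><a href="/metrics">Metrics</a></p></body></html>')

def app(environ, start_response):
    # Serve the metrics on /metrics and a simple landing page everywhere else.
    if environ['PATH_INFO'] == '/metrics':
        return metrics_app(environ, start_response)
    start_response('200 OK', [('Content-Type', 'text/html')])
    return [INDEX]

make_server('', 9123, app).serve_forever()
```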
### Port numbers
A user may have many exporters and Prometheus components on the same
machine, so to make that easier each has a unique port number.
[https://github.com/prometheus/prometheus/wiki/Default-port-allocations](https://github.com/prometheus/prometheus/wiki/Default-port-allocations)
is where we track them; this is publicly editable.
Feel free to grab the next free port number when developing your
exporter, preferably before publicly announcing it. If you’re not ready
to release yet, putting your username and WIP is fine.
This is a registry to make our users’ lives a little easier, not a
commitment to develop particular exporters. For exporters for internal
applications we recommend using ports outside of the range of default
port allocations.
## Announcing
Once you’re ready to announce your exporter to the world, email the
mailing list and send a PR to add it to [the list of available
exporters](https://github.com/prometheus/docs/blob/main/content/docs/instrumenting/exporters.md).
after the event rather than handling the state yourself For instance level batch metrics there is no clear pattern yet The options are either to abuse the node exporter s textfile collector rely on in memory state probably best if you don t need to persist over a reboot or implement similar functionality to the textfile collector Failed scrapes There are currently two patterns for failed scrapes where the application you re talking to doesn t respond or has other problems The first is to return a 5xx error The second is to have a myexporter up e g haproxy up variable that has a value of 0 or 1 depending on whether the scrape worked The latter is better where there s still some useful metrics you can get even with a failed scrape such as the HAProxy exporter providing process stats The former is a tad easier for users to deal with as up works in the usual way docs concepts jobs instances automatically generated labels and time series although you can t distinguish between the exporter being down and the application being down Landing page It s nicer for users if visiting http yourexporter has a simple HTML page with the name of the exporter and a link to the metrics page Port numbers A user may have many exporters and Prometheus components on the same machine so to make that easier each has a unique port number https github com prometheus prometheus wiki Default port allocations https github com prometheus prometheus wiki Default port allocations is where we track them this is publicly editable Feel free to grab the next free port number when developing your exporter preferably before publicly announcing it If you re not ready to release yet putting your username and WIP is fine This is a registry to make our users lives a little easier not a commitment to develop particular exporters For exporters for internal applications we recommend using ports outside of the range of default port allocations Announcing Once you re ready to announce your exporter to the world email the mailing list and send a PR to add it to the list of available exporters https github com prometheus docs blob main content docs instrumenting exporters md |
prometheus sortrank 6 title Exposition formats Exposition formats Metrics can be exposed to Prometheus using a simple that implement this format for you If your preferred language doesn t have a client exposition format There are various | ---
title: Exposition formats
sort_rank: 6
---
# Exposition formats
Metrics can be exposed to Prometheus using a simple [text-based](#text-based-format)
exposition format. There are various [client libraries](/docs/instrumenting/clientlibs/)
that implement this format for you. If your preferred language doesn't have a client
library you can [create your own](/docs/instrumenting/writing_clientlibs/).
## Text-based format
As of Prometheus version 2.0, all processes that expose metrics to Prometheus need to use
a text-based format. In this section you can find some [basic information](#basic-info)
about this format as well as a more [detailed breakdown](#text-format-details) of the
format.
### Basic info
| Aspect | Description |
|--------|-------------|
| **Inception** | April 2014 |
| **Supported in** | Prometheus version `>=0.4.0` |
| **Transmission** | HTTP |
| **Encoding** | UTF-8, `\n` line endings |
| **HTTP `Content-Type`** | `text/plain; version=0.0.4` (A missing `version` value will lead to a fall-back to the most recent text format version.) |
| **Optional HTTP `Content-Encoding`** | `gzip` |
| **Advantages** | <ul><li>Human-readable</li><li>Easy to assemble, especially for minimalistic cases (no nesting required)</li><li>Readable line by line (with the exception of type hints and docstrings)</li></ul> |
| **Limitations** | <ul><li>Verbose</li><li>Types and docstrings not integral part of the syntax, meaning little-to-nonexistent metric contract validation</li><li>Parsing cost</li></ul>|
| **Supported metric primitives** | <ul><li>Counter</li><li>Gauge</li><li>Histogram</li><li>Summary</li><li>Untyped</li></ul> |
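For illustration, the protocol-relevant headers of a scrape response in this format might look as follows (whether `gzip` encoding is used depends on what the client requested):

```
HTTP/1.1 200 OK
Content-Type: text/plain; version=0.0.4
Content-Encoding: gzip
```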
### Text format details
Prometheus' text-based format is line oriented. Lines are separated by a line
feed character (`\n`). The last line must end with a line feed character.
Empty lines are ignored.
#### Line format
Within a line, tokens can be separated by any number of blanks and/or tabs (and
must be separated by at least one if they would otherwise merge with the previous
token). Leading and trailing whitespace is ignored.
#### Comments, help text, and type information
Lines with a `#` as the first non-whitespace character are comments. They are
ignored unless the first token after `#` is either `HELP` or `TYPE`. Those
lines are treated as follows: If the token is `HELP`, at least one more token
is expected, which is the metric name. All remaining tokens are considered the
docstring for that metric name. `HELP` lines may contain any sequence of UTF-8
characters (after the metric name), but the backslash and the line feed
characters have to be escaped as `\\` and `\n`, respectively. Only one `HELP`
line may exist for any given metric name.
If the token is `TYPE`, exactly two more tokens are expected. The first is the
metric name, and the second is either `counter`, `gauge`, `histogram`,
`summary`, or `untyped`, defining the type for the metric of that name. Only
one `TYPE` line may exist for a given metric name. The `TYPE` line for a
metric name must appear before the first sample is reported for that metric
name. If there is no `TYPE` line for a metric name, the type is set to
`untyped`.
The remaining lines describe samples (one per line) using the following syntax
([EBNF](https://en.wikipedia.org/wiki/Extended_Backus%E2%80%93Naur_form)):
```
metric_name [
"{" label_name "=" `"` label_value `"` { "," label_name "=" `"` label_value `"` } [ "," ] "}"
] value [ timestamp ]
```
In the sample syntax:
* `metric_name` and `label_name` carry the usual Prometheus expression language restrictions.
* `label_value` can be any sequence of UTF-8 characters, but the backslash (`\`), double-quote (`"`), and line feed (`\n`) characters have to be escaped as `\\`, `\"`, and `\n`, respectively.
* `value` is a float represented as required by Go's [`ParseFloat()`](https://golang.org/pkg/strconv/#ParseFloat) function. In addition to standard numerical values, `NaN`, `+Inf`, and `-Inf` are valid values representing not a number, positive infinity, and negative infinity, respectively.
* The `timestamp` is an `int64` (milliseconds since epoch, i.e. 1970-01-01 00:00:00 UTC, excluding leap seconds), represented as required by Go's [`ParseInt()`](https://golang.org/pkg/strconv/#ParseInt) function.
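For example, a single sample line with two labels, a value, and an explicit timestamp (taken from the full example further below) looks like this:

```
http_requests_total{method="post",code="200"} 1027 1395066363000
```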
#### Grouping and sorting
All lines for a given metric must be provided as one single group, with
the optional `HELP` and `TYPE` lines first (in no particular order). Beyond
that, reproducible sorting in repeated expositions is preferred but not
required, i.e. do not sort if the computational cost is prohibitive.
Each line must have a unique combination of a metric name and labels. Otherwise,
the ingestion behavior is undefined.
#### Histograms and summaries
The `histogram` and `summary` types are difficult to represent in the text
format. The following conventions apply:
* The sample sum for a summary or histogram named `x` is given as a separate sample named `x_sum`.
* The sample count for a summary or histogram named `x` is given as a separate sample named `x_count`.
* Each quantile of a summary named `x` is given as a separate sample line with the same name `x` and a label `{quantile="y"}`.
* Each bucket count of a histogram named `x` is given as a separate sample line with the name `x_bucket` and a label `{le="y"}` (where `y` is the upper bound of the bucket).
* A histogram _must_ have a bucket with `{le="+Inf"}`. Its value _must_ be identical to the value of `x_count`.
* The buckets of a histogram and the quantiles of a summary must appear in increasing numerical order of their label values (for the `le` or the `quantile` label, respectively).
### Text format example
Below is an example of a full-fledged Prometheus metric exposition, including
comments, `HELP` and `TYPE` expressions, a histogram, a summary, character
escaping examples, and more.
```
# HELP http_requests_total The total number of HTTP requests.
# TYPE http_requests_total counter
http_requests_total{method="post",code="200"} 1027 1395066363000
http_requests_total{method="post",code="400"} 3 1395066363000
# Escaping in label values:
msdos_file_access_time_seconds{path="C:\\DIR\\FILE.TXT",error="Cannot find file:\n\"FILE.TXT\""} 1.458255915e9
# Minimalistic line:
metric_without_timestamp_and_labels 12.47
# A weird metric from before the epoch:
something_weird{problem="division by zero"} +Inf -3982045
# A histogram, which has a pretty complex representation in the text format:
# HELP http_request_duration_seconds A histogram of the request duration.
# TYPE http_request_duration_seconds histogram
http_request_duration_seconds_bucket{le="0.05"} 24054
http_request_duration_seconds_bucket{le="0.1"} 33444
http_request_duration_seconds_bucket{le="0.2"} 100392
http_request_duration_seconds_bucket{le="0.5"} 129389
http_request_duration_seconds_bucket{le="1"} 133988
http_request_duration_seconds_bucket{le="+Inf"} 144320
http_request_duration_seconds_sum 53423
http_request_duration_seconds_count 144320
# Finally a summary, which has a complex representation, too:
# HELP rpc_duration_seconds A summary of the RPC duration in seconds.
# TYPE rpc_duration_seconds summary
rpc_duration_seconds{quantile="0.01"} 3102
rpc_duration_seconds{quantile="0.05"} 3272
rpc_duration_seconds{quantile="0.5"} 4773
rpc_duration_seconds{quantile="0.9"} 9001
rpc_duration_seconds{quantile="0.99"} 76656
rpc_duration_seconds_sum 1.7560473e+07
rpc_duration_seconds_count 2693
```
## OpenMetrics Text Format
[OpenMetrics](https://github.com/OpenObservability/OpenMetrics) is an effort to standardize metric wire formatting, built on the Prometheus text format. Prometheus can scrape targets that expose this format,
and it has also been available for federating metrics since at least v2.23.0.
### Exemplars (Experimental)
Utilizing the OpenMetrics format allows for the exposition and querying of [Exemplars](https://github.com/OpenObservability/OpenMetrics/blob/main/specification/OpenMetrics.md#exemplars).
Exemplars provide a point-in-time snapshot related to a metric set for an otherwise summarized MetricFamily. Additionally, they may have a Trace ID attached to them which, when used together
with a tracing system, can provide more detailed information related to the specific service.
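In the OpenMetrics text format, an exemplar is appended to a sample line after a `#`, as a label set followed by a value and an optional timestamp. A minimal sketch with made-up values, based on the example in the OpenMetrics specification:

```
# TYPE foo histogram
foo_bucket{le="0.01"} 0
foo_bucket{le="0.1"} 8 # {} 0.054
foo_bucket{le="1"} 11 # {trace_id="KOO5S4vxi0o"} 0.67
foo_bucket{le="10"} 17 # {trace_id="oHg5SJYRHA0"} 9.8 1520879607.789
foo_bucket{le="+Inf"} 17
foo_count 17
foo_sum 324789.3
```

Here the `trace_id` exemplar label is what allows linking an individual bucket observation back to a trace in a tracing system.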
To enable this experimental feature you must be running at least Prometheus v2.26.0 and add `--enable-feature=exemplar-storage` to your startup arguments.
## Protobuf format
Earlier versions of Prometheus supported an exposition format based on [Protocol Buffers](https://developers.google.com/protocol-buffers/) (aka Protobuf) in addition to the current text-based format. With Prometheus 2.0, the Protobuf format was marked as deprecated and Prometheus stopped ingesting samples from said exposition format.
However, new experimental features were later added to Prometheus for which the Protobuf format was considered the most viable option, so Prometheus once again accepts Protocol Buffers.
Here is a list of experimental features that, once enabled, will configure Prometheus to favor the Protobuf exposition format:
| feature flag | version that introduced it |
|--------------|----------------------------|
| native-histograms | 2.40.0 |
| created-timestamp-zero-ingestion | 2.50.0 |
## Historical versions
For details on historical format versions, see the legacy
[Client Data Exposition Format](https://docs.google.com/document/d/1ZjyKiKxZV83VI9ZKAXRGKaUKK2BIWCT7oiGBKDBpjEY/edit?usp=sharing)
document.
The current version of the original Protobuf format (with the recent extensions
for native histograms) is maintained in the [prometheus/client_model
repository](https://github.com/prometheus/client_model).
title: Prometheus Remote-Write 1.0
sort_rank: 5
---
# Prometheus Remote-Write Specification
- Version: 1.0
- Status: Published
- Date: April 2023
This document is intended to define and standardise the API, wire format, protocol and semantics of the existing, widely and organically adopted protocol, and not to propose anything new.
The remote write specification is intended to document the standard for how Prometheus and Prometheus remote-write-compatible agents send data to a Prometheus or Prometheus remote-write compatible receiver.
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC 2119](https://datatracker.ietf.org/doc/html/rfc2119).
> NOTE: This specification has a 2.0 version available, see [here](./remote_write_spec_2_0.md).
## Introduction
### Background
The remote write protocol is designed to make it possible to reliably propagate samples in real-time from a sender to a receiver, without loss.
The Remote-Write protocol is designed to be stateless; there is strictly no inter-message communication. As such the protocol is not considered "streaming". To achieve a streaming effect multiple messages should be sent over the same connection using e.g. HTTP/1.1 or HTTP/2. "Fancy" technologies such as gRPC were considered, but at the time were not widely adopted, and it was challenging to expose gRPC services to the internet behind load balancers such as an AWS EC2 ELB.
The remote write protocol contains opportunities for batching, e.g. sending multiple samples for different series in a single request. It is not expected that multiple samples for the same series will be commonly sent in the same request, although there is support for this in the protocol.
The remote write protocol is not intended for use by applications to push metrics to Prometheus remote-write-compatible receivers. It is intended that a Prometheus remote-write-compatible sender scrapes instrumented applications or exporters and sends remote write messages to a server.
A test suite can be found at https://github.com/prometheus/compliance/tree/main/remotewrite/sender.
### Glossary
For the purposes of this document the following definitions MUST be followed:
- a "Sender" is something that sends Prometheus Remote Write data.
- a "Receiver" is something that receives Prometheus Remote Write data.
- a "Sample" is a pair of (timestamp, value).
- a "Label" is a pair of (key, value).
- a "Series" is a list of samples, identified by a unique set of labels.
## Definitions
### Protocol
The Remote Write Protocol MUST consist of RPCs with the following signature:
```
func Send(WriteRequest)
message WriteRequest {
repeated TimeSeries timeseries = 1;
// Cortex uses this field to determine the source of the write request.
// We reserve it to avoid any compatibility issues.
reserved 2;
// Prometheus uses this field to send metadata, but this is
// omitted from v1 of the spec as it is experimental.
reserved 3;
}
message TimeSeries {
repeated Label labels = 1;
repeated Sample samples = 2;
}
message Label {
string name = 1;
string value = 2;
}
message Sample {
double value = 1;
int64 timestamp = 2;
}
```
Remote write Senders MUST encode the Write Request in the body of a HTTP POST request and send it to the Receivers via HTTP at a provided URL path. The Receiver MAY specify any HTTP URL path to receive metrics.
Timestamps MUST be int64 counted as milliseconds since the Unix epoch. Values MUST be float64.
The following headers MUST be sent with the HTTP request:
- `Content-Encoding: snappy`
- `Content-Type: application/x-protobuf`
- `User-Agent: <name & version of the sender>`
- `X-Prometheus-Remote-Write-Version: 0.1.0`
Clients MAY allow users to send custom HTTP headers; they MUST NOT allow users to configure them in such a way as to send reserved headers. For more info see https://github.com/prometheus/prometheus/pull/8416.
The remote write request in the body of the HTTP POST MUST be compressed with [Google’s Snappy](https://github.com/google/snappy). The block format MUST be used - the framed format MUST NOT be used.
The remote write request MUST be encoded using Google Protobuf 3, and MUST use the schema defined above. Note [the Prometheus implementation](https://github.com/prometheus/prometheus/blob/v2.24.0/prompb/remote.proto) uses [gogoproto optimisations](https://github.com/gogo/protobuf) - for receivers written in languages other than Golang the gogoproto types MAY be substituted for line-level equivalents.
The response body from the remote write receiver SHOULD be empty; clients MUST ignore the response body. The response body is RESERVED for future use.
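As a non-normative illustration of the requirements above, here is a minimal Go sketch of a sender. It assumes the `prompb` types from the Prometheus repository and the `github.com/golang/snappy` package; the receiver URL (including the `/api/v1/write` path) is a placeholder that a real receiver would define.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"

	"github.com/gogo/protobuf/proto"
	"github.com/golang/snappy"
	"github.com/prometheus/prometheus/prompb"
)

func main() {
	// One series with a single sample; labels are sorted and include __name__.
	wr := &prompb.WriteRequest{
		Timeseries: []prompb.TimeSeries{{
			Labels: []prompb.Label{
				{Name: "__name__", Value: "http_requests_total"},
				{Name: "job", Value: "example"},
			},
			Samples: []prompb.Sample{{Value: 1027, Timestamp: 1395066363000}},
		}},
	}

	// Encode with Protobuf and compress with Snappy's block format.
	raw, err := proto.Marshal(wr)
	if err != nil {
		panic(err)
	}
	body := snappy.Encode(nil, raw)

	req, err := http.NewRequest(http.MethodPost, "http://localhost:9090/api/v1/write", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Encoding", "snappy")
	req.Header.Set("Content-Type", "application/x-protobuf")
	req.Header.Set("User-Agent", "example-sender/0.0.1")
	req.Header.Set("X-Prometheus-Remote-Write-Version", "0.1.0")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// The response body is reserved for future use and is ignored here.
	fmt.Println("status:", resp.StatusCode)
}
```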
### Backward and forward compatibility
The protocol follows [semantic versioning 2.0](https://semver.org/): any 1.x compatible receivers MUST be able to read any 1.x compatible sender and so on. Breaking/backwards incompatible changes will result in a 2.x version of the spec.
The proto format itself is forward / backward compatible, in some respects:
- Removing fields from the proto will mean a major version bump.
- Adding (optional) fields will be a minor version bump.
Negotiation:
- Senders MUST send the version number in a header.
- Receivers MAY return the highest version number they support in a response header ("X-Prometheus-Remote-Write-Version").
- Senders who wish to send in a format >1.x MUST start by sending an empty 1.x, and see if the response says the receiver supports something else. The Sender MAY use any supported version. If there is no version header in the response, senders MUST assume 1.x compatibility only.
### Labels
The complete set of labels MUST be sent with each sample. What's more, the label set associated with samples:
- SHOULD contain a `__name__` label.
- MUST NOT contain repeated label names.
- MUST have label names sorted in lexicographical order.
- MUST NOT contain any empty label names or values.
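For illustration only, here is a label set that satisfies these rules, written in the usual Prometheus selector notation (on the wire it is an ordered list of `Label` messages):

```
{__name__="http_requests_total", code="200", instance="10.0.0.1:9100", job="api-server", method="post"}
```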
Senders MUST only send valid metric names, label names, and label values:
- Metric names MUST adhere to the regex `[a-zA-Z_:]([a-zA-Z0-9_:])*`.
- Label names MUST adhere to the regex `[a-zA-Z_]([a-zA-Z0-9_])*`.
- Label values MAY be any sequence of UTF-8 characters.
Receivers MAY impose limits on the number and length of labels, but this will be receiver-specific and is out of scope for this document.
Label names beginning with "__" are RESERVED for system usage and SHOULD NOT be used, see [Prometheus Data Model](https://prometheus.io/docs/concepts/data_model/).
Remote write Receivers MAY ingest valid samples within a write request that otherwise contains invalid samples. Receivers MUST return a HTTP 400 status code ("Bad Request") for write requests that contain any invalid samples. Receivers SHOULD provide a human readable error message in the response body. Senders MUST NOT try and interpret the error message, and SHOULD log it as is.
### Ordering
Prometheus Remote Write compatible senders MUST send samples for any given series in timestamp order. Prometheus Remote Write compatible Senders MAY send multiple requests for different series in parallel.
### Retries & Backoff
Prometheus Remote Write compatible senders MUST retry write requests on HTTP 5xx responses and MUST use a backoff algorithm to prevent overwhelming the server. They MUST NOT retry write requests on HTTP 2xx and 4xx responses other than 429. They MAY retry on HTTP 429 responses, which could result in senders "falling behind" if the server cannot keep up. This is done to ensure data is not lost when there are server side errors, and progress is made when there are client side errors.
Prometheus Remote Write compatible receivers MUST respond with a HTTP 2xx status code when the write is successful. They MUST respond with HTTP status code 5xx when the write fails and SHOULD be retried. They MUST respond with HTTP status code 4xx when the request is invalid, will never be able to succeed and should not be retried.
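A minimal, non-normative Go sketch of this retry policy follows; the backoff constants and the decision to retry on 429 are illustrative choices, not requirements of this specification.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"time"
)

// sendWithRetries retries on 5xx (and, in this sketch, on 429) with capped
// exponential backoff, and gives up immediately on any other 4xx response.
func sendWithRetries(ctx context.Context, client *http.Client, newReq func() (*http.Request, error)) error {
	backoff := 100 * time.Millisecond
	const maxBackoff = 30 * time.Second
	for {
		req, err := newReq()
		if err != nil {
			return err
		}
		resp, err := client.Do(req)
		if err == nil {
			io.Copy(io.Discard, resp.Body) // the response body is ignored
			resp.Body.Close()
			switch {
			case resp.StatusCode/100 == 2:
				return nil // success
			case resp.StatusCode/100 == 5, resp.StatusCode == http.StatusTooManyRequests:
				// Retryable: fall through to the backoff below.
			default:
				return fmt.Errorf("remote write: non-retryable status %d", resp.StatusCode)
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(backoff):
		}
		if backoff *= 2; backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}

func main() {
	newReq := func() (*http.Request, error) {
		// A real sender would attach a fresh snappy-compressed WriteRequest body here.
		return http.NewRequest(http.MethodPost, "http://localhost:9090/api/v1/write", nil)
	}
	if err := sendWithRetries(context.Background(), http.DefaultClient, newReq); err != nil {
		fmt.Println("send failed:", err)
	}
}
```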
### Stale Markers
Prometheus remote write compatible senders MUST send stale markers when a time series will no longer be appended to.
Stale markers MUST be signalled by the special NaN value 0x7ff0000000000002. This value MUST NOT be used otherwise.
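A short, non-normative Go sketch of producing and detecting this value; note that ordinary float comparison cannot be used because the marker is a NaN, so the bit pattern must be compared instead.

```go
package main

import (
	"fmt"
	"math"
)

// staleNaN is the special NaN bit pattern that signals staleness.
const staleNaN uint64 = 0x7ff0000000000002

// staleMarker returns the float64 value to send as a stale marker.
func staleMarker() float64 { return math.Float64frombits(staleNaN) }

// isStaleMarker reports whether v carries the stale-marker bit pattern.
func isStaleMarker(v float64) bool { return math.Float64bits(v) == staleNaN }

func main() {
	m := staleMarker()
	fmt.Println(math.IsNaN(m), isStaleMarker(m)) // true true
}
```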
Typically the sender can detect when a time series will no longer be appended to using the following techniques:
1. Detecting, using service discovery, that the target exposing the series has gone away
1. Noticing the target is no longer exposing the time series between successive scrapes
1. Failing to scrape the target that originally exposed a time series
1. Tracking configuration and evaluation for recording and alerting rules
## Out of Scope
This document does not intend to explain all the features required for a fully Prometheus-compatible monitoring system. In particular, the following areas are out of scope for the first version of the spec:
**The "up" metric** The definition and semantics of the "up" metric are beyond the scope of the remote write protocol and should be documented separately.
**HTTP Path** The path for HTTP handler can be anything - and MUST be provided by the sender. Generally we expect the whole URL to be specified in config.
**Persistence** It is recommended that Prometheus Remote Write compatible senders should persistently buffer sample data in the event of outages in the receiver.
**Authentication & Encryption** as remote write uses HTTP, we consider authentication & encryption to be a transport-layer problem. Senders and receivers should support all the usual suspects (Basic auth, TLS etc) and are free to add potentially custom authentication options. Support for custom authentication in the Prometheus remote write sender and eventual agent should not be assumed, but we will endeavour to support common and widely used auth protocols, where feasible.
**Remote Read** this is a separate interface that has already seen some iteration, and is less widely used.
**Sharding** the current sharding scheme in Prometheus for remote write parallelisation is very much an implementation detail, and isn’t part of the spec. When senders do implement parallelisation they MUST preserve per-series sample ordering.
**Backfill** The specification does not place a limit on how old series can be pushed, however server/implementation specific constraints may exist.
**Limits** Limits on the number and length of labels, batch sizes etc are beyond the scope of this document, however it is expected that implementation will impose reasonable limits.
**Push-based Prometheus** Applications pushing metrics to Prometheus Remote Write compatible receivers was not a design goal of this system, and should be explored in a separate doc.
**Labels** Every series MAY include a "job" and/or "instance" label, as these are typically added by service discovery in the Sender. These are not mandatory.
## Future Plans
This section contains speculative plans that are not considered part of protocol specification, but are mentioned here for completeness.
**Transactionality** Prometheus aims at being "transactional" - i.e. to never expose a partially scraped target to a query. We intend to do the same with remote write - for instance, in the future we would like to "align" remote write with scrapes, perhaps such that all the samples, metadata and exemplars for a single scrape are sent in a single remote write request. This is yet to be designed.
**Metadata and Exemplars** In line with the above, we also send metadata (type information, help text) and exemplars along with the scraped samples. We plan to package this up in a single remote write request - future versions of the spec may insist on this. Prometheus currently has experimental support for sending metadata and exemplars.
**Optimizations** We would like to investigate various optimizations to reduce message size by eliminating repetition of label names and values.
## Related
### Compatible Senders and Receivers
The spec is intended to describe how the following components interact (as of April 2023):
- [Prometheus](https://github.com/prometheus/prometheus/tree/master/storage/remote) (as both a "sender" and a "receiver")
- [Avalanche](https://github.com/prometheus-community/avalanche) (as a "sender") - a load-testing tool for Prometheus metrics.
- [Cortex](https://github.com/cortexproject/cortex/blob/master/pkg/util/push/push.go#L20) (as a "receiver")
- [Elastic Agent](https://docs.elastic.co/integrations/prometheus#prometheus-server-remote-write) (as a "receiver")
- [Grafana Agent](https://github.com/grafana/agent) (as both a "sender" and a "receiver")
- [GreptimeDB](https://github.com/greptimeTeam/greptimedb) (as a ["receiver"](https://docs.greptime.com/user-guide/ingest-data/for-observerbility/prometheus))
- InfluxData’s Telegraf agent. ([as a sender](https://github.com/influxdata/telegraf/tree/master/plugins/serializers/prometheusremotewrite), and [as a receiver](https://github.com/influxdata/telegraf/pull/8967))
- [M3](https://m3db.io/docs/integrations/prometheus/#prometheus-configuration) (as a "receiver")
- [Mimir](https://github.com/grafana/mimir) (as a "receiver")
- [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector-releases/) (as a ["sender"](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/prometheusremotewriteexporter#readme) and eventually as a "receiver")
- [Thanos](https://thanos.io/tip/components/receive.md/) (as a "receiver")
- Vector (as a ["sender"](https://vector.dev/docs/reference/configuration/sinks/prometheus_remote_write/) and a ["receiver"](https://vector.dev/docs/reference/configuration/sources/prometheus_remote_write/))
- [VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics) (as a ["receiver"](https://docs.victoriametrics.com/#prometheus-setup))
### FAQ
**Why did you not use gRPC?**
Funnily enough we initially used gRPC, but switched to Protos atop HTTP as in 2016 it was hard to get them past ELBs: https://github.com/prometheus/prometheus/issues/1982
**Why not streaming protobuf messages?**
If you use persistent HTTP/1.1 connections, they are pretty close to streaming… Of course headers have to be re-sent, but yes that is less expensive than a new TCP set up.
**Why do we send samples in order?**
The in-order constraint comes from the encoding we use for time series data in Prometheus, the implementation of which is append only. It is possible to remove this constraint, for instance by buffering samples and reordering them before encoding. We can investigate this in future versions of the protocol.
**How can we parallelise requests with the in-order constraint?**
Samples must be in-order _for a given series_. Remote write requests can be sent in parallel as long as they are for different series. In Prometheus, we shard the samples by their labels into separate queues, and then writes happen sequentially in each queue. This guarantees samples for the same series are delivered in order, but samples for different series are sent in parallel - and potentially "out of order" between different series.
We believe this is necessary as, even if the receiver could support out-of-order samples, we can't have agents sending out of order as they would never be able to send to Prometheus, Cortex and Thanos. We’re doing this to ensure the integrity of the ecosystem and to prevent confusing/forking the community into "prometheus-agents-that-can-write-to-prometheus" and those that can’t. | prometheus | title Prometheus Remote Write 1 0 sort rank 5 Prometheus Remote Write Specification Version 1 0 Status Published Date April 2023 This document is intended to define and standardise the API wire format protocol and semantics of the existing widely and organically adopted protocol and not to propose anything new The remote write specification is intended to document the standard for how Prometheus and Prometheus remote write compatible agents send data to a Prometheus or Prometheus remote write compatible receiver The key words MUST MUST NOT REQUIRED SHALL SHALL NOT SHOULD SHOULD NOT RECOMMENDED MAY and OPTIONAL in this document are to be interpreted as described in RFC 2119 https datatracker ietf org doc html rfc2119 NOTE This specification has a 2 0 version available see here remote write spec 2 0 md Introduction Background The remote write protocol is designed to make it possible to reliably propagate samples in real time from a sender to a receiver without loss The Remote Write protocol is designed to be stateless there is strictly no inter message communication As such the protocol is not considered streaming To achieve a streaming effect multiple messages should be sent over the same connection using e g HTTP 1 1 or HTTP 2 Fancy technologies such as gRPC were considered but at the time were not widely adopted and it was challenging to expose gRPC services to the internet behind load balancers such as an AWS EC2 ELB The remote write protocol contains opportunities for batching e g sending multiple samples for different series in a single request It is not expected that multiple samples for the same series will be commonly sent in the same request although there is support for this in the protocol The remote write protocol is not intended for use by applications to push metrics to Prometheus remote write compatible receivers It is intended that a Prometheus remote write compatible sender scrapes instrumented applications or exporters and sends remote write messages to a server A test suite can be found at https github com prometheus compliance tree main remotewrite sender Glossary For the purposes of this document the following definitions MUST be followed a Sender is something that sends Prometheus Remote Write data a Receiver is something that receives Prometheus Remote Write data a Sample is a pair of timestamp value a Label is a pair of key value a Series is a list of samples identified by a unique set of labels Definitions Protocol The Remote Write Protocol MUST consist of RPCs with the following signature func Send WriteRequest message WriteRequest repeated TimeSeries timeseries 1 Cortex uses this field to determine the source of the write request We reserve it to avoid any compatibility issues reserved 2 Prometheus uses this field to send metadata but this is omitted from v1 of the spec as it is experimental reserved 3 message TimeSeries repeated Label labels 1 repeated Sample samples 2 message Label string name 1 string value 2 message Sample double value 1 int64 timestamp 2 Remote write Senders MUST encode the Write Request in the body of a HTTP POST request and send it to the Receivers 
via HTTP at a provided URL path The Receiver MAY specify any HTTP URL path to receive metrics Timestamps MUST be int64 counted as milliseconds since the Unix epoch Values MUST be float64 The following headers MUST be sent with the HTTP request Content Encoding snappy Content Type application x protobuf User Agent name version of the sender X Prometheus Remote Write Version 0 1 0 Clients MAY allow users to send custom HTTP headers they MUST NOT allow users to configure them in such a way as to send reserved headers For more info see https github com prometheus prometheus pull 8416 The remote write request in the body of the HTTP POST MUST be compressed with Google s Snappy https github com google snappy The block format MUST be used the framed format MUST NOT be used The remote write request MUST be encoded using Google Protobuf 3 and MUST use the schema defined above Note the Prometheus implementation https github com prometheus prometheus blob v2 24 0 prompb remote proto uses gogoproto optimisations https github com gogo protobuf for receivers written in languages other than Golang the gogoproto types MAY be substituted for line level equivalents The response body from the remote write receiver SHOULD be empty clients MUST ignore the response body The response body is RESERVED for future use Backward and forward compatibility The protocol follows semantic versioning 2 0 https semver org any 1 x compatible receivers MUST be able to read any 1 x compatible sender and so on Breaking backwards incompatible changes will result in a 2 x version of the spec The proto format itself is forward backward compatible in some respects Removing fields from the proto will mean a major version bump Adding optional fields will be a minor version bump Negotiation Senders MUST send the version number in a headers Receivers MAY return the highest version number they support in a response header X Prometheus Remote Write Version Senders who wish to send in a format 1 x MUST start by sending an empty 1 x and see if the response says the receiver supports something else The Sender MAY use any supported version If there is no version header in the response senders MUST assume 1 x compatibility only Labels The complete set of labels MUST be sent with each sample Whatsmore the label set associated with samples SHOULD contain a name label MUST NOT contain repeated label names MUST have label names sorted in lexicographical order MUST NOT contain any empty label names or values Senders MUST only send valid metric names label names and label values Metric names MUST adhere to the regex a zA Z a zA Z0 9 Label names MUST adhere to the regex a zA Z a zA Z0 9 Label values MAY be any sequence of UTF 8 characters Receivers MAY impose limits on the number and length of labels but this will be receiver specific and is out of scope for this document Label names beginning with are RESERVED for system usage and SHOULD NOT be used see Prometheus Data Model https prometheus io docs concepts data model Remote write Receivers MAY ingest valid samples within a write request that otherwise contains invalid samples Receivers MUST return a HTTP 400 status code Bad Request for write requests that contain any invalid samples Receivers SHOULD provide a human readable error message in the response body Senders MUST NOT try and interpret the error message and SHOULD log it as is Ordering Prometheus Remote Write compatible senders MUST send samples for any given series in timestamp order Prometheus Remote Write compatible Senders MAY send 
---
title: "Prometheus Remote-Write 2.0 [EXPERIMENTAL]"
sort_rank: 4
---
# Prometheus Remote-Write Specification
* Version: 2.0-rc.3
* Status: **Experimental**
* Date: May 2024
The Remote-Write specification, in general, is intended to document the standard for how Prometheus and Prometheus Remote-Write compatible senders send data to Prometheus or Prometheus Remote-Write compatible receivers.
This document is intended to define a second version of the [Prometheus Remote-Write](./remote_write_spec.md) API with minor changes to protocol and semantics. This second version adds a new Protobuf Message with new features enabling more use cases and wider adoption on top of performance and cost savings. The second version also deprecates the previous Protobuf Message from the [1.0 Remote-Write specification](/docs/specs/remote_write_spec/#protocol) and adds mandatory [`X-Prometheus-Remote-Write-*-Written` HTTP response headers](#required-written-response-headers) for reliability purposes. Finally, this spec outlines how to implement backwards-compatible senders and receivers (even under a single endpoint) using existing basic content negotiation request headers. More advanced, automatic content negotiation mechanisms might come in a future minor version if needed. For the rationales behind the 2.0 specification, see [the formal proposal](https://github.com/prometheus/proposals/pull/35).
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC 2119](https://datatracker.ietf.org/doc/html/rfc2119).
> NOTE: This is a release candidate for Remote-Write 2.0 specification. This means that this specification is currently in an experimental state--no major changes are expected, but we reserve the right to break the compatibility if it's necessary, based on the early adopters' feedback. The potential feedback, questions and suggestions should be added as comments to the [PR with the open proposal](https://github.com/prometheus/proposals/pull/35).
## Introduction
### Background
The Remote-Write protocol is designed to make it possible to reliably propagate samples in real-time from a sender to a receiver, without loss.
<!---
For the detailed rationales behind each 2.0 Remote-Write decision, see: https://github.com/prometheus/proposals/blob/alexg/remote-write-20-proposal/proposals/2024-04-09_remote-write-20.md
-->
The Remote-Write protocol is designed to be stateless; there is strictly no inter-message communication. As such the protocol is not considered "streaming". To achieve a streaming effect multiple messages should be sent over the same connection using e.g. HTTP/1.1 or HTTP/2. "Fancy" technologies such as gRPC were considered, but at the time were not widely adopted, and it was challenging to expose gRPC services to the internet behind load balancers such as an AWS EC2 ELB.
The Remote-Write protocol contains opportunities for batching, e.g. sending multiple samples for different series in a single request. It is not expected that multiple samples for the same series will be commonly sent in the same request, although there is support for this in the Protobuf Message.
A test suite can be found at https://github.com/prometheus/compliance/tree/main/remote_write_sender. The compliance tests for remote write 2.0 compatibility are still [in progress](https://github.com/prometheus/compliance/issues/101).
### Glossary
In this document, the following definitions are followed:
* `Remote-Write` is the name of this Prometheus protocol.
* a `Protocol` is a communication specification that enables the client and server to transfer metrics.
* a `Protobuf Message` (or Proto Message) refers to the [content type](https://www.rfc-editor.org/rfc/rfc9110.html#name-content-type) definition of the data structure for this Protocol. Since the specification uses [Google Protocol Buffers ("protobuf")](https://protobuf.dev/) exclusively, the schema is defined in a ["proto" file](https://protobuf.dev/programming-guides/proto3/) and represented by a single Protobuf ["message"](https://protobuf.dev/programming-guides/proto3/#simple).
* a `Wire Format` is the format of the data as it travels on the wire (i.e. in a network). In the case of Remote-Write, this is always the compressed binary protobuf format.
* a `Sender` is something that sends Remote-Write data.
* a `Receiver` is something that receives (writes) Remote-Write data. The meaning of `Written` is up to the Receiver e.g. usually it means storing received data in a database, but also just validating, splitting or enhancing it.
* `Written` refers to data the `Receiver` has received and is accepting. Whether or not it has ingested this data to persistent storage, written it to a WAL, etc. is up to the `Receiver`. The only distinction is that the `Receiver` has accepted this data rather than explicitly rejecting it with an error response.
* a `Sample` is a pair of (timestamp, value).
* a `Histogram` is a pair of (timestamp, [histogram value](https://github.com/prometheus/docs/blob/b9657b5f5b264b81add39f6db2f1df36faf03efe/content/docs/concepts/native_histograms.md)).
* a `Label` is a pair of (key, value).
* a `Series` is a list of samples, identified by a unique set of labels.
## Definitions
### Protocol
The Remote-Write Protocol MUST consist of RPCs with the request body serialized using Google Protocol Buffers and then compressed.
<!---
Rationales: https://github.com/prometheus/proposals/blob/alexg/remote-write-20-proposal/proposals/2024-04-09_remote-write-20.md#a-new-protobuf-message-identified-by-fully-qualified-name-old-one-deprecated
-->
The protobuf serialization MUST use either of the following Protobuf Messages:
* The `prometheus.WriteRequest` introduced in [the Remote-Write 1.0 specification](./remote_write_spec.md#protocol). As of 2.0, this message is deprecated. It SHOULD be used only for compatibility reasons. Senders and Receivers MAY NOT support the `prometheus.WriteRequest`.
* The `io.prometheus.write.v2.Request` introduced in this specification and defined [below](#protobuf-message). Senders and Receivers SHOULD use this message when possible. Senders and Receivers MUST support the `io.prometheus.write.v2.Request`.
The Protobuf Message MUST use the binary Wire Format. It MUST then be compressed with [Google’s Snappy](https://github.com/google/snappy). Snappy's [block format](https://github.com/google/snappy/blob/2c94e11145f0b7b184b831577c93e5a41c4c0346/format_description.txt) MUST be used -- [the framed format](https://github.com/google/snappy/blob/2c94e11145f0b7b184b831577c93e5a41c4c0346/framing_format.txt) MUST NOT be used.
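For illustration only (not part of the specification), the compression step might look as follows in Go, assuming the `github.com/golang/snappy` package; `snappy.Encode` produces the required block format, whereas the package's streaming writer would produce the forbidden framed format:

```
package main

import (
	"fmt"

	"github.com/golang/snappy"
)

func main() {
	// rawProto stands in for a serialized io.prometheus.write.v2.Request message.
	rawProto := []byte("<serialized protobuf bytes>")

	// snappy.Encode produces Snappy's *block* format, which is what this
	// specification requires for the request body.
	compressed := snappy.Encode(nil, rawProto)

	// Receivers reverse the step with snappy.Decode. The framed format
	// (snappy.NewBufferedWriter / snappy.NewReader) MUST NOT be used.
	decoded, err := snappy.Decode(nil, compressed)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d bytes compressed to %d bytes\n", len(decoded), len(compressed))
}
```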
Senders MUST send a serialized and compressed Protobuf Message in the body of an HTTP POST request and send it to the Receiver via HTTP at the provided URL path. Receivers MAY specify any HTTP URL path to receive metrics.
<!---
Rationales: https://github.com/prometheus/proposals/blob/alexg/remote-write-20-proposal/proposals/2024-04-09_remote-write-20.md#basic-content-negotiation-built-on-what-we-have
-->
Senders MUST send the following reserved headers with the HTTP request:
- `Content-Encoding`
- `Content-Type`
- `X-Prometheus-Remote-Write-Version`
- `User-Agent`
Senders MAY allow users to add custom HTTP headers; they MUST NOT allow users to configure them in such a way as to send reserved headers.
#### Content-Encoding
```
Content-Encoding: <compression>
```
<!---
Rationales: https://github.com/prometheus/proposals/blob/alexg/remote-write-20-proposal/proposals/2024-04-09_remote-write-20.md#no-new-compression-added--yet-
-->
Content encoding request header MUST follow [the RFC 9110](https://www.rfc-editor.org/rfc/rfc9110.html#name-content-encoding). Senders MUST use the `snappy` value. Receivers MUST support `snappy` compression. New, optional compression algorithms might come in 2.x or beyond.
#### Content-Type
```
Content-Type: application/x-protobuf
Content-Type: application/x-protobuf;proto=<fully qualified name>
```
Content type request header MUST follow [the RFC 9110](https://www.rfc-editor.org/rfc/rfc9110.html#name-content-type). Senders MUST use `application/x-protobuf` as the only media type. Senders MAY add `;proto=` parameter to the header's value to indicate the fully qualified name of the Protobuf Message that was used, from the two mentioned above. As a result, Senders MUST send any of the three supported header values:
For the deprecated message introduced in PRW 1.0, identified by `prometheus.WriteRequest`:
* `Content-Type: application/x-protobuf`
* `Content-Type: application/x-protobuf;proto=prometheus.WriteRequest`
For the message introduced in PRW 2.0, identified by `io.prometheus.write.v2.Request`:
* `Content-Type: application/x-protobuf;proto=io.prometheus.write.v2.Request`
When talking to 1.x Receivers, Senders SHOULD use `Content-Type: application/x-protobuf` for backward compatibility. Otherwise, Senders SHOULD use `Content-Type: application/x-protobuf;proto=io.prometheus.write.v2.Request`. More Protobuf Messages might come in 2.x or beyond.
Receivers MUST use the content type header to identify the Protobuf Message schema to use. Accidental wrong schema choices may result in non-deterministic behaviour (e.g. corruptions).
> NOTE: Thanks to reserved fields in [`io.prometheus.write.v2.Request`](#protobuf-message), a Receiver's accidental use of the wrong schema with `prometheus.WriteRequest` will result in an empty message. This is generally for convenience to avoid surprising errors, but don't rely on it -- future Protobuf Messages might not have this feature.
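As a non-normative illustration, a Receiver could map the negotiated `Content-Type` value to a Protobuf Message schema roughly as sketched below; the function name is made up and only the Go standard library is used:

```
package main

import (
	"fmt"
	"mime"
)

// schemaFromContentType returns the fully qualified Protobuf Message name
// implied by the Content-Type request header, following the three header
// values supported by this specification.
func schemaFromContentType(contentType string) (string, error) {
	mediaType, params, err := mime.ParseMediaType(contentType)
	if err != nil {
		return "", err
	}
	if mediaType != "application/x-protobuf" {
		return "", fmt.Errorf("unsupported media type %q", mediaType)
	}
	switch proto := params["proto"]; proto {
	case "", "prometheus.WriteRequest":
		// An absent proto parameter means the deprecated 1.0 message.
		return "prometheus.WriteRequest", nil
	case "io.prometheus.write.v2.Request":
		return proto, nil
	default:
		// Unknown schema; the Receiver should answer with 415.
		return "", fmt.Errorf("unsupported proto parameter %q", proto)
	}
}

func main() {
	fmt.Println(schemaFromContentType("application/x-protobuf;proto=io.prometheus.write.v2.Request"))
	fmt.Println(schemaFromContentType("application/x-protobuf"))
}
```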
#### X-Prometheus-Remote-Write-Version
```
X-Prometheus-Remote-Write-Version: <Remote-Write spec major and minor version>
```
When talking to 1.x Receivers, Senders MUST use `X-Prometheus-Remote-Write-Version: 0.1.0` for backward compatibility. Otherwise, Senders SHOULD use the newest Remote-Write version it is compatible with e.g. `X-Prometheus-Remote-Write-Version: 2.0.0`.
#### User-Agent
```
User-Agent: <name & version of the Sender>
```
Senders MUST include a user agent header that SHOULD follow [the RFC 9110 User-Agent header format](https://www.rfc-editor.org/rfc/rfc9110.html#name-user-agent).
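Putting the reserved headers together, a Sender's HTTP request might be assembled as in the following non-normative Go sketch; the endpoint URL and the `example-sender/0.0.1` User-Agent value are placeholders:

```
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// newRemoteWriteRequest wraps an already serialized and snappy-compressed
// io.prometheus.write.v2.Request body into an HTTP POST request carrying the
// reserved headers required by this specification.
func newRemoteWriteRequest(url string, compressedBody []byte) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(compressedBody))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Encoding", "snappy")
	req.Header.Set("Content-Type", "application/x-protobuf;proto=io.prometheus.write.v2.Request")
	req.Header.Set("X-Prometheus-Remote-Write-Version", "2.0.0")
	req.Header.Set("User-Agent", "example-sender/0.0.1")
	return req, nil
}

func main() {
	req, err := newRemoteWriteRequest("https://receiver.example/api/v1/write", []byte("<compressed body>"))
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL, req.Header)
}
```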
### Response
Receivers that have written all data successfully MUST return a [success 2xx HTTP status code](https://www.rfc-editor.org/rfc/rfc9110.html#name-successful-2xx). In such a successful case, the response body from the Receiver SHOULD be empty and the status code SHOULD be [204 HTTP No Content](https://www.rfc-editor.org/rfc/rfc9110.html#name-204-no-content); Senders MUST ignore the response body. The response body is RESERVED for future use.
Receivers MUST NOT return a 2xx HTTP status code if any of the pieces of sent data known to the Receiver (e.g. Samples, Histograms, Exemplars) were NOT written successfully (whether a [partial write](#partial-write) or a full write rejection). In such a case, the Receiver MUST provide a human-readable error message in the response body. The Receiver's error SHOULD contain information about the number of samples being rejected and for what reasons. Senders MUST NOT try to interpret the error message and SHOULD log it as is.
The following subsections specify Sender and Receiver semantics around headers and different write error cases.
#### Required `Written` Response Headers
<!---
Rationales: https://github.com/prometheus/prometheus/issues/14359
-->
Upon a successful content negotiation, Receivers process (write) the received batch of data. Once completed (with success or failure) for each important piece of data (currently Samples, Histograms and Exemplars) Receivers MUST send a dedicated HTTP `X-Prometheus-Remote-Write-*-Written` response header with the precise number of successfully written elements.
Each header value MUST be a single 64-bit integer. The header names MUST be as follows:
```
X-Prometheus-Remote-Write-Samples-Written <count of all successfully written Samples>
X-Prometheus-Remote-Write-Histograms-Written <count of all successfully written Histogram samples>
X-Prometheus-Remote-Write-Exemplars-Written <count of all successfully written Exemplars>
```
Upon receiving a 2xx or a 4xx status code, Senders CAN assume that any missing `X-Prometheus-Remote-Write-*-Written` response header means no element from this category (e.g. Sample) was written by the Receiver (count of `0`). Senders MUST NOT assume the same when using the deprecated `prometheus.WriteRequest` Protobuf Message due to the risk of hitting 1.0 Receiver without this feature.
Senders MAY use those headers to confirm which parts of data were successfully written by the Receiver. Common use cases:
* Better handling of the [Partial Write](#partial-write) failure situations: Senders MAY use those headers for more accurate client instrumentation and error handling.
* Detecting broken 1.0 Receiver implementations: Senders SHOULD assume a [415 HTTP Unsupported Media Type](https://www.rfc-editor.org/rfc/rfc9110.html#name-415-unsupported-media-type) status code when sending the data using an `io.prometheus.write.v2.Request` request and receiving a 2xx HTTP status code, but none of the `X-Prometheus-Remote-Write-*-Written` response headers from the Receiver. This is a common issue for 1.0 Receivers that do not check the `Content-Type` request header; accidental decoding of the `io.prometheus.write.v2.Request` payload with the `prometheus.WriteRequest` schema results in an empty result and no decoding errors.
* Detecting other broken implementations or issues: Senders MAY use those headers to detect broken Sender and Receiver implementations or other problems.
Senders MUST NOT assume what Remote Write specification version the Receiver implements from the remote write response headers.
More (optional) headers might come in the future, e.g. when more entities or fields are added and worth confirming.
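A minimal, non-normative sketch of the Receiver side is shown below: after attempting to write the decoded request (stubbed out here with placeholder counts), the handler reports the per-category counts via the `X-Prometheus-Remote-Write-*-Written` headers and answers 204 on full success:

```
package main

import (
	"fmt"
	"net/http"
	"strconv"
)

// writeHandler sketches the response side of a Receiver. Decoding and writing
// of the payload is omitted; the counts are placeholders for the number of
// elements the Receiver actually accepted.
func writeHandler(w http.ResponseWriter, r *http.Request) {
	samplesWritten, histogramsWritten, exemplarsWritten := int64(120), int64(3), int64(5)

	w.Header().Set("X-Prometheus-Remote-Write-Samples-Written", strconv.FormatInt(samplesWritten, 10))
	w.Header().Set("X-Prometheus-Remote-Write-Histograms-Written", strconv.FormatInt(histogramsWritten, 10))
	w.Header().Set("X-Prometheus-Remote-Write-Exemplars-Written", strconv.FormatInt(exemplarsWritten, 10))
	w.WriteHeader(http.StatusNoContent)
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/api/v1/write", writeHandler)
	fmt.Println("serve the mux with http.ListenAndServe, e.g. on :9090")
}
```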
#### Partial Write
<!---
Rationales: https://github.com/prometheus/proposals/blob/alexg/remote-write-20-proposal/proposals/2024-04-09_remote-write-20.md#partial-writes
-->
Senders SHOULD use Remote-Write to send samples for multiple series in a single request. As a result, Receivers MAY write valid samples within a write request that also contains some invalid or otherwise unwritten samples, which represents a partial write case. In such a case, the Receiver MUST return non-2xx status code following the [Invalid Samples](#invalid-samples) and [Retry on Partial Writes](#retries-on-partial-writes) sections.
#### Unsupported Request Content
Receivers MUST return [415 HTTP Unsupported Media Type](https://www.rfc-editor.org/rfc/rfc9110.html#name-415-unsupported-media-type) status code if they don't support a given content type or encoding provided by Senders.
Senders SHOULD expect [400 HTTP Bad Request](https://www.rfc-editor.org/rfc/rfc9110.html#name-400-bad-request) for the above reasons from 1.x Receivers, for backwards compatibility.
#### Invalid Samples
Receivers MAY NOT support certain metric types or samples (e.g. a Receiver might reject a sample without a metadata type specified or without a created timestamp, while another Receiver might accept such a sample). It's up to the Receiver which samples are invalid. Receivers MUST return a [400 HTTP Bad Request](https://www.rfc-editor.org/rfc/rfc9110.html#name-400-bad-request) status code for write requests that contain any invalid samples unless the [partial retriable write](#retries-on-partial-writes) occurs.
Senders MUST NOT retry on 4xx HTTP status codes (other than [429](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429)), which MUST be used by Receivers to indicate that the write operation will never be able to succeed and should not be retried. Senders MAY retry on the 415 HTTP status code with a different content type or encoding to see if the Receiver supports it.
### Retries & Backoff
Receivers MAY return a [429 HTTP Too Many Requests](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429) status code to indicate the overloaded server situation. Receivers MAY return [the Retry-After](https://www.rfc-editor.org/rfc/rfc9110.html#name-retry-after) header to indicate the time for the next write attempt. Receivers MAY return a 5xx HTTP status code to represent internal server errors.
Senders MAY retry on a 429 HTTP status code. Senders MUST retry write requests on 5xx HTTP. Senders MUST use a backoff algorithm to prevent overwhelming the server. Senders MAY handle [the Retry-After response header](https://www.rfc-editor.org/rfc/rfc9110.html#name-retry-after) to estimate the next retry time.
The difference between 429 and 5xx handling is due to the potential situation of a Sender “falling behind” when the Receiver cannot keep up with the request volume, or the Receiver choosing to rate limit the Sender to protect its availability. As a result, Senders have the option to NOT retry on 429, which allows progress to be made when there are Sender side errors (e.g. too much traffic), while the data is not lost when there are Receiver side errors (5xx).
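A non-normative Go sketch of one possible Sender retry policy follows: retry on 5xx and (optionally) 429, never on other 4xx, with exponential backoff that prefers a `Retry-After` value given in seconds; all constants are illustrative:

```
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// shouldRetry encodes the status-code rules above: always retry on 5xx,
// optionally on 429 (this sketch chooses to), never on other 4xx.
func shouldRetry(statusCode int) bool {
	return statusCode >= 500 || statusCode == http.StatusTooManyRequests
}

// nextBackoff doubles the previous delay up to a cap, but prefers the
// Retry-After header (interpreted as seconds) when the Receiver provides one.
func nextBackoff(prev time.Duration, resp *http.Response) time.Duration {
	if resp != nil {
		if secs, err := strconv.Atoi(resp.Header.Get("Retry-After")); err == nil && secs > 0 {
			return time.Duration(secs) * time.Second
		}
	}
	next := prev * 2
	if limit := 30 * time.Second; next > limit {
		next = limit
	}
	return next
}

func main() {
	fmt.Println(shouldRetry(503), shouldRetry(429), shouldRetry(400))
	fmt.Println(nextBackoff(time.Second, nil))
}
```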
#### Retries on Partial Writes
Receivers MAY return a 5xx HTTP or 429 HTTP status code on partial write or [partial invalid sample cases](#partial-write) when it expects Senders to retry the whole request. In that case, the Receiver MUST support idempotency as Senders MAY retry with the same request.
### Backward and Forward Compatibility
The protocol follows [semantic versioning 2.0](https://semver.org/): any 2.x compatible Receiver MUST be able to read any 2.x compatible Senders and vice versa. Breaking or backwards incompatible changes will result in a 3.x version of the spec.
The Protobuf Messages (in Wire Format) themselves are forward / backward compatible, in some respects:
* Removing fields from the Protobuf Message requires a major version bump.
* Adding (optional) fields can be done in a minor version bump.
In other words, this means that future minor versions of 2.x MAY add new optional fields to `io.prometheus.write.v2.Request`, new compressions, Protobuf Messages and negotiation mechanisms, as long as they are backwards compatible (e.g. optional to both Receiver and Sender).
#### 2.x vs 1.x Compatibility
The 2.x protocol is breaking compatibility with 1.x by introducing a new, mandatory `io.prometheus.write.v2.Request` Protobuf Message and deprecating the `prometheus.WriteRequest`.
2.x Senders MAY support 1.x Receivers by allowing users to configure what content type Senders should use. 2.x Senders also MAY automatically fall back to different content types, if the Receiver returns 415 HTTP status code.
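The automatic fallback could be as simple as the following non-normative sketch, where a 415 response to an `io.prometheus.write.v2.Request` payload triggers a downgrade to the deprecated content type (the payload must, of course, be re-encoded as `prometheus.WriteRequest` as well):

```
package main

import "fmt"

const (
	contentTypeV2 = "application/x-protobuf;proto=io.prometheus.write.v2.Request"
	contentTypeV1 = "application/x-protobuf" // deprecated prometheus.WriteRequest
)

// fallbackContentType returns the content type to use for the next attempt
// and whether a downgrade happened.
func fallbackContentType(statusCode int, current string) (string, bool) {
	if statusCode == 415 && current == contentTypeV2 {
		return contentTypeV1, true
	}
	return current, false
}

func main() {
	next, downgraded := fallbackContentType(415, contentTypeV2)
	fmt.Println(next, downgraded)
}
```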
## Protobuf Message
### `io.prometheus.write.v2.Request`
The `io.prometheus.write.v2.Request` references the new Protobuf Message that's meant to replace and deprecate the Remote-Write 1.0's `prometheus.WriteRequest` message.
<!---
TODO(bwplotka): Move link to the one on Prometheus main or even buf.
-->
The full schema and source of truth is in the Prometheus repository in [`prompb/io/prometheus/write/v2/types.proto`](https://github.com/prometheus/prometheus/blob/remote-write-2.0/prompb/io/prometheus/write/v2/types.proto#L32). The `gogo` dependency and options CAN be ignored ([will be removed eventually](https://github.com/prometheus/prometheus/issues/11908)). They are not part of the specification as they don't impact the serialized format.
The simplified version of the new `io.prometheus.write.v2.Request` is presented below.
```
// Request represents a request to write the given timeseries to a remote destination.
message Request {
// Since Request supersedes 1.0 spec's prometheus.WriteRequest, we reserve the top-down message
// for the deterministic interop between those two.
// Generally it's not needed, because Receivers must use the Content-Type header, but we want to
// be sympathetic to adopters with mistaken implementations and have deterministic error (empty
// message if you use the wrong proto schema).
reserved 1 to 3;
// symbols contains a de-duplicated array of string elements used for various
// items in a Request message, like labels and metadata items. For the sender's convenience
// around empty values for optional fields like unit_ref, symbols array MUST start with
// empty string.
//
// To decode each of the symbolized strings, referenced, by "ref(s)" suffix, you
// need to lookup the actual string by index from symbols array. The order of
// strings is up to the sender. The receiver should not assume any particular encoding.
repeated string symbols = 4;
// timeseries represents an array of distinct series with 0 or more samples.
repeated TimeSeries timeseries = 5;
}
// TimeSeries represents a single series.
message TimeSeries {
// labels_refs is a list of label name-value pair references, encoded
// as indices to the Request.symbols array. This list's length is always
// a multiple of two, and the underlying labels should be sorted lexicographically.
//
// Note that there might be multiple TimeSeries objects in the same
// Requests with the same labels e.g. for different exemplars, metadata
// or created timestamp.
repeated uint32 labels_refs = 1;
// Timeseries messages can either specify samples or (native) histogram samples
// (histogram field), but not both. For a typical sender (real-time metric
// streaming), in healthy cases, there will be only one sample or histogram.
//
// Samples and histograms are sorted by timestamp (older first).
repeated Sample samples = 2;
repeated Histogram histograms = 3;
// exemplars represents an optional set of exemplars attached to this series' samples.
repeated Exemplar exemplars = 4;
// metadata represents the metadata associated with the given series' samples.
Metadata metadata = 5;
// created_timestamp represents an optional created timestamp associated with
// this series' samples in ms format, typically for counter or histogram type
// metrics. Created timestamp represents the time when the counter started
// counting (sometimes referred to as start timestamp), which can increase
// the accuracy of query results.
//
// Note that some receivers might require this and in return fail to
// write such samples within the Request.
//
// For Go, see github.com/prometheus/prometheus/model/timestamp/timestamp.go
// for conversion from/to time.Time to Prometheus timestamp.
//
// Note that the "optional" keyword is omitted due to
// https://cloud.google.com/apis/design/design_patterns.md#optional_primitive_fields
// Zero value means value not set. If you need to use exactly zero value for
// the timestamp, use 1 millisecond before or after.
int64 created_timestamp = 6;
}
// Exemplar represents additional information attached to some series' samples.
message Exemplar {
// labels_refs is an optional list of label name-value pair references, encoded
// as indices to the Request.symbols array. This list's len is always
// a multiple of 2, and the underlying labels should be sorted lexicographically.
// If the exemplar references a trace it should use the `trace_id` label name, as a best practice.
repeated uint32 labels_refs = 1;
// value represents an exact example value. This can be useful when the exemplar
// is attached to a histogram, which only gives an estimated value through buckets.
double value = 2;
// timestamp represents the timestamp of the exemplar in ms.
// For Go, see github.com/prometheus/prometheus/model/timestamp/timestamp.go
// for conversion from/to time.Time to Prometheus timestamp.
int64 timestamp = 3;
}
// Sample represents series sample.
message Sample {
// value of the sample.
double value = 1;
// timestamp represents timestamp of the sample in ms.
int64 timestamp = 2;
}
// Metadata represents the metadata associated with the given series' samples.
message Metadata {
enum MetricType {
METRIC_TYPE_UNSPECIFIED = 0;
METRIC_TYPE_COUNTER = 1;
METRIC_TYPE_GAUGE = 2;
METRIC_TYPE_HISTOGRAM = 3;
METRIC_TYPE_GAUGEHISTOGRAM = 4;
METRIC_TYPE_SUMMARY = 5;
METRIC_TYPE_INFO = 6;
METRIC_TYPE_STATESET = 7;
}
MetricType type = 1;
// help_ref is a reference to the Request.symbols array representing help
// text for the metric. Help is optional, reference should point to an empty string in
// such a case.
uint32 help_ref = 3;
// unit_ref is a reference to the Request.symbols array representing a unit
// for the metric. Unit is optional, reference should point to an empty string in
// such a case.
uint32 unit_ref = 4;
}
// A native histogram, also known as a sparse histogram.
// See https://github.com/prometheus/prometheus/blob/remote-write-2.0/prompb/io/prometheus/write/v2/types.proto#L142
// for a full message that follows the native histogram spec for both sparse
// and exponential, as well as, custom bucketing.
message Histogram { ... }
```
All timestamps MUST be int64 counted as milliseconds since the Unix epoch. Sample's values MUST be float64.
For every `TimeSeries` message:
* `labels_refs` MUST be provided.
<!---
Rationales: https://github.com/prometheus/proposals/blob/alexg/remote-write-20-proposal/proposals/2024-04-09_remote-write-20.md#partial-writes#samples-vs-native-histogram-samples
-->
* At least one element in `samples` or in `histograms` MUST be provided. A `TimeSeries` MUST NOT include both `samples` and `histograms`. For series which (rarely) would mix float and histogram samples, a separate `TimeSeries` message MUST be used.
<!---
Rationales: https://github.com/prometheus/proposals/blob/alexg/remote-write-20-proposal/proposals/2024-04-09_remote-write-20.md#always-on-metadata
-->
* `metadata` sub-fields SHOULD be provided. Receivers MAY reject series with unspecified `Metadata.type`.
* Exemplars SHOULD be provided if they exist for a series.
* `created_timestamp` SHOULD be provided for metrics that follow counter semantics (e.g. counters and histograms). Receivers MAY reject those series without `created_timestamp` being set.
The following subsections define some schema elements in detail.
#### Symbols
<!---
Rationales: https://github.com/prometheus/proposals/blob/alexg/remote-write-20-proposal/proposals/2024-04-09_remote-write-20.md#partial-writes#string-interning
-->
The `io.prometheus.write.v2.Request` Protobuf Message is designed to [intern all strings](https://en.wikipedia.org/wiki/String_interning) for the proven additional compression and memory efficiency gains on top of the standard compressions.
The `symbols` table MUST be provided and it MUST contain deduplicated strings used in series, exemplar labels, and metadata strings. The first element of the `symbols` table MUST be an empty string, which is used to represent empty or unspecified values such as when `Metadata.unit_ref` or `Metadata.help_ref` are not provided. References MUST point to the existing index in the `symbols` string array.
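To make the interning scheme concrete, here is a non-normative, self-contained Go sketch (it uses local types rather than the generated protobuf code) that builds a symbols table with the mandatory empty string at index 0 and encodes one label set as references into it:

```
package main

import "fmt"

// symbolTable interns strings into a shared symbols array whose first entry
// MUST be the empty string, so that ref 0 can mean "unset".
type symbolTable struct {
	symbols []string
	index   map[string]uint32
}

func newSymbolTable() *symbolTable {
	return &symbolTable{symbols: []string{""}, index: map[string]uint32{"": 0}}
}

// ref returns the index of s in the symbols array, appending it if new.
func (t *symbolTable) ref(s string) uint32 {
	if r, ok := t.index[s]; ok {
		return r
	}
	r := uint32(len(t.symbols))
	t.symbols = append(t.symbols, s)
	t.index[s] = r
	return r
}

func main() {
	t := newSymbolTable()
	// labels_refs for {__name__="http_requests_total", job="api"}: name/value
	// pairs, sorted by label name, each encoded as an index into symbols.
	labelRefs := []uint32{
		t.ref("__name__"), t.ref("http_requests_total"),
		t.ref("job"), t.ref("api"),
	}
	fmt.Println("symbols:", t.symbols)
	fmt.Println("labels_refs:", labelRefs)
}
```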
#### Series Labels
<!---
Rationales: https://github.com/prometheus/proposals/blob/alexg/remote-write-20-proposal/proposals/2024-04-09_remote-write-20.md#labels-and-utf-8
-->
The complete set of labels MUST be sent with each `Sample` or `Histogram` sample. Additionally, the label set associated with samples:
* SHOULD contain a `__name__` label.
* MUST NOT contain repeated label names.
* MUST have label names sorted in lexicographical order.
* MUST NOT contain any empty label names or values.
Metric names, label names, and label values MUST be any sequence of UTF-8 characters.
Metric names SHOULD adhere to the regex `[a-zA-Z_:]([a-zA-Z0-9_:])*`.
Label names SHOULD adhere to the regex `[a-zA-Z_]([a-zA-Z0-9_])*`.
Names that do not adhere to the above might be harder to use for PromQL users (see [the UTF-8 proposal for more details](https://github.com/prometheus/proposals/blob/main/proposals/2023-08-21-utf8.md)).
Label names beginning with "__" are RESERVED for system usage and SHOULD NOT be used, see [Prometheus Data Model](https://prometheus.io/docs/concepts/data_model/).
Receivers also MAY impose limits on the number and length of labels, but this is receiver-specific and is out of the scope of this document.
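A non-normative validation sketch for the label rules above, operating on labels flattened into (name, value) pairs as they would appear after resolving `labels_refs`; the function and its error messages are illustrative only:

```
package main

import (
	"errors"
	"fmt"
)

// validateLabels checks the MUST rules from this section against labels given
// as a flat list of (name, value) pairs: names sorted lexicographically, no
// repeated names, no empty names or values.
func validateLabels(pairs []string) error {
	if len(pairs)%2 != 0 {
		return errors.New("labels must come in name/value pairs")
	}
	prev := ""
	for i := 0; i < len(pairs); i += 2 {
		name, value := pairs[i], pairs[i+1]
		if name == "" || value == "" {
			return errors.New("empty label name or value")
		}
		if i > 0 {
			if name == prev {
				return fmt.Errorf("repeated label name %q", name)
			}
			if name < prev {
				return fmt.Errorf("label names not sorted: %q after %q", name, prev)
			}
		}
		prev = name
	}
	return nil
}

func main() {
	fmt.Println(validateLabels([]string{"__name__", "http_requests_total", "job", "api"}))
	fmt.Println(validateLabels([]string{"job", "api", "__name__", "http_requests_total"}))
}
```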
#### Samples and Histogram Samples
<!---
Rationales: https://github.com/prometheus/proposals/blob/alexg/remote-write-20-proposal/proposals/2024-04-09_remote-write-20.md#partial-writes#native-histograms
-->
Senders MUST send `samples` (or `histograms`) for any given `TimeSeries` in timestamp order. Senders MAY send multiple requests for different series in parallel.
<!---
Rationales: https://github.com/prometheus/proposals/blob/alexg/remote-write-20-proposal/proposals/2024-04-09_remote-write-20.md#partial-writes#being-pull-vs-push-agnostic
-->
Senders SHOULD send stale markers when a time series will no longer be appended to.
Senders MUST send stale markers if the discontinuation of time series is possible to detect, for example:
* For series that were pulled (scraped), unless an explicit timestamp was used.
* For series that result from a recording rule evaluation.
Generally, not sending stale markers for series that are discontinued can lead to [non-trivial query time alignment issues](https://prometheus.io/docs/prometheus/latest/querying/basics/#staleness) on the Receiver.
Stale markers MUST be signalled by the special NaN value `0x7ff0000000000002`. This value MUST NOT be used otherwise.
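For illustration, producing and detecting this exact bit pattern in Go looks like the following sketch; comparing bit patterns is necessary because NaN values never compare equal:

```
package main

import (
	"fmt"
	"math"
)

// staleNaN is the special NaN bit pattern that signals a stale marker.
const staleNaN uint64 = 0x7ff0000000000002

func staleMarker() float64 { return math.Float64frombits(staleNaN) }

// isStaleMarker compares bit patterns, because NaN != NaN under ordinary
// float comparison; an ordinary NaN is not a stale marker.
func isStaleMarker(v float64) bool { return math.Float64bits(v) == staleNaN }

func main() {
	v := staleMarker()
	fmt.Println(math.IsNaN(v), isStaleMarker(v), isStaleMarker(math.NaN()))
}
```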
Typically, Senders can detect when a time series will no longer be appended using the following techniques:
1. Detecting, using service discovery, that the target exposing the series has gone away.
1. Noticing the target is no longer exposing the time series between successive scrapes.
1. Failing to scrape the target that originally exposed a time series.
1. Tracking configuration and evaluation for recording and alerting rules.
1. Tracking the discontinuation of metrics for non-scrape sources of metrics (e.g. k6 could emit a stale marker for each per-benchmark series when the benchmark has finished).
#### Metadata
Metadata SHOULD follow the official Prometheus guidelines for [Type](https://prometheus.io/docs/instrumenting/writing_exporters/#types) and [Help](https://prometheus.io/docs/instrumenting/writing_exporters/#help-strings).
Metadata MAY follow the official OpenMetrics guidelines for [Unit](https://github.com/OpenObservability/OpenMetrics/blob/main/specification/OpenMetrics.md#unit).
#### Exemplars
Each exemplar, if attached to a `TimeSeries`:
* MUST contain a value.
<!---
Rationales: https://github.com/prometheus/proposals/blob/alexg/remote-write-20-proposal/proposals/2024-04-09_remote-write-20.md#partial-writes#exemplars
-->
* MAY contain labels e.g. referencing trace or request ID. If the exemplar references a trace it SHOULD use the `trace_id` label name, as a best practice.
* MUST contain a timestamp. While exemplar timestamps are optional in Prometheus/Open Metrics exposition formats, the assumption is that a timestamp is assigned at scrape time in the same way a timestamp is assigned to the scrape sample. Receivers require exemplar timestamps to reliably handle (e.g. deduplicate) incoming exemplars.
## Out of Scope
The same as in [1.0](./remote_write_spec.md#out-of-scope).
## Future Plans
This section contains speculative plans that are not considered part of protocol specification yet but are mentioned here for completeness. Note that 2.0 specification completed [2 of 3 future plans in the 1.0](./remote_write_spec.md#future-plans).
* **Transactionality** There is still no transactionality defined for 2.0 specification, mostly because it makes a scalable Sender implementation difficult. Prometheus Sender aims at being "transactional" - i.e. to never expose a partially scraped target to a query. We intend to do the same with Remote-Write -- for instance, in the future we would like to "align" Remote-Write with scrapes, perhaps such that all the samples, metadata and exemplars for a single scrape are sent in a single Remote-Write request.
However, the Remote-Write 2.0 specification solves an important transactionality problem for [the classic histogram buckets](https://docs.google.com/document/d/1mpcSWH1B82q-BtJza-eJ8xMLlKt6EJ9oFGH325vtY1Q/edit#heading=h.ueg7q07wymku). This is done thanks to native histograms supporting custom bucketing, which is possible with the `io.prometheus.write.v2.Request` wire format. Senders might translate all classic histograms to native histograms this way, but it is out of the scope of this specification to mandate this. However, for this reason, Receivers MAY ignore certain metric types (e.g. classic histograms).
* **Alternative wire formats**. The OpenTelemetry community has shown the validity of Apache Arrow (and potentially other columnar formats) for over-wire data transfer with their OTLP protocol. We would like to do experiments to confirm the compatibility of a similar format with Prometheus’ data model and include benchmarks of any resource usage changes. We would potentially maintain both a protobuf and columnar format long term for compatibility reasons and use our content negotiation to add different Protobuf Messages for this purpose.
* **Global symbols**. Pre-defined string dictionary for interning. The protocol could pre-define a static dictionary of ref->symbol that includes strings that are considered common, e.g. “namespace”, “le”, “job”, “seconds”, “bytes”, etc. Senders could refer to these without the need to include them in the request’s symbols table. This dictionary could incrementally grow with minor version releases of this protocol.
## Related
### FAQ
**Why did you not use gRPC?**
Because the 1.0 protocol does not use gRPC, breaking it would increase friction in adoption. See the 1.0 [reason](./remote_write_spec.md#faq).
**Why not stream protobuf messages?**
If you use persistent HTTP/1.1 connections, they are pretty close to streaming. Of course, headers have to be re-sent, but that is less expensive than a new TCP set up.
**Why do we send samples in order?**
The in-order constraint comes from the encoding we use for time series data in Prometheus, the implementation of which is optimized for append-only workloads. However, this requirement is also shared across many other databases and vendors in the ecosystem. In fact, [Prometheus with the OOO feature enabled](https://youtu.be/qYsycK3nTSQ?t=1321) allows out-of-order writes, but with a performance penalty, thus it is reserved for rare events. To sum up, Receivers may support out-of-order writes, though it is not permitted by the specification. In future 2.x spec versions, we could extend the content type to negotiate out-of-order writes, if needed.
**How can we parallelise requests with the in-order constraint?**
Samples must be in-order _for a given series_. However, even if a Receiver does not support out-of-order write, the Remote-Write requests can be sent in parallel as long as they are for different series. Prometheus shards the samples by their labels into separate queues, and then writes happen sequentially in each queue. This guarantees samples for the same series are delivered in order, but samples for different series are sent in parallel - and potentially "out of order" between different series.
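A non-normative sketch of this sharding idea: hashing a series' label set to pick a queue ensures all samples of that series flow through the same queue (and thus stay in order), while separate queues can be flushed in parallel; the hash choice and queue count are illustrative:

```
package main

import (
	"fmt"
	"hash/fnv"
)

// queueFor picks a queue for a series by hashing its label set, so that all
// samples of one series always land in the same queue and are sent in order,
// while different queues can be flushed in parallel.
func queueFor(labels []string, numQueues int) int {
	h := fnv.New64a()
	for _, l := range labels {
		h.Write([]byte(l))
		h.Write([]byte{0xff}) // separator to avoid ambiguous concatenations
	}
	return int(h.Sum64() % uint64(numQueues))
}

func main() {
	series := []string{"__name__", "http_requests_total", "job", "api"}
	fmt.Println("queue:", queueFor(series, 8))
}
```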
**What are the differences between Remote-Write 2.0 and OpenTelemetry's OTLP protocol?**
[OpenTelemetry OTLP](https://github.com/open-telemetry/opentelemetry-proto/blob/a05597bff803d3d9405fcdd1e1fb1f42bed4eb7a/docs/specification.md) is a protocol for transporting telemetry data (such as metrics, logs, traces and profiles) between telemetry sources, intermediate nodes and telemetry backends. The recommended transport involves gRPC with protobuf, but HTTP with protobuf or JSON are also described. It was designed from scratch with the intent to support a variety of different observability signals, data types and extra information. For [metrics](https://github.com/open-telemetry/opentelemetry-proto/blob/main/opentelemetry/proto/metrics/v1/metrics.proto) that means additional non-identifying labels, flags, temporal aggregation types, resource or scoped metrics, schema URLs and more. OTLP also requires [the semantic convention](https://opentelemetry.io/docs/concepts/semantic-conventions/) to be used.
Remote-Write was designed for simplicity, efficiency and organic growth. The first version was officially released in 2023, when already [dozens of battle-tested adopters in the CNCF ecosystem](./remote_write_spec.md#compatible-senders-and-receivers) had been using this protocol for years. Remote-Write 2.0 iterates on the previous protocol by adding a few new elements (metadata, exemplars, created timestamp and native histograms) and string interning. Remote-Write 2.0 is always stateless, focuses only on metrics and is opinionated; as such it is scoped down to elements that the Prometheus community considers enough to have a robust metric solution. The intention is to ensure the Remote-Write is a stable protocol that is cheaper and simpler to adopt and use than the alternatives in the observability ecosystem.
using the following techniques 1 Detecting using service discovery that the target exposing the series has gone away 1 Noticing the target is no longer exposing the time series between successive scrapes 1 Failing to scrape the target that originally exposed a time series 1 Tracking configuration and evaluation for recording and alerting rules 1 Tracking discontinuation of metrics for non scrape source of metric e g in k6 when the benchmark has finished for series per benchmark it could emit a stale marker Metadata Metadata SHOULD follow the official Prometheus guidelines for Type https prometheus io docs instrumenting writing exporters types and Help https prometheus io docs instrumenting writing exporters help strings Metadata MAY follow the official OpenMetrics guidelines for Unit https github com OpenObservability OpenMetrics blob main specification OpenMetrics md unit Exemplars Each exemplar if attached to a TimeSeries MUST contain a value Rationales https github com prometheus proposals blob alexg remote write 20 proposal proposals 2024 04 09 remote write 20 md partial writes exemplars MAY contain labels e g referencing trace or request ID If the exemplar references a trace it SHOULD use the trace id label name as a best practice MUST contain a timestamp While exemplar timestamps are optional in Prometheus Open Metrics exposition formats the assumption is that a timestamp is assigned at scrape time in the same way a timestamp is assigned to the scrape sample Receivers require exemplar timestamps to reliably handle e g deduplicate incoming exemplars Out of Scope The same as in 1 0 remote write spec md out of scope Future Plans This section contains speculative plans that are not considered part of protocol specification yet but are mentioned here for completeness Note that 2 0 specification completed 2 of 3 future plans in the 1 0 remote write spec md future plans Transactionality There is still no transactionality defined for 2 0 specification mostly because it makes a scalable Sender implementation difficult Prometheus Sender aims at being transactional i e to never expose a partially scraped target to a query We intend to do the same with Remote Write for instance in the future we would like to align Remote Write with scrapes perhaps such that all the samples metadata and exemplars for a single scrape are sent in a single Remote Write request However Remote Write 2 0 specification solves an important transactionality problem for the classic histogram buckets https docs google com document d 1mpcSWH1B82q BtJza eJ8xMLlKt6EJ9oFGH325vtY1Q edit heading h ueg7q07wymku This is done thanks to the native histograms supporting custom bucket ing possible with the io prometheus write v2 Request wire format Senders might translate all classic histograms to native histograms this way but it s out of this specification to mandate this However for this reason Receivers MAY ignore certain metric types e g classic histograms Alternative wire formats The OpenTelemetry community has shown the validity of Apache Arrow and potentially other columnar formats for over wire data transfer with their OTLP protocol We would like to do experiments to confirm the compatibility of a similar format with Prometheus data model and include benchmarks of any resource usage changes We would potentially maintain both a protobuf and columnar format long term for compatibility reasons and use our content negotiation to add different Protobuf Messages for this purpose Global symbols Pre defined string dictionary for 
interning The protocol could pre define a static dictionary of ref symbol that includes strings that are considered common e g namespace le job seconds bytes etc Senders could refer to these without the need to include them in the request s symbols table This dictionary could incrementally grow with minor version releases of this protocol Related FAQ Why did you not use gRPC Because the 1 0 protocol does not use gRPC breaking it would increase friction in the adoption See 1 0 reason remote write spec md faq Why not stream protobuf messages If you use persistent HTTP 1 1 connections they are pretty close to streaming Of course headers have to be re sent but that is less expensive than a new TCP set up Why do we send samples in order The in order constraint comes from the encoding we use for time series data in Prometheus the implementation of which is optimized for append only workloads However this requirement is also shared across many other databases and vendors in the ecosystem In fact Prometheus with OOO feature enabled https youtu be qYsycK3nTSQ t 1321 allows out of order writes but with the performance penalty thus reserved for rare events To sum up Receivers may support out of order write though it is not permitted by the specification In the future e g 2 x spec versions we could extend content type to negotiate the out of order writes if needed How can we parallelise requests with the in order constraint Samples must be in order for a given series However even if a Receiver does not support out of order write the Remote Write requests can be sent in parallel as long as they are for different series Prometheus shards the samples by their labels into separate queues and then writes happen sequentially in each queue This guarantees samples for the same series are delivered in order but samples for different series are sent in parallel and potentially out of order between different series What are the differences between Remote Write 2 0 and OpenTelemetry s OTLP protocol OpenTelemetry OTLP https github com open telemetry opentelemetry proto blob a05597bff803d3d9405fcdd1e1fb1f42bed4eb7a docs specification md is a protocol for transporting of telemetry data such as metrics logs traces and profiles between telemetry sources intermediate nodes and telemetry backends The recommended transport involves gRPC with protobuf but HTTP with protobuf or JSON are also described It was designed from scratch with the intent to support a variety of different observability signals data types and extra information For metrics https github com open telemetry opentelemetry proto blob main opentelemetry proto metrics v1 metrics proto that means additional non identifying labels flags temporal aggregations types resource or scoped metrics schema URLs and more OTLP also requires the semantic convention https opentelemetry io docs concepts semantic conventions to be used Remote Write was designed for simplicity efficiency and organic growth The first version was officially released in 2023 when already dozens of battle tested adopters in the CNCF ecosystem remote write spec md compatible senders and receivers had been using this protocol for years Remote Write 2 0 iterates on the previous protocol by adding a few new elements metadata exemplars created timestamp and native histograms and string interning Remote Write 2 0 is always stateless focuses only on metrics and is opinionated as such it is scoped down to elements that the Prometheus community considers enough to have a robust metric solution The 
intention is to ensure the Remote Write is a stable protocol that is cheaper and simpler to adopt and use than the alternatives in the observability ecosystem |
---
title: Comparison to alternatives
sort_rank: 4
---
# Comparison to alternatives
## Prometheus vs. Graphite
### Scope
[Graphite](http://graphite.readthedocs.org/en/latest/) focuses on being a
passive time series database with a query language and graphing features. Any
other concerns are addressed by external components.
Prometheus is a full monitoring and trending system that includes built-in and
active scraping, storing, querying, graphing, and alerting based on time series
data. It has knowledge about what the world should look like (which endpoints
should exist, what time series patterns mean trouble, etc.), and actively tries
to find faults.
### Data model
Graphite stores numeric samples for named time series, much like Prometheus
does. However, Prometheus's metadata model is richer: while Graphite metric
names consist of dot-separated components which implicitly encode dimensions,
Prometheus encodes dimensions explicitly as key-value pairs, called labels, attached
to a metric name. This allows easy filtering, grouping, and matching by these
labels via the query language.
Further, especially when Graphite is used in combination with
[StatsD](https://github.com/etsy/statsd/), it is common to store only
aggregated data over all monitored instances, rather than preserving the
instance as a dimension and being able to drill down into individual
problematic instances.
For example, storing the number of HTTP requests to API servers with the
response code `500` and the method `POST` to the `/tracks` endpoint would
commonly be encoded like this in Graphite/StatsD:
```
stats.api-server.tracks.post.500 -> 93
```
In Prometheus the same data could be encoded like this (assuming three api-server instances):
```
api_server_http_requests_total{method="POST",handler="/tracks",status="500",instance="<sample1>"} -> 34
api_server_http_requests_total{method="POST",handler="/tracks",status="500",instance="<sample2>"} -> 28
api_server_http_requests_total{method="POST",handler="/tracks",status="500",instance="<sample3>"} -> 31
```
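With labels in place, the per-instance series above can be filtered and re-aggregated at query time. As an illustrative sketch (reusing the example metric name from above and an arbitrary 5-minute rate window), a PromQL query can collapse the instances back into a single request rate:
```
sum without (instance) (rate(api_server_http_requests_total{method="POST",handler="/tracks",status="500"}[5m]))
```
Swapping the `without (instance)` clause for a different grouping re-slices the same data along any other label dimension, without changing what was ingested.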
### Storage
Graphite stores time series data on local disk in the
[Whisper](http://graphite.readthedocs.org/en/latest/whisper.html) format, an
RRD-style database that expects samples to arrive at regular intervals. Every
time series is stored in a separate file, and new samples overwrite old ones
after a certain amount of time.
Prometheus also creates one local file per time series, but allows storing
samples at arbitrary intervals as scrapes or rule evaluations occur. Since new
samples are simply appended, old data may be kept arbitrarily long. Prometheus
also works well for many short-lived, frequently changing sets of time series.
### Summary
Prometheus offers a richer data model and query language, in addition to being
easier to run and integrate into your environment. If you want a clustered
solution that can hold historical data long term, Graphite may be a better
choice.
## Prometheus vs. InfluxDB
[InfluxDB](https://influxdata.com/) is an open-source time series database,
with a commercial option for scaling and clustering. The InfluxDB project was
released almost a year after Prometheus development began, so we were unable to
consider it as an alternative at the time. Still, there are significant
differences between Prometheus and InfluxDB, and both systems are geared
towards slightly different use cases.
### Scope
For a fair comparison, we must also consider
[Kapacitor](https://github.com/influxdata/kapacitor) together with InfluxDB, as
in combination they address the same problem space as Prometheus and the
Alertmanager.
The same scope differences as in the case of
[Graphite](#prometheus-vs-graphite) apply here for InfluxDB itself. In addition
InfluxDB offers continuous queries, which are equivalent to Prometheus
recording rules.
Kapacitor’s scope is a combination of Prometheus recording rules, alerting
rules, and the Alertmanager's notification functionality. Prometheus offers [a
more powerful query language for graphing and
alerting](https://www.robustperception.io/translating-between-monitoring-languages/).
The Prometheus Alertmanager additionally offers grouping, deduplication and
silencing functionality.
### Data model / storage
Like Prometheus, the InfluxDB data model has key-value pairs as labels, which
are called tags. In addition, InfluxDB has a second level of labels called
fields, which are more limited in use. InfluxDB supports timestamps with up to
nanosecond resolution, and float64, int64, bool, and string data types.
Prometheus, by contrast, supports the float64 data type with limited support for
strings, and millisecond resolution timestamps.
InfluxDB uses a variant of a [log-structured merge tree for storage with a write ahead log](https://docs.influxdata.com/influxdb/v1.7/concepts/storage_engine/),
sharded by time. This is much more suitable to event logging than Prometheus's
append-only file per time series approach.
[Logs and Metrics and Graphs, Oh My!](https://grafana.com/blog/2016/01/05/logs-and-metrics-and-graphs-oh-my/)
describes the differences between event logging and metrics recording.
### Architecture
Prometheus servers run independently of each other and only rely on their local
storage for their core functionality: scraping, rule processing, and alerting.
The open source version of InfluxDB is similar.
The commercial InfluxDB offering is, by design, a distributed storage cluster
with storage and queries being handled by many nodes at once.
This means that the commercial InfluxDB will be easier to scale horizontally,
but it also means that you have to manage the complexity of a distributed
storage system from the beginning. Prometheus will be simpler to run, but at
some point you will need to shard servers explicitly along scalability
boundaries like products, services, datacenters, or similar aspects.
Independent servers (which can be run redundantly in parallel) may also give
you better reliability and failure isolation.
Kapacitor's open-source release has no built-in distributed/redundant options for
rules, alerting, or notifications. The open-source release of Kapacitor can
be scaled via manual sharding by the user, similar to Prometheus itself.
Influx offers [Enterprise Kapacitor](https://docs.influxdata.com/enterprise_kapacitor), which supports an
HA/redundant alerting system.
Prometheus and the Alertmanager by contrast offer a fully open-source redundant
option via running redundant replicas of Prometheus and using the Alertmanager's
[High Availability](https://github.com/prometheus/alertmanager#high-availability)
mode.
### Summary
There are many similarities between the systems. Both have labels (called tags
in InfluxDB) to efficiently support multi-dimensional metrics. Both use
basically the same data compression algorithms. Both have extensive
integrations, including with each other. Both have hooks allowing you to extend
them further, such as analyzing data in statistical tools or performing
automated actions.
Where InfluxDB is better:
* If you're doing event logging.
* Commercial option offers clustering for InfluxDB, which is also better for long term data storage.
* Eventually consistent view of data between replicas.
Where Prometheus is better:
* If you're primarily doing metrics.
* More powerful query language, alerting, and notification functionality.
* Higher availability and uptime for graphing and alerting.
InfluxDB is maintained by a single commercial company following the open-core
model, offering premium features like closed-source clustering, hosting and
support. Prometheus is a [fully open source and independent project](/community/), maintained
by a number of companies and individuals, some of whom also offer commercial services and support.
## Prometheus vs. OpenTSDB
[OpenTSDB](http://opentsdb.net/) is a distributed time series database based on
[Hadoop](http://hadoop.apache.org/) and [HBase](http://hbase.apache.org/).
### Scope
The same scope differences as in the case of
[Graphite](/docs/introduction/comparison/#prometheus-vs-graphite) apply here.
### Data model
OpenTSDB's data model is almost identical to Prometheus's: time series are
identified by a set of arbitrary key-value pairs (OpenTSDB tags are
Prometheus labels). All data for a metric is
[stored together](http://opentsdb.net/docs/build/html/user_guide/writing/index.html#time-series-cardinality),
limiting the cardinality of metrics. There are minor differences though: Prometheus
allows arbitrary characters in label values, while OpenTSDB is more restrictive.
OpenTSDB also lacks a full query language, only allowing simple aggregation and math via its API.
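To illustrate the difference (a sketch using a hypothetical `http_requests_total` counter with a `status` label), a PromQL expression can divide two independently aggregated rates to get a per-instance 5xx error ratio, which goes beyond simple per-metric aggregation and math:
```
sum by (instance) (rate(http_requests_total{status=~"5.."}[5m]))
  / sum by (instance) (rate(http_requests_total[5m]))
```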
### Storage
[OpenTSDB](http://opentsdb.net/)'s storage is implemented on top of
[Hadoop](http://hadoop.apache.org/) and [HBase](http://hbase.apache.org/). This
means that it is easy to scale OpenTSDB horizontally, but you have to accept
the overall complexity of running a Hadoop/HBase cluster from the beginning.
Prometheus will be simpler to run initially, but will require explicit sharding
once the capacity of a single node is exceeded.
### Summary
Prometheus offers a much richer query language, can handle higher cardinality
metrics, and forms part of a complete monitoring system. If you're already
running Hadoop and value long term storage over these benefits, OpenTSDB is a
good choice.
## Prometheus vs. Nagios
[Nagios](https://www.nagios.org/) is a monitoring system that originated in the
1990s as NetSaint.
### Scope
Nagios is primarily about alerting based on the exit codes of scripts. These are
called “checks”. There is silencing of individual alerts; however, there is no
grouping, routing, or deduplication.
There are a variety of plugins. For example, the few kilobytes of perfData that
plugins are allowed to return can be piped [to a time series database such as Graphite](https://github.com/shawn-sterling/graphios), or NRPE can be used to [run checks on remote machines](https://exchange.nagios.org/directory/Addons/Monitoring-Agents/NRPE--2D-Nagios-Remote-Plugin-Executor/details).
### Data model
Nagios is host-based. Each host can have one or more services and each service
can perform one check.
There is no notion of labels or a query language.
### Storage
Nagios has no storage per se, beyond the current check state.
There are plugins which can store data such as [for visualisation](https://docs.pnp4nagios.org/).
### Architecture
Nagios servers are standalone. All configuration of checks is via file.
### Summary
Nagios is suitable for basic monitoring of small and/or static systems where
blackbox probing is sufficient.
If you want to do whitebox monitoring, or have a dynamic or cloud based
environment, then Prometheus is a good choice.
## Prometheus vs. Sensu
[Sensu](https://sensu.io) is an open source monitoring and observability pipeline with a commercial distribution which offers additional features for scalability. It can reuse existing Nagios plugins.
### Scope
Sensu is an observability pipeline that focuses on processing and alerting of observability data as a stream of [Events](https://docs.sensu.io/sensu-go/latest/observability-pipeline/observe-events/events/). It provides an extensible framework for event [filtering](https://docs.sensu.io/sensu-go/latest/observability-pipeline/observe-filter/), aggregation, [transformation](https://docs.sensu.io/sensu-go/latest/observability-pipeline/observe-transform/), and [processing](https://docs.sensu.io/sensu-go/latest/observability-pipeline/observe-process/) – including sending alerts to other systems and storing events in third-party systems. Sensu's event processing capabilities are similar in scope to Prometheus alerting rules and Alertmanager.
### Data model
Sensu [Events](https://docs.sensu.io/sensu-go/latest/observability-pipeline/observe-events/events/) represent service health and/or [metrics](https://docs.sensu.io/sensu-go/latest/observability-pipeline/observe-events/events/#metric-attributes) in a structured data format identified by an [entity](https://docs.sensu.io/sensu-go/latest/observability-pipeline/observe-entities/entities/) name (e.g. server, cloud compute instance, container, or service), an event name, and optional [key-value metadata](https://docs.sensu.io/sensu-go/latest/observability-pipeline/observe-events/events/#metadata-attributes) called "labels" or "annotations". The Sensu Event payload may include one or more metric [`points`](https://docs.sensu.io/sensu-go/latest/observability-pipeline/observe-events/events/#points-attributes), represented as a JSON object containing a `name`, `tags` (key/value pairs), `timestamp`, and `value` (always a float).
### Storage
Sensu stores current and recent event status information and real-time inventory data in an embedded database (etcd) or an external RDBMS (PostgreSQL).
### Architecture
All components of a Sensu deployment can be clustered for high availability and improved event-processing throughput.
### Summary
Sensu and Prometheus have a few capabilities in common, but they take very different approaches to monitoring. Both offer extensible discovery mechanisms for dynamic cloud-based environments and ephemeral compute platforms, though the underlying mechanisms are quite different. Both provide support for collecting multi-dimensional metrics via labels and annotations. Both have extensive integrations, and Sensu natively supports collecting metrics from all Prometheus exporters. Both are capable of forwarding observability data to third-party data platforms (e.g. event stores or TSDBs). Where Sensu and Prometheus differ the most is in their use cases.
Where Sensu is better:
- If you're collecting and processing hybrid observability data (including metrics _and/or_ events)
- If you're consolidating multiple monitoring tools and need support for metrics _and_ Nagios-style plugins or check scripts
- More powerful event-processing platform
Where Prometheus is better:
- If you're primarily collecting and evaluating metrics
- If you're monitoring homogeneous Kubernetes infrastructure (if 100% of the workloads you're monitoring are in K8s, Prometheus offers better K8s integration)
- More powerful query language, and built-in support for historical data analysis
Sensu is maintained by a single commercial company following the open-core business model, offering premium features like closed-source event correlation and aggregation, federation, and support. Prometheus is a fully open source and independent project, maintained by a number of companies and individuals, some of whom also offer commercial services and support.
---
title: Glossary
sort_rank: 9
---
# Glossary
### Alert
An alert is the outcome of an alerting rule in Prometheus that is
actively firing. Alerts are sent from Prometheus to the Alertmanager.
### Alertmanager
The [Alertmanager](../../alerting/overview/) takes in alerts, aggregates them into
groups, de-duplicates, applies silences, throttles, and then sends out
notifications to email, Pagerduty, Slack etc.
### Bridge
A bridge is a component that takes samples from a client library and
exposes them to a non-Prometheus monitoring system. For example, the Python, Go, and Java clients can export metrics to Graphite.
### Client library
A client library is a library in some language (e.g. Go, Java, Python, Ruby)
that makes it easy to directly instrument your code, write custom collectors to
pull metrics from other systems and expose the metrics to Prometheus.
### Collector
A collector is a part of an exporter that represents a set of metrics. It may be
a single metric if it is part of direct instrumentation, or many metrics if it is pulling metrics from another system.
### Direct instrumentation
Direct instrumentation is instrumentation added inline as part of the source code of a program, using a [client library](#client-library).
### Endpoint
A source of metrics that can be scraped, usually corresponding to a single process.
### Exporter
An exporter is a binary running alongside the application you
want to obtain metrics from. The exporter exposes Prometheus metrics, commonly by converting metrics that are exposed in a non-Prometheus format into a format that Prometheus supports.
### Instance
An instance is a label that uniquely identifies a target in a job.
### Job
A collection of targets with the same purpose, for example monitoring a group of like processes replicated for scalability or reliability, is called a job.
### Notification
A notification represents a group of one or more alerts, and is sent by the Alertmanager to email, Pagerduty, Slack etc.
### Promdash
Promdash was a native dashboard builder for Prometheus. It has been deprecated and replaced by [Grafana](../../visualization/grafana/).
### Prometheus
Prometheus usually refers to the core binary of the Prometheus system. It may
also refer to the Prometheus monitoring system as a whole.
### PromQL
[PromQL](/docs/prometheus/latest/querying/basics/) is the Prometheus Query Language. It allows for
a wide range of operations including aggregation, slicing and dicing, prediction and joins.
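For example (an illustrative sketch; the metric names are not prescriptive, and the second expression assumes Node Exporter-style filesystem metrics):
```
# Aggregate a counter's per-second rate, keeping only the "job" label
sum by (job) (rate(http_requests_total[5m]))
# Predict a gauge's value 4 hours ahead, based on its last hour of samples
predict_linear(node_filesystem_avail_bytes{mountpoint="/"}[1h], 4 * 3600)
```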
### Pushgateway
The [Pushgateway](../../instrumenting/pushing/) persists the most recent push
of metrics from batch jobs. This allows Prometheus to scrape their metrics
after they have terminated.
### Recording Rules
Recording rules precompute frequently needed or computationally expensive expressions
and save their results as a new set of time series.
### Remote Read
Remote read is a Prometheus feature that allows transparent reading of time series from
other systems (such as long term storage) as part of queries.
### Remote Read Adapter
Not all systems directly support remote read. A remote read adapter sits between
Prometheus and another system, converting time series requests and responses between them.
### Remote Read Endpoint
A remote read endpoint is what Prometheus talks to when doing a remote read.
### Remote Write
Remote write is a Prometheus feature that allows sending ingested samples on the
fly to other systems, such as long term storage.
### Remote Write Adapter
Not all systems directly support remote write. A remote write adapter sits
between Prometheus and another system, converting the samples in the remote
write into a format the other system can understand.
### Remote Write Endpoint
A remote write endpoint is what Prometheus talks to when doing a remote write.
### Sample
A sample is a single value at a point in time in a time series.
In Prometheus, each sample consists of a float64 value and a millisecond-precision timestamp.
### Silence
A silence in the Alertmanager prevents alerts, with labels matching the silence, from
being included in notifications.
### Target
A target is the definition of an object to scrape. For example, what labels to apply, any authentication required to connect, or other information that defines how the scrape will occur.
### Time Series
The Prometheus time series are streams of timestamped values belonging to the same metric and the same set of labeled dimensions.
Prometheus stores all data as time series.
---
title: FAQ
sort_rank: 5
toc: full-width
---
# Frequently Asked Questions
## General
### What is Prometheus?
Prometheus is an open-source systems monitoring and alerting toolkit
with an active ecosystem.
It is the only system directly supported by [Kubernetes](https://kubernetes.io/) and the de facto standard across the [cloud native ecosystem](https://landscape.cncf.io/).
See the [overview](/docs/introduction/overview/).
### How does Prometheus compare against other monitoring systems?
See the [comparison](/docs/introduction/comparison/) page.
### What dependencies does Prometheus have?
The main Prometheus server runs standalone as a single monolithic binary and has no external dependencies.
#### Is this cloud native?
Yes.
Cloud native is a flexible operating model, breaking up old service boundaries to allow for more flexible and scalable deployments.
Prometheus's [service discovery](https://prometheus.io/docs/prometheus/latest/configuration/configuration/) integrates with most tools and clouds. Its dimensional data model and its ability to scale to tens of millions of active series allow it to monitor large cloud-native deployments.
There are always trade-offs to make when running services, and Prometheus values reliably getting alerts out to humans above all else.
### Can Prometheus be made highly available?
Yes, run identical Prometheus servers on two or more separate machines.
Identical alerts will be deduplicated by the [Alertmanager](https://github.com/prometheus/alertmanager).
Alertmanager supports [high availability](https://github.com/prometheus/alertmanager#high-availability) by interconnecting multiple Alertmanager instances to build an Alertmanager cluster. Instances of a cluster communicate using a gossip protocol managed via [HashiCorp's Memberlist](https://github.com/hashicorp/memberlist) library.
### I was told Prometheus “doesn't scale”.
This is often more of a marketing claim than anything else.
A single instance of Prometheus can be more performant than some systems positioning themselves as a long-term storage solution for Prometheus.
You can run Prometheus reliably with tens of millions of active series.
If you need more than that, there are several options. [Scaling and Federating Prometheus](https://www.robustperception.io/scaling-and-federating-prometheus/) on the Robust Perception blog is a good starting point, as are the long storage systems listed on our [integrations page](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage).
### What language is Prometheus written in?
Most Prometheus components are written in Go. Some are also written in Java,
Python, and Ruby.
### How stable are Prometheus features, storage formats, and APIs?
All repositories in the Prometheus GitHub organization that have reached
version 1.0.0 broadly follow
[semantic versioning](http://semver.org/). Breaking changes are indicated by
increments of the major version. Exceptions are possible for experimental
components, which are clearly marked as such in announcements.
Even repositories that have not yet reached version 1.0.0 are, in general, quite
stable. We aim for a proper release process and an eventual 1.0.0 release for
each repository. In any case, breaking changes will be pointed out in release
notes (marked by `[CHANGE]`) or communicated clearly for components that do not
have formal releases yet.
### Why do you pull rather than push?
Pulling over HTTP offers a number of advantages:
* You can start extra monitoring instances as needed, e.g. on your laptop when developing changes.
* You can more easily and reliably tell if a target is down.
* You can manually go to a target and inspect its health with a web browser.
Overall, we believe that pulling is slightly better than pushing, but it should
not be considered a major point when considering a monitoring system.
For cases where you must push, we offer the [Pushgateway](/docs/instrumenting/pushing/).
### How to feed logs into Prometheus?
Short answer: Don't! Use something like [Grafana Loki](https://grafana.com/oss/loki/) or [OpenSearch](https://opensearch.org/) instead.
Longer answer: Prometheus is a system to collect and process metrics, not an
event logging system. The Grafana blog post
[Logs and Metrics and Graphs, Oh My!](https://grafana.com/blog/2016/01/05/logs-and-metrics-and-graphs-oh-my/)
provides more details about the differences between logs and metrics.
If you want to extract Prometheus metrics from application logs, Grafana Loki is designed for just that. See Loki's [metric queries](https://grafana.com/docs/loki/latest/logql/metric_queries/) documentation.
### Who wrote Prometheus?
Prometheus was initially started privately by
[Matt T. Proud](http://www.matttproud.com) and
[Julius Volz](http://juliusv.com). The majority of its
initial development was sponsored by [SoundCloud](https://soundcloud.com).
It's now maintained and extended by a wide range of [companies](https://prometheus.devstats.cncf.io/d/5/companies-table?orgId=1) and [individuals](https://prometheus.io/governance).
### What license is Prometheus released under?
Prometheus is released under the
[Apache 2.0](https://github.com/prometheus/prometheus/blob/main/LICENSE) license.
### What is the plural of Prometheus?
After [extensive research](https://youtu.be/B_CDeYrqxjQ), it has been determined
that the correct plural of 'Prometheus' is 'Prometheis'.
If you cannot remember this, "Prometheus instances" is a good workaround.
### Can I reload Prometheus's configuration?
Yes, sending `SIGHUP` to the Prometheus process or an HTTP POST request to the
`/-/reload` endpoint will reload and apply the configuration file. The
various components attempt to handle failing changes gracefully.
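For example, either of the following will trigger a reload (a minimal sketch; the HTTP endpoint only works if Prometheus was started with `--web.enable-lifecycle`, and `pidof` assumes a Linux host):

```bash
# Reload by signal
kill -HUP $(pidof prometheus)

# Or reload over HTTP (requires --web.enable-lifecycle)
curl -X POST http://localhost:9090/-/reload
```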
### Can I send alerts?
Yes, with the [Alertmanager](https://github.com/prometheus/alertmanager).
We support sending alerts through [email, various native integrations](https://prometheus.io/docs/alerting/latest/configuration/), and a [webhook system anyone can add integrations to](https://prometheus.io/docs/operating/integrations/#alertmanager-webhook-receiver).
### Can I create dashboards?
Yes, we recommend [Grafana](/docs/visualization/grafana/) for production
usage. There are also [Console templates](/docs/visualization/consoles/).
### Can I change the timezone? Why is everything in UTC?
To avoid any kind of timezone confusion, especially when the so-called
daylight saving time is involved, we decided to exclusively use Unix
time internally and UTC for display purposes in all components of
Prometheus. A carefully done timezone selection could be introduced
into the UI. Contributions are welcome. See
[issue #500](https://github.com/prometheus/prometheus/issues/500)
for the current state of this effort.
## Instrumentation
### Which languages have instrumentation libraries?
There are a number of client libraries for instrumenting your services with
Prometheus metrics. See the [client libraries](/docs/instrumenting/clientlibs/)
documentation for details.
If you are interested in contributing a client library for a new language, see
the [exposition formats](/docs/instrumenting/exposition_formats/).
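To give a rough idea of what instrumentation looks like, here is a minimal sketch using the Python client library; the metric name and port are arbitrary examples:

```python
from prometheus_client import Counter, start_http_server
import time

REQUESTS = Counter('app_requests_total', 'Total requests handled')

if __name__ == '__main__':
    start_http_server(8000)  # exposes metrics at http://localhost:8000/metrics
    while True:
        REQUESTS.inc()       # pretend we handled a request
        time.sleep(1)
```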
### Can I monitor machines?
Yes, the [Node Exporter](https://github.com/prometheus/node_exporter) exposes
an extensive set of machine-level metrics on Linux and other Unix systems such
as CPU usage, memory, disk utilization, filesystem fullness, and network
bandwidth.
### Can I monitor network devices?
Yes, the [SNMP Exporter](https://github.com/prometheus/snmp_exporter) allows
monitoring of devices that support SNMP.
For industrial networks, there's also a [Modbus exporter](https://github.com/RichiH/modbus_exporter).
### Can I monitor batch jobs?
Yes, using the [Pushgateway](/docs/instrumenting/pushing/). See also the
[best practices](/docs/practices/instrumentation/#batch-jobs) for monitoring batch
jobs.
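As an illustration, a batch job can push a metric to a Pushgateway with curl when it finishes (a sketch; the metric, job name, and Pushgateway address are placeholders):

```bash
echo "my_batch_job_duration_seconds 42" | \
  curl --data-binary @- http://localhost:9091/metrics/job/my_batch_job
```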
### What applications can Prometheus monitor out of the box?
See [the list of exporters and integrations](/docs/instrumenting/exporters/).
### Can I monitor JVM applications via JMX?
Yes, for applications that you cannot instrument directly with the Java client, you can use the [JMX Exporter](https://github.com/prometheus/jmx_exporter)
either standalone or as a Java Agent.
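A rough sketch of running it as a Java agent; the jar file name, port, and config path are illustrative and depend on the exporter version you download:

```bash
java -javaagent:./jmx_prometheus_javaagent.jar=12345:config.yaml -jar your-application.jar
```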
### What is the performance impact of instrumentation?
Performance across client libraries and languages may vary. For Java,
[benchmarks](https://github.com/prometheus/client_java/blob/master/benchmarks/README.md)
indicate that incrementing a counter/gauge with the Java client will take
12-17ns, depending on contention. This is negligible for all but the most
latency-critical code.
## Implementation
### Why are all sample values 64-bit floats?
We restrained ourselves to 64-bit floats to simplify the design. The
[IEEE 754 double-precision binary floating-point
format](http://en.wikipedia.org/wiki/Double-precision_floating-point_format)
supports integer precision for values up to 2<sup>53</sup>. Supporting
native 64-bit integers would (only) help if you need integer precision
above 2<sup>53</sup> but below 2<sup>63</sup>. In principle, support
for different sample value types (including some kind of big integer,
supporting even more than 64 bits) could be implemented, but it is not
a priority right now. A counter, even if incremented one million times per
second, will only run into precision issues after over 285 years.
---
title: First steps
sort_rank: 3
---
# First steps with Prometheus
Welcome to Prometheus! Prometheus is a monitoring platform that collects metrics from monitored targets by scraping metrics HTTP endpoints on these targets. This guide will show you how to install, configure, and monitor your first resource with Prometheus. You'll download, install, and run Prometheus. You'll also download and install an exporter, a tool that exposes time series data on hosts and services. Our first exporter will be Prometheus itself, which provides a wide variety of host-level metrics about memory usage, garbage collection, and more.
## Downloading Prometheus
[Download the latest release](/download) of Prometheus for your platform, then
extract it:
```language-bash
tar xvfz prometheus-*.tar.gz
cd prometheus-*
```
The Prometheus server is a single binary called `prometheus` (or `prometheus.exe` on Microsoft Windows). We can run the binary and see help on its options by passing the `--help` flag.
```language-bash
./prometheus --help
usage: prometheus [<flags>]
The Prometheus monitoring server
. . .
```
Before starting Prometheus, let's configure it.
## Configuring Prometheus
Prometheus configuration is [YAML](https://yaml.org/). The Prometheus download comes with a sample configuration in a file called `prometheus.yml` that is a good place to get started.
We've stripped out most of the comments in the example file to make it more succinct (comments are the lines prefixed with a `#`).
```language-yaml
global:
scrape_interval: 15s
evaluation_interval: 15s
rule_files:
# - "first.rules"
# - "second.rules"
scrape_configs:
- job_name: prometheus
static_configs:
- targets: ['localhost:9090']
```
There are three blocks of configuration in the example configuration file: `global`, `rule_files`, and `scrape_configs`.
The `global` block controls the Prometheus server's global configuration. We have two options present. The first, `scrape_interval`, controls how often Prometheus will scrape targets. You can override this for individual targets. In this case the global setting is to scrape every 15 seconds. The `evaluation_interval` option controls how often Prometheus will evaluate rules. Prometheus uses rules to create new time series and to generate alerts.
The `rule_files` block specifies the location of any rules we want the Prometheus server to load. For now we've got no rules.
The last block, `scrape_configs`, controls what resources Prometheus monitors. Since Prometheus also exposes data about itself as an HTTP endpoint it can scrape and monitor its own health. In the default configuration there is a single job, called `prometheus`, which scrapes the time series data exposed by the Prometheus server. The job contains a single, statically configured, target, the `localhost` on port `9090`. Prometheus expects metrics to be available on targets on a path of `/metrics`. So this default job is scraping via the URL: http://localhost:9090/metrics.
The time series data returned will detail the state and performance of the Prometheus server.
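As noted above, the global `scrape_interval` can be overridden for individual jobs. A hedged sketch (the job name and target are placeholders):

```language-yaml
scrape_configs:
  - job_name: slow-moving-service
    scrape_interval: 60s   # overrides the global 15s for this job only
    static_configs:
      - targets: ['localhost:8080']
```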
For a complete specification of configuration options, see the
[configuration documentation](/docs/operating/configuration).
## Starting Prometheus
To start Prometheus with our newly created configuration file, change to the directory containing the Prometheus binary and run:
```language-bash
./prometheus --config.file=prometheus.yml
```
Prometheus should start up. You should also be able to browse to a status page about itself at http://localhost:9090. Give it about 30 seconds to collect data about itself from its own HTTP metrics endpoint.
You can also verify that Prometheus is serving metrics about itself by
navigating to its own metrics endpoint: http://localhost:9090/metrics.
## Using the expression browser
Let us try looking at some data that Prometheus has collected about itself. To
use Prometheus's built-in expression browser, navigate to
http://localhost:9090/graph and choose the "Table" view within the "Graph"
tab.
As you can gather from http://localhost:9090/metrics, one metric that
Prometheus exports about itself is called
`promhttp_metric_handler_requests_total` (the total number of `/metrics` requests the Prometheus server has served). Go ahead and enter this into the expression console:
```
promhttp_metric_handler_requests_total
```
This should return a number of different time series (along with the latest value recorded for each), all with the metric name `promhttp_metric_handler_requests_total`, but with different labels. These labels designate different request statuses.
If we were only interested in requests that resulted in HTTP code `200`, we could use this query to retrieve that information:
```
promhttp_metric_handler_requests_total{code="200"}
```
To count the number of returned time series, you could write:
```
count(promhttp_metric_handler_requests_total)
```
For more about the expression language, see the
[expression language documentation](/docs/querying/basics/).
## Using the graphing interface
To graph expressions, navigate to http://localhost:9090/graph and use the "Graph" tab.
For example, enter the following expression to graph the per-second HTTP request rate returning status code 200 happening in the self-scraped Prometheus:
```
rate(promhttp_metric_handler_requests_total{code="200"}[1m])
```
You can experiment with the graph range parameters and other settings.
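To see the request rate broken down by status code rather than per individual series, you could also try an aggregation such as this (illustrative):

```
sum by (code) (rate(promhttp_metric_handler_requests_total[1m]))
```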
## Monitoring other targets
Collecting metrics from Prometheus alone isn't a great representation of Prometheus' capabilities. To get a better sense of what Prometheus can do, we recommend exploring documentation about other exporters. The [Monitoring Linux or macOS host metrics using a node exporter](/docs/guides/node-exporter) guide is a good place to start.
## Summary
In this guide, you installed Prometheus, configured a Prometheus instance to monitor resources, and learned some basics of working with time series data in Prometheus' expression browser. To continue learning about Prometheus, check out the [Overview](/docs/introduction/overview) for some ideas about what to explore next.
---
title: Query Log
sort_rank: 1
---
# Using the Prometheus query log
Prometheus has the ability to log all the queries run by the engine to a log
file, as of 2.16.0. This guide demonstrates how to use that log file, which
fields it contains, and provides advanced tips about how to operate the log
file.
## Enable the query log
The query log can be toggled at runtime. It can therefore be activated when you
want to investigate slowness or high load on your Prometheus instance.
To enable or disable the query log, two steps are needed:
1. Adapt the configuration to add or remove the query log configuration.
1. Reload the Prometheus server configuration.
### Logging all the queries to a file
This example demonstrates how to log all the queries to
a file called `/prometheus/query.log`. We will assume that `/prometheus` is the
data directory and that Prometheus has write access to it.
First, adapt the `prometheus.yml` configuration file:
```yaml
global:
scrape_interval: 15s
evaluation_interval: 15s
query_log_file: /prometheus/query.log
scrape_configs:
- job_name: 'prometheus'
static_configs:
- targets: ['localhost:9090']
```
Then, [reload](/docs/prometheus/latest/management_api/#reload) the Prometheus configuration:
```shell
$ curl -X POST http://127.0.0.1:9090/-/reload
```
Or, if Prometheus is not launched with `--web.enable-lifecycle`, and you're not
running on Windows, you can trigger the reload by sending a SIGHUP to the
Prometheus process.
The file `/prometheus/query.log` should now exist and all the queries
will be logged to that file.
To disable the query log, repeat the operation but remove `query_log_file` from
the configuration.
## Verifying if the query log is enabled
Prometheus conveniently exposes metrics that indicate whether the query log is
enabled and working:
```
# HELP prometheus_engine_query_log_enabled State of the query log.
# TYPE prometheus_engine_query_log_enabled gauge
prometheus_engine_query_log_enabled 0
# HELP prometheus_engine_query_log_failures_total The number of query log failures.
# TYPE prometheus_engine_query_log_failures_total counter
prometheus_engine_query_log_failures_total 0
```
The first metric, `prometheus_engine_query_log_enabled`, is set to 1 if the
query log is enabled, and 0 otherwise.
The second one, `prometheus_engine_query_log_failures_total`, indicates the
number of queries that could not be logged.
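For a quick check from the command line (assuming Prometheus is listening on localhost:9090):

```shell
curl -s http://localhost:9090/metrics | grep prometheus_engine_query_log
```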
## Format of the query log
The query log is a JSON-formatted log. Here is an overview of the fields
present for a query:
```
{
"params": {
"end": "2020-02-08T14:59:50.368Z",
"query": "up == 0",
"start": "2020-02-08T13:59:50.368Z",
"step": 5
},
"stats": {
"timings": {
"evalTotalTime": 0.000447452,
"execQueueTime": 7.599e-06,
"execTotalTime": 0.000461232,
"innerEvalTime": 0.000427033,
"queryPreparationTime": 1.4177e-05,
"resultSortTime": 6.48e-07
}
},
"ts": "2020-02-08T14:59:50.387Z"
}
```
- `params`: The query parameters: the start and end timestamps, the step, and the
  actual query statement.
- `stats`: Statistics. Currently, it contains internal engine timers.
- `ts`: The timestamp when the query ended.
Depending on what triggered the request, additional fields will be present
in the JSON lines.
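Because each entry is a single JSON line, standard tools work well for ad-hoc inspection. For example, assuming `jq` is installed and the log file lives at the path configured above:

```shell
tail -n 5 /prometheus/query.log | jq .
```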
### API Queries and consoles
HTTP requests contain the client IP, the method, and the path:
```
{
"httpRequest": {
"clientIP": "127.0.0.1",
"method": "GET",
"path": "/api/v1/query_range"
}
}
```
The path will contain the web prefix if it is set, and can also point to a
console.
The client IP is the network IP address and does not take into consideration
headers like `X-Forwarded-For`. If you wish to log the original caller behind a
proxy, you need to do so in the proxy itself.
### Recording rules and alerts
Recording rules and alerts contain a ruleGroup element which contains the path
of the file and the name of the group:
```
{
"ruleGroup": {
"file": "rules.yml",
"name": "partners"
}
}
```
## Rotating the query log
Prometheus will not rotate the query log itself. Instead, you can use external
tools to do so.
One of those tools is logrotate. It is enabled by default on most Linux
distributions.
Here is an example of a file you can add as
`/etc/logrotate.d/prometheus`:
```
/prometheus/query.log {
daily
rotate 7
compress
delaycompress
postrotate
killall -HUP prometheus
endscript
}
```
That will rotate your file daily and keep one week of history.
---
title: Basic auth
---
# Securing Prometheus API and UI endpoints using basic auth
Prometheus supports [basic authentication](https://en.wikipedia.org/wiki/Basic_access_authentication) (aka "basic auth") for connections to the Prometheus [expression browser](/docs/visualization/browser) and [HTTP API](/docs/prometheus/latest/querying/api).
NOTE: This tutorial covers basic auth connections *to* Prometheus instances. Basic auth is also supported for connections *from* Prometheus instances to [scrape targets](../../prometheus/latest/configuration/configuration/#scrape_config).
## Hashing a password
Let's say that you want to require a username and password from all users accessing the Prometheus instance. For this example, use `admin` as the username and choose any password you'd like.
First, generate a [bcrypt](https://en.wikipedia.org/wiki/Bcrypt) hash of the password.
To generate a hashed password, we will use python3-bcrypt.
Let's install it by running `apt install python3-bcrypt`, assuming you are
running a debian-like distribution. Other alternatives exist to generate hashed
passwords; for testing you can also use [bcrypt generators on the
web](https://bcrypt-generator.com/).
Here is a python script which uses python3-bcrypt to prompt for a password and
hash it:
```python
import getpass
import bcrypt
password = getpass.getpass("password: ")
hashed_password = bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt())
print(hashed_password.decode())
```
Save that script as `gen-pass.py` and run it:
```shell
$ python3 gen-pass.py
```
That should prompt you for a password:
```
password:
$2b$12$hNf2lSsxfm0.i4a.1kVpSOVyBCfIB51VRjgBUyv6kdnyTlgWj81Ay
```
In this example, I used "test" as the password.
Save that hashed password somewhere; we will use it in the next steps!
## Creating web.yml
Let's create a web.yml file
([documentation](https://prometheus.io/docs/prometheus/latest/configuration/https/)),
with the following content:
```yaml
basic_auth_users:
admin: $2b$12$hNf2lSsxfm0.i4a.1kVpSOVyBCfIB51VRjgBUyv6kdnyTlgWj81Ay
```
You can validate that file with `promtool check web-config web.yml`
```shell
$ promtool check web-config web.yml
web.yml SUCCESS
```
You can add multiple users to the file.
## Launching Prometheus
You can launch prometheus with the web configuration file as follows:
```shell
$ prometheus --web.config.file=web.yml
```
## Testing
You can use cURL to interact with your setup. Try this request:
```bash
curl --head http://localhost:9090/graph
```
This will return a `401 Unauthorized` response because you've failed to supply a valid username and password.
To successfully access Prometheus endpoints using basic auth, for example the `/metrics` endpoint, supply the proper username using the `-u` flag and supply the password when prompted:
```bash
curl -u admin http://localhost:9090/metrics
Enter host password for user 'admin':
```
That should return Prometheus metrics output, which should look something like this:
```
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0.0001343
go_gc_duration_seconds{quantile="0.25"} 0.0002032
go_gc_duration_seconds{quantile="0.5"} 0.0004485
...
```
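The same credentials work against the HTTP API, for example (using the password "test" hashed earlier; substitute your own):

```bash
curl -u admin:test 'http://localhost:9090/api/v1/query?query=up'
```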
## Summary
In this guide, you stored a username and a hashed password in a `web.yml` file, and launched Prometheus with the parameter required to use the credentials in that file to authenticate users accessing Prometheus' HTTP endpoints.
---
title: Understanding and using the multi-target exporter pattern
---
# Understanding and using the multi-target exporter pattern
This guide will introduce you to the multi-target exporter pattern. To achieve this we will:
* describe the multi-target exporter pattern and why it is used,
* run the [blackbox](https://github.com/prometheus/blackbox_exporter) exporter as an example of the pattern,
* configure a custom query module for the blackbox exporter,
* let the blackbox exporter run basic metric queries against the Prometheus [website](https://prometheus.io),
* examine a popular pattern of configuring Prometheus to scrape exporters using relabeling.
## The multi-target exporter pattern?
By multi-target [exporter](/docs/instrumenting/exporters/) pattern we refer to a specific design, in which:
* the exporter will get the target’s metrics via a network protocol.
* the exporter does not have to run on the machine the metrics are taken from.
* the exporter gets the targets and a query config string as parameters of Prometheus’ GET request.
* the exporter starts the scrape only after getting Prometheus’ GET request and returns the results once it is done scraping.
* the exporter can query multiple targets.
This pattern is only used for certain exporters, such as the [blackbox](https://github.com/prometheus/blackbox_exporter) and the [SNMP exporter](https://github.com/prometheus/snmp_exporter).
The reason is that we either can’t run an exporter on the targets, e.g. network gear speaking SNMP, or that we are explicitly interested in the distance, e.g. latency and reachability of a website from a specific point outside of our network, a common use case for the [blackbox](https://github.com/prometheus/blackbox_exporter) exporter.
## Running multi-target exporters
Multi-target exporters are flexible regarding their environment and can be run in many ways: as regular programs, in containers, as background services, on bare metal, or on virtual machines. Because they are queried over the network and themselves query targets over the network, they need the appropriate ports open. Otherwise they are frugal.
Now try it out for yourself!
Use [Docker](https://www.docker.com/) to start a blackbox exporter container by running this in a terminal. Depending on your system configuration you might need to prepend the command with a `sudo`:
```bash
docker run -p 9115:9115 prom/blackbox-exporter
```
You should see a few log lines and if everything went well the last one should report `msg="Listening on address"` as seen here:
```
level=info ts=2018-10-17T15:41:35.4997596Z caller=main.go:324 msg="Listening on address" address=:9115
```
## Basic querying of multi-target exporters
There are two ways of querying:
1. Querying the exporter itself. It has its own metrics, usually available at `/metrics`.
1. Querying the exporter to scrape another target. Usually available at a "descriptive" endpoint, e.g. `/probe`. This is likely what you are primarily interested in when using multi-target exporters.
You can manually try the first query type with curl in another terminal or use this [link](http://localhost:9115/metrics):
<a name="query-exporter"></a>
```bash
curl 'localhost:9115/metrics'
```
The response should be something like this:
```
# HELP blackbox_exporter_build_info A metric with a constant '1' value labeled by version, revision, branch, and goversion from which blackbox_exporter was built.
# TYPE blackbox_exporter_build_info gauge
blackbox_exporter_build_info{branch="HEAD",goversion="go1.10",revision="4a22506cf0cf139d9b2f9cde099f0012d9fcabde",version="0.12.0"} 1
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
go_gc_duration_seconds{quantile="0.25"} 0
go_gc_duration_seconds{quantile="0.5"} 0
go_gc_duration_seconds{quantile="0.75"} 0
go_gc_duration_seconds{quantile="1"} 0
go_gc_duration_seconds_sum 0
go_gc_duration_seconds_count 0
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 9
[…]
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 0.05
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 7
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 7.8848e+06
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.54115492874e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1.5609856e+07
```
Those are metrics in the Prometheus [format](/docs/instrumenting/exposition_formats/#text-format-example). They come from the exporter’s [instrumentation](/docs/practices/instrumentation/) and tell us about the state of the exporter itself while it is running. This is called whitebox monitoring and is very useful in daily ops practice. If you are curious, try out our guide on how to [instrument your own applications](https://prometheus.io/docs/guides/go-application/).
For the second type of querying we need to provide a target and module as parameters in the HTTP GET request. The target is a URI or IP and the module must be defined in the exporter’s configuration. The blackbox exporter container comes with a meaningful default configuration.
We will use the target `prometheus.io` and the predefined module `http_2xx`. It tells the exporter to make a GET request like a browser would if you go to `prometheus.io` and to expect a [200 OK](https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#2xx_Success) response.
You can now tell your blackbox exporter to query `prometheus.io` in the terminal with curl:
```bash
curl 'localhost:9115/probe?target=prometheus.io&module=http_2xx'
```
This will return a lot of metrics:
```
# HELP probe_dns_lookup_time_seconds Returns the time taken for probe dns lookup in seconds
# TYPE probe_dns_lookup_time_seconds gauge
probe_dns_lookup_time_seconds 0.061087943
# HELP probe_duration_seconds Returns how long the probe took to complete in seconds
# TYPE probe_duration_seconds gauge
probe_duration_seconds 0.065580871
# HELP probe_failed_due_to_regex Indicates if probe failed due to regex
# TYPE probe_failed_due_to_regex gauge
probe_failed_due_to_regex 0
# HELP probe_http_content_length Length of http content response
# TYPE probe_http_content_length gauge
probe_http_content_length 0
# HELP probe_http_duration_seconds Duration of http request by phase, summed over all redirects
# TYPE probe_http_duration_seconds gauge
probe_http_duration_seconds{phase="connect"} 0
probe_http_duration_seconds{phase="processing"} 0
probe_http_duration_seconds{phase="resolve"} 0.061087943
probe_http_duration_seconds{phase="tls"} 0
probe_http_duration_seconds{phase="transfer"} 0
# HELP probe_http_redirects The number of redirects
# TYPE probe_http_redirects gauge
probe_http_redirects 0
# HELP probe_http_ssl Indicates if SSL was used for the final redirect
# TYPE probe_http_ssl gauge
probe_http_ssl 0
# HELP probe_http_status_code Response HTTP status code
# TYPE probe_http_status_code gauge
probe_http_status_code 0
# HELP probe_http_version Returns the version of HTTP of the probe response
# TYPE probe_http_version gauge
probe_http_version 0
# HELP probe_ip_protocol Specifies whether probe ip protocol is IP4 or IP6
# TYPE probe_ip_protocol gauge
probe_ip_protocol 6
# HELP probe_success Displays whether or not the probe was a success
# TYPE probe_success gauge
probe_success 0
```
Notice that almost all metrics have a value of `0`. The last one reads `probe_success 0`. This means the prober could not successfully reach `prometheus.io`. The reason is hidden in the metric `probe_ip_protocol` with the value `6`. By default the prober uses [IPv6](https://en.wikipedia.org/wiki/IPv6) until told otherwise. But the Docker daemon blocks IPv6 until told otherwise. Hence our blackbox exporter running in a Docker container can’t connect via IPv6.
We could now either tell Docker to allow IPv6 or the blackbox exporter to use IPv4. In the real world both can make sense and as so often the answer to the question "what is to be done?" is "it depends". Because this is an exporter guide we will change the exporter and take the opportunity to configure a custom module.
## Configuring modules
The modules are predefined in a file inside the docker container called `config.yml` which is a copy of [blackbox.yml](https://github.com/prometheus/blackbox_exporter/blob/master/blackbox.yml) in the github repo.
We will copy this file, [adapt](https://github.com/prometheus/blackbox_exporter/blob/master/CONFIGURATION.md) it to our own needs and tell the exporter to use our config file instead of the one included in the container.
First download the file using curl or your browser:
```bash
curl -o blackbox.yml https://raw.githubusercontent.com/prometheus/blackbox_exporter/master/blackbox.yml
```
Open it in an editor. The first few lines look like this:
```yaml
modules:
http_2xx:
prober: http
http_post_2xx:
prober: http
http:
method: POST
```
[YAML](https://en.wikipedia.org/wiki/YAML) uses whitespace indentation to express hierarchy, so you can recognise that two `modules` named `http_2xx` and `http_post_2xx` are defined, and that they both have a prober `http` and for one the method value is specifically set to `POST`.
You will now change the module `http_2xx` by setting the `preferred_ip_protocol` of the prober `http` explicitly to the string `ip4`.
```yaml
modules:
http_2xx:
prober: http
http:
preferred_ip_protocol: "ip4"
http_post_2xx:
prober: http
http:
method: POST
```
If you want to know more about the available probers and options check out the [documentation](https://github.com/prometheus/blackbox_exporter/blob/master/CONFIGURATION.md).
Now we need to tell the blackbox exporter to use our freshly changed file. You can do that with the flag `--config.file="blackbox.yml"`. But because we are using Docker, we first must make this file [available](https://docs.docker.com/storage/bind-mounts/) inside the container using the `--mount` flag.
NOTE: If you are using macOS you first need to allow the Docker daemon to access the directory in which your `blackbox.yml` is. You can do that by clicking on the little Docker whale in menu bar and then on `Preferences`->`File Sharing`->`+`. Afterwards press `Apply & Restart`.
First you stop the old container by changing into its terminal and pressing `ctrl+c`.
Make sure you are in the directory containing your `blackbox.yml`.
Then you run this command. It is long, but we will explain it:
<a name="run-exporter"></a>
```bash
docker \
run -p 9115:9115 \
--mount type=bind,source="$(pwd)"/blackbox.yml,target=/blackbox.yml,readonly \
prom/blackbox-exporter \
--config.file="/blackbox.yml"
```
With this command, you told `docker` to:
1. `run` a container with the port `9115` outside the container mapped to the port `9115` inside of the container.
1. `mount` from your current directory (`$(pwd)` stands for print working directory) the file `blackbox.yml` into `/blackbox.yml` in `readonly` mode.
1. use the image `prom/blackbox-exporter` from [Docker hub](https://hub.docker.com/r/prom/blackbox-exporter/).
1. run the blackbox-exporter with the flag `--config.file` telling it to use `/blackbox.yml` as config file.
If everything is correct, you should see something like this:
```
level=info ts=2018-10-19T12:40:51.650462756Z caller=main.go:213 msg="Starting blackbox_exporter" version="(version=0.12.0, branch=HEAD, revision=4a22506cf0cf139d9b2f9cde099f0012d9fcabde)"
level=info ts=2018-10-19T12:40:51.653357722Z caller=main.go:220 msg="Loaded config file"
level=info ts=2018-10-19T12:40:51.65349635Z caller=main.go:324 msg="Listening on address" address=:9115
```
Now you can try our new IPv4-using module `http_2xx` in a terminal:
```bash
curl 'localhost:9115/probe?target=prometheus.io&module=http_2xx'
```
Which should return Prometheus metrics like this:
```
# HELP probe_dns_lookup_time_seconds Returns the time taken for probe dns lookup in seconds
# TYPE probe_dns_lookup_time_seconds gauge
probe_dns_lookup_time_seconds 0.02679421
# HELP probe_duration_seconds Returns how long the probe took to complete in seconds
# TYPE probe_duration_seconds gauge
probe_duration_seconds 0.461619124
# HELP probe_failed_due_to_regex Indicates if probe failed due to regex
# TYPE probe_failed_due_to_regex gauge
probe_failed_due_to_regex 0
# HELP probe_http_content_length Length of http content response
# TYPE probe_http_content_length gauge
probe_http_content_length -1
# HELP probe_http_duration_seconds Duration of http request by phase, summed over all redirects
# TYPE probe_http_duration_seconds gauge
probe_http_duration_seconds{phase="connect"} 0.062076202999999996
probe_http_duration_seconds{phase="processing"} 0.23481845699999998
probe_http_duration_seconds{phase="resolve"} 0.029594103
probe_http_duration_seconds{phase="tls"} 0.163420078
probe_http_duration_seconds{phase="transfer"} 0.002243199
# HELP probe_http_redirects The number of redirects
# TYPE probe_http_redirects gauge
probe_http_redirects 1
# HELP probe_http_ssl Indicates if SSL was used for the final redirect
# TYPE probe_http_ssl gauge
probe_http_ssl 1
# HELP probe_http_status_code Response HTTP status code
# TYPE probe_http_status_code gauge
probe_http_status_code 200
# HELP probe_http_uncompressed_body_length Length of uncompressed response body
# TYPE probe_http_uncompressed_body_length gauge
probe_http_uncompressed_body_length 14516
# HELP probe_http_version Returns the version of HTTP of the probe response
# TYPE probe_http_version gauge
probe_http_version 1.1
# HELP probe_ip_protocol Specifies whether probe ip protocol is IP4 or IP6
# TYPE probe_ip_protocol gauge
probe_ip_protocol 4
# HELP probe_ssl_earliest_cert_expiry Returns earliest SSL cert expiry in unixtime
# TYPE probe_ssl_earliest_cert_expiry gauge
probe_ssl_earliest_cert_expiry 1.581897599e+09
# HELP probe_success Displays whether or not the probe was a success
# TYPE probe_success gauge
probe_success 1
# HELP probe_tls_version_info Contains the TLS version used
# TYPE probe_tls_version_info gauge
probe_tls_version_info{version="TLS 1.3"} 1
```
You can see that the probe was successful and get many useful metrics, like latency by phase, status code, ssl status or certificate expiry in [Unix time](https://en.wikipedia.org/wiki/Unix_time).
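As an illustration of how such metrics are typically used, an expression like the following (a sketch, not part of this guide's setup) turns the certificate expiry into days remaining:

```
(probe_ssl_earliest_cert_expiry - time()) / 86400
```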
The blackbox exporter also offers a tiny web interface at [localhost:9115](http://localhost:9115) for you to check out the last few probes, the loaded config and debug information. It even offers a direct link to probe `prometheus.io`. Handy if you are wondering why something does not work.
## Querying multi-target exporters with Prometheus
So far, so good. Congratulate yourself. The blackbox exporter works and you can manually tell it to query a remote target. You are almost there. Now you need to tell Prometheus to do the queries for us.
Below you find a minimal Prometheus config. It tells Prometheus to scrape the exporter itself, as we did [before](#query-exporter) using `curl 'localhost:9115/metrics'`:
NOTE: If you use Docker for Mac or Docker for Windows, you can’t use `localhost:9115` in the last line, but must use `host.docker.internal:9115`. This has to do with the virtual machines used to implement Docker on those operating systems. You should not use this in production.
`prometheus.yml` for Linux:
```yaml
global:
scrape_interval: 5s
scrape_configs:
- job_name: blackbox # To get metrics about the exporter itself
metrics_path: /metrics
static_configs:
- targets:
- localhost:9115
```
`prometheus.yml` for macOS and Windows:
```yaml
global:
scrape_interval: 5s
scrape_configs:
- job_name: blackbox # To get metrics about the exporter itself
metrics_path: /metrics
static_configs:
- targets:
- host.docker.internal:9115
```
Now run a Prometheus container and tell it to mount our config file from above. Because of the way networking on the host is addressable from the container, you need to use a slightly different command on Linux than on macOS and Windows:
<a name=run-prometheus></a>
Run Prometheus on Linux (don’t use `--network="host"` in production):
```bash
docker \
run --network="host"\
--mount type=bind,source="$(pwd)"/prometheus.yml,target=/prometheus.yml,readonly \
prom/prometheus \
--config.file="/prometheus.yml"
```
Run Prometheus on macOS and Windows:
```bash
docker \
run -p 9090:9090 \
--mount type=bind,source="$(pwd)"/prometheus.yml,target=/prometheus.yml,readonly \
prom/prometheus \
--config.file="/prometheus.yml"
```
This command works similarly to [running the blackbox exporter using a config file](#run-exporter).
If everything worked, you should be able to go to [localhost:9090/targets](http://localhost:9090/targets) and see under `blackbox` an endpoint with the state `UP` in green. If you get a red `DOWN` make sure that the blackbox exporter you started [above](#run-exporter) is still running. If you see nothing or a yellow `UNKNOWN` you are really fast and need to give it a few more seconds before reloading your browser’s tab.
To tell Prometheus to query `"localhost:9115/probe?target=prometheus.io&module=http_2xx"` you add another scrape job `blackbox-http` where you set the `metrics_path` to `/probe` and the parameters under `params:` in the Prometheus config file `prometheus.yml`:
<a name="prometheus-config"></a>
```yaml
global:
scrape_interval: 5s
scrape_configs:
- job_name: blackbox # To get metrics about the exporter itself
metrics_path: /metrics
static_configs:
- targets:
- localhost:9115 # For Windows and macOS replace with - host.docker.internal:9115
- job_name: blackbox-http # To get metrics about the exporter’s targets
metrics_path: /probe
params:
module: [http_2xx]
target: [prometheus.io]
static_configs:
- targets:
- localhost:9115 # For Windows and macOS replace with - host.docker.internal:9115
```
After saving the config file, switch to the terminal with your Prometheus Docker container, stop it by pressing `ctrl+C`, and start it again to reload the configuration using the existing [command](#run-prometheus).
The terminal should return the message `"Server is ready to receive web requests."` and after a few seconds you should start to see colourful graphs in [your Prometheus](http://localhost:9090/graph?g0.range_input=5m&g0.stacked=0&g0.expr=probe_http_duration_seconds&g0.tab=0).
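Stopping and starting the container works, but Prometheus can also reload its configuration in place when it receives a SIGHUP. A minimal sketch, assuming the container started [above](#run-prometheus) is the only one running the `prom/prometheus` image:
```bash
# Send SIGHUP to the running Prometheus container so it re-reads prometheus.yml.
docker kill --signal=SIGHUP "$(docker ps --filter ancestor=prom/prometheus --format '{{.ID}}')"
```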
This works, but it has a few disadvantages:
1. The actual targets are specified in the `params` config, which is very unusual and hard to understand later.
1. The `instance` label has the value of the blackbox exporter’s address which is technically true, but not what we are interested in.
1. We can’t see which URL we probed. This is impractical and will also mix up different metrics into one if we probe several URLs.
To fix this, we will use [relabeling](/docs/prometheus/latest/configuration/configuration/#relabel_config).
Relabeling is useful here because behind the scenes many things in Prometheus are configured with internal labels.
The details are complicated and out of scope for this guide. Hence we will limit ourselves to the necessary. But if you want to know more check out this [talk](https://www.youtube.com/watch?v=b5-SvvZ7AwI). For now it suffices if you understand this:
* All labels starting with `__` are dropped after the scrape. Most internal labels start with `__`.
* You can set internal labels called `__param_<name>`. Those set a URL parameter with the key `<name>` for the scrape request.
* There is an internal label `__address__` which is set by the `targets` under `static_configs` and whose value is the hostname for the scrape request. By default it is later used to set the value for the label `instance`, which is attached to each metric and tells you where the metrics came from.
Here is the config you will use to do that. Don’t worry if this is a bit much at once, we will go through it step by step:
```yaml
global:
scrape_interval: 5s
scrape_configs:
- job_name: blackbox # To get metrics about the exporter itself
metrics_path: /metrics
static_configs:
- targets:
- localhost:9115 # For Windows and macOS replace with - host.docker.internal:9115
- job_name: blackbox-http # To get metrics about the exporter’s targets
metrics_path: /probe
params:
module: [http_2xx]
static_configs:
- targets:
- http://prometheus.io # Target to probe with http
- https://prometheus.io # Target to probe with https
- http://example.com:8080 # Target to probe with http on port 8080
relabel_configs:
- source_labels: [__address__]
target_label: __param_target
- source_labels: [__param_target]
target_label: instance
- target_label: __address__
replacement: localhost:9115 # The blackbox exporter’s real hostname:port. For Windows and macOS replace with - host.docker.internal:9115
```
So what is new compared to the [last config](#prometheus-config)?
`params` does not include `target` anymore. Instead we add the actual targets under `static_configs:` `targets`. We also list several targets, because we can do that now:
```yaml
params:
module: [http_2xx]
static_configs:
- targets:
- http://prometheus.io # Target to probe with http
- https://prometheus.io # Target to probe with https
- http://example.com:8080 # Target to probe with http on port 8080
```
`relabel_configs` contains the new relabeling rules:
```yaml
relabel_configs:
- source_labels: [__address__]
target_label: __param_target
- source_labels: [__param_target]
target_label: instance
- target_label: __address__
replacement: localhost:9115 # The blackbox exporter’s real hostname:port. For Windows and macOS replace with - host.docker.internal:9115
```
Before applying the relabeling rules, the URI of a request Prometheus would make would look like this:
`"http://prometheus.io/probe?module=http_2xx"`. After relabeling it will look like this `"http://localhost:9115/probe?target=http://prometheus.io&module=http_2xx"`.
Now let us explore how each rule does that:
First we take the values from the label `__address__` (which contain the values from `targets`) and write them to a new label `__param_target` which will add a parameter `target` to the Prometheus scrape requests:
```yaml
relabel_configs:
- source_labels: [__address__]
target_label: __param_target
```
After this, our imagined Prometheus request URI now has a `target` parameter: `"http://prometheus.io/probe?target=http://prometheus.io&module=http_2xx"`.
Then we take the values from the label `__param_target` and create a label `instance` with those values.
```yaml
relabel_configs:
- source_labels: [__param_target]
target_label: instance
```
Our request will not change, but the metrics that come back from our request will now bear a label `instance="http://prometheus.io"`.
After that we write the value `localhost:9115` (the host and port of our exporter) to the label `__address__`. This will be used as the hostname and port for the Prometheus scrape requests, so that Prometheus queries the exporter and not the target URI directly.
```yaml
relabel_configs:
- target_label: __address__
replacement: localhost:9115 # The blackbox exporter’s real hostname:port. For Windows and macOS replace with - host.docker.internal:9115
```
Our request is now `"localhost:9115/probe?target=http://prometheus.io&module=http_2xx"`. This way we can keep the actual targets in the configuration, get them as `instance` label values, and still let Prometheus make its requests against the blackbox exporter.
Often people combine these with a specific service discovery. Check out the [configuration documentation](/docs/prometheus/latest/configuration/configuration) for more information. Using them is no problem, as these write into the `__address__` label just like `targets` defined under `static_configs`.
That is it. Restart the Prometheus docker container and look at your [metrics](http://localhost:9090/graph?g0.range_input=30m&g0.stacked=0&g0.expr=probe_http_duration_seconds&g0.tab=0). Pay attention that you selected the period of time when the metrics were actually collected.
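If you want to check the results without the graph UI, you can also ask the Prometheus HTTP API directly; every returned series should now carry the probed URL in its `instance` label. A quick sketch:
```bash
# Query the latest probe results; the "instance" label now holds the probed URL.
curl 'localhost:9090/api/v1/query?query=probe_success'
```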
## Summary
In this guide, you learned how the multi-target exporter pattern works, how to run a blackbox exporter with a customised module, and to configure Prometheus using relabeling to scrape metrics with prober labels.
---
title: Monitoring Docker container metrics using cAdvisor
---
# Monitoring Docker container metrics using cAdvisor
[cAdvisor](https://github.com/google/cadvisor) (short for **c**ontainer **Advisor**) analyzes and exposes resource usage and performance data from running containers. cAdvisor exposes Prometheus metrics out of the box. In this guide, we will:
* create a local multi-container [Docker Compose](https://docs.docker.com/compose/) installation that includes containers running Prometheus, cAdvisor, and a [Redis](https://redis.io/) server, respectively
* examine some container metrics produced by the Redis container, collected by cAdvisor, and scraped by Prometheus
## Prometheus configuration
First, you'll need to [configure Prometheus](/docs/prometheus/latest/configuration/configuration) to scrape metrics from cAdvisor. Create a `prometheus.yml` file and populate it with this configuration:
```yaml
scrape_configs:
- job_name: cadvisor
scrape_interval: 5s
static_configs:
- targets:
- cadvisor:8080
```
## Docker Compose configuration
Now we'll need to create a Docker Compose [configuration](https://docs.docker.com/compose/compose-file/) that specifies which containers are part of our installation as well as which ports are exposed by each container, which volumes are used, and so on.
In the same folder where you created the [`prometheus.yml`](#prometheus-configuration) file, create a `docker-compose.yml` file and populate it with this Docker Compose configuration:
```yaml
version: '3.2'
services:
prometheus:
image: prom/prometheus:latest
container_name: prometheus
ports:
- 9090:9090
command:
- --config.file=/etc/prometheus/prometheus.yml
volumes:
- ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
depends_on:
- cadvisor
cadvisor:
image: gcr.io/cadvisor/cadvisor:latest
container_name: cadvisor
ports:
- 8080:8080
volumes:
- /:/rootfs:ro
- /var/run:/var/run:rw
- /sys:/sys:ro
- /var/lib/docker/:/var/lib/docker:ro
depends_on:
- redis
redis:
image: redis:latest
container_name: redis
ports:
- 6379:6379
```
This configuration instructs Docker Compose to run three services, each of which corresponds to a [Docker](https://docker.com) container:
1. The `prometheus` service uses the local `prometheus.yml` configuration file (imported into the container by the `volumes` parameter).
1. The `cadvisor` service exposes port 8080 (the default port for cAdvisor metrics) and relies on a variety of local volumes (`/`, `/var/run`, etc.).
1. The `redis` service is a standard Redis server. cAdvisor will gather container metrics from this container automatically, i.e. without any further configuration.
To run the installation:
```bash
docker-compose up
```
If Docker Compose successfully starts up all three containers, you should see output like this:
```
prometheus | level=info ts=2018-07-12T22:02:40.5195272Z caller=main.go:500 msg="Server is ready to receive web requests."
```
You can verify that all three containers are running using the [`ps`](https://docs.docker.com/compose/reference/ps/) command:
```bash
docker-compose ps
```
Your output will look something like this:
```
Name Command State Ports
----------------------------------------------------------------------------
cadvisor /usr/bin/cadvisor -logtostderr Up 8080/tcp
prometheus /bin/prometheus --config.f ... Up 0.0.0.0:9090->9090/tcp
redis docker-entrypoint.sh redis ... Up 0.0.0.0:6379->6379/tcp
```
## Exploring the cAdvisor web UI
You can access the cAdvisor [web UI](https://github.com/google/cadvisor/blob/master/docs/web.md) at `http://localhost:8080`. You can explore stats and graphs for specific Docker containers in our installation at `http://localhost:8080/docker/<container>`. Metrics for the Redis container, for example, can be accessed at `http://localhost:8080/docker/redis`, Prometheus at `http://localhost:8080/docker/prometheus`, and so on.
## Exploring metrics in the expression browser
cAdvisor's web UI is a useful interface for exploring the kinds of things that cAdvisor monitors, but it doesn't provide an interface for exploring container *metrics*. For that we'll need the Prometheus [expression browser](/docs/visualization/browser), which is available at `http://localhost:9090/graph`. You can enter Prometheus expressions into the expression bar, which looks like this:

Let's start by exploring the `container_start_time_seconds` metric, which records the start time of containers (in seconds). You can select for specific containers by name using the `name="<container_name>"` expression. The container name corresponds to the `container_name` parameter in the Docker Compose configuration. The [`container_start_time_seconds{name="redis"}`](http://localhost:9090/graph?g0.range_input=1h&g0.expr=container_start_time_seconds%7Bname%3D%22redis%22%7D&g0.tab=1) expression, for example, shows the start time for the `redis` container.
NOTE: A full listing of cAdvisor-gathered container metrics exposed to Prometheus can be found in the [cAdvisor documentation](https://github.com/google/cadvisor/blob/master/docs/storage/prometheus.md).
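You can cross-check the same metric against cAdvisor's raw output, bypassing Prometheus entirely. A small sketch, assuming the Docker Compose installation from above is still running:
```bash
# Fetch cAdvisor's raw metrics and keep only the redis container's start time.
curl -s localhost:8080/metrics | grep container_start_time_seconds | grep 'name="redis"'
```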
## Other expressions
The table below lists some other example expressions:
Expression | Description | For
:----------|:------------|:---
[`rate(container_cpu_usage_seconds_total{name="redis"}[1m])`](http://localhost:9090/graph?g0.range_input=1h&g0.expr=rate(container_cpu_usage_seconds_total%7Bname%3D%22redis%22%7D%5B1m%5D)&g0.tab=1) | The [cgroup](https://en.wikipedia.org/wiki/Cgroups)'s CPU usage in the last minute | The `redis` container
[`container_memory_usage_bytes{name="redis"}`](http://localhost:9090/graph?g0.range_input=1h&g0.expr=container_memory_usage_bytes%7Bname%3D%22redis%22%7D&g0.tab=1) | The cgroup's total memory usage (in bytes) | The `redis` container
[`rate(container_network_transmit_bytes_total[1m])`](http://localhost:9090/graph?g0.range_input=1h&g0.expr=rate(container_network_transmit_bytes_total%5B1m%5D)&g0.tab=1) | Bytes transmitted over the network by the container per second in the last minute | All containers
[`rate(container_network_receive_bytes_total[1m])`](http://localhost:9090/graph?g0.range_input=1h&g0.expr=rate(container_network_receive_bytes_total%5B1m%5D)&g0.tab=1) | Bytes received over the network by the container per second in the last minute | All containers
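Any of these expressions can also be evaluated from the command line through the Prometheus HTTP API, which is handy for scripting. A sketch using the first expression from the table:
```bash
# Evaluate the redis container's per-second CPU usage over the last minute.
curl -G 'localhost:9090/api/v1/query' \
  --data-urlencode 'query=rate(container_cpu_usage_seconds_total{name="redis"}[1m])'
```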
## Summary
In this guide, we ran three separate containers in a single installation using Docker Compose: a Prometheus container scraped metrics from a cAdvisor container which, in turn, gathered metrics produced by a Redis container. We then explored a handful of cAdvisor container metrics using the Prometheus expression browser.
---
title: Monitoring Linux host metrics with the Node Exporter
---
# Monitoring Linux host metrics with the Node Exporter
The Prometheus [**Node Exporter**](https://github.com/prometheus/node_exporter) exposes a wide variety of hardware- and kernel-related metrics.
In this guide, you will:
* Start up a Node Exporter on `localhost`
* Start up a Prometheus instance on `localhost` that's configured to scrape metrics from the running Node Exporter
NOTE: While the Prometheus Node Exporter is for *nix systems, there is the [Windows exporter](https://github.com/prometheus-community/windows_exporter) for Windows that serves an analogous purpose.
## Installing and running the Node Exporter
The Prometheus Node Exporter is a single static binary that you can install [via tarball](#tarball-installation). Once you've downloaded it from the Prometheus [downloads page](/download#node_exporter), extract it, and run it:
```bash
# NOTE: Replace the URL with one from the above mentioned "downloads" page.
# <VERSION>, <OS>, and <ARCH> are placeholders.
wget https://github.com/prometheus/node_exporter/releases/download/v<VERSION>/node_exporter-<VERSION>.<OS>-<ARCH>.tar.gz
tar xvfz node_exporter-*.*-amd64.tar.gz
cd node_exporter-*.*-amd64
./node_exporter
```
You should see output like this indicating that the Node Exporter is now running and exposing metrics on port 9100:
```
INFO[0000] Starting node_exporter (version=0.16.0, branch=HEAD, revision=d42bd70f4363dced6b77d8fc311ea57b63387e4f) source="node_exporter.go:82"
INFO[0000] Build context (go=go1.9.6, user=root@a67a9bc13a69, date=20180515-15:53:28) source="node_exporter.go:83"
INFO[0000] Enabled collectors: source="node_exporter.go:90"
INFO[0000] - boottime source="node_exporter.go:97"
...
INFO[0000] Listening on :9100 source="node_exporter.go:111"
```
## Node Exporter metrics
Once the Node Exporter is installed and running, you can verify that metrics are being exported by cURLing the `/metrics` endpoint:
```bash
curl http://localhost:9100/metrics
```
You should see output like this:
```
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 3.8996e-05
go_gc_duration_seconds{quantile="0.25"} 4.5926e-05
go_gc_duration_seconds{quantile="0.5"} 5.846e-05
# etc.
```
Success! The Node Exporter is now exposing metrics that Prometheus can scrape, including a wide variety of system metrics further down in the output (prefixed with `node_`). To view those metrics (along with help and type information):
```bash
curl http://localhost:9100/metrics | grep "node_"
```
## Configuring your Prometheus instances
Your locally running Prometheus instance needs to be properly configured in order to access Node Exporter metrics. The following [`prometheus.yml`](../../prometheus/latest/configuration/configuration/) example configuration file tells the Prometheus instance to scrape the Node Exporter via `localhost:9100`, and how frequently to do so:
<a id="config"></a>
```yaml
global:
scrape_interval: 15s
scrape_configs:
- job_name: node
static_configs:
- targets: ['localhost:9100']
```
To install Prometheus, [download the latest release](/download) for your platform and untar it:
```bash
wget https://github.com/prometheus/prometheus/releases/download/v*/prometheus-*.*-amd64.tar.gz
tar xvf prometheus-*.*-amd64.tar.gz
cd prometheus-*.*
```
Once Prometheus is installed you can start it up, using the `--config.file` flag to point to the Prometheus configuration that you created [above](#config):
```bash
./prometheus --config.file=./prometheus.yml
```
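Once Prometheus is up, you can do a quick sanity check from another terminal before opening the expression browser; a sketch using the Prometheus HTTP API:
```bash
# List the configured scrape targets; the node target's "health" should be "up".
curl 'localhost:9090/api/v1/targets'
```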
## Exploring Node Exporter metrics through the Prometheus expression browser
Now that Prometheus is scraping metrics from a running Node Exporter instance, you can explore those metrics using the Prometheus UI (aka the [expression browser](/docs/visualization/browser)). Navigate to `localhost:9090/graph` in your browser and use the main expression bar at the top of the page to enter expressions. The expression bar looks like this:

Metrics specific to the Node Exporter are prefixed with `node_` and include metrics like `node_cpu_seconds_total` and `node_exporter_build_info`.
Click on the links below to see some example metrics:
Metric | Meaning
:------|:-------
[`rate(node_cpu_seconds_total{mode="system"}[1m])`](http://localhost:9090/graph?g0.range_input=1h&g0.expr=rate(node_cpu_seconds_total%7Bmode%3D%22system%22%7D%5B1m%5D)&g0.tab=1) | The average amount of CPU time spent in system mode, per second, over the last minute (in seconds)
[`node_filesystem_avail_bytes`](http://localhost:9090/graph?g0.range_input=1h&g0.expr=node_filesystem_avail_bytes&g0.tab=1) | The filesystem space available to non-root users (in bytes)
[`rate(node_network_receive_bytes_total[1m])`](http://localhost:9090/graph?g0.range_input=1h&g0.expr=rate(node_network_receive_bytes_total%5B1m%5D)&g0.tab=1) | The average network traffic received, per second, over the last minute (in bytes)
---
title: Docker Swarm
sort_rank: 1
---
# Docker Swarm
Prometheus can discover targets in a [Docker Swarm][swarm] cluster, as of
v2.20.0. This guide demonstrates how to use that service discovery mechanism.
## Docker Swarm service discovery architecture
The [Docker Swarm service discovery][swarmsd] contains 3 different roles: nodes, tasks,
and services.
The first role, **nodes**, represents the hosts that are part of the Swarm. It
can be used to automatically monitor the Docker daemons or the Node Exporters
who run on the Swarm hosts.
The second role, **tasks**, represents any individual container deployed in the
swarm. Each task gets its associated service labels. One service can be backed by
one or multiple tasks.
The third role, **services**, discovers the services deployed in the swarm, as well
as the ports they expose. Usually you will want to use the tasks role instead of
this one.
Prometheus will only discover tasks and services that expose ports.
NOTE: The rest of this post assumes that you have a Swarm running.
## Setting up Prometheus
For this guide, you need to [setup Prometheus][setup]. We will assume that
Prometheus runs on a Docker Swarm manager node and has access to the Docker
socket at `/var/run/docker.sock`.
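One way to achieve this is to run Prometheus itself as a Swarm service constrained to a manager node, with the socket and your configuration bind-mounted into the container. The following is only a sketch; the configuration path is a placeholder, and you should pin the image version you want:

```shell
docker service create --name prometheus \
  --constraint node.role==manager \
  --publish 9090:9090 \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock,ro \
  --mount type=bind,src=/path/to/prometheus.yml,dst=/etc/prometheus/prometheus.yml,ro \
  prom/prometheus
```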
## Monitoring Docker daemons
Let's dive into the service discovery itself.
Docker itself, as a daemon, exposes [metrics][dockermetrics] that can be
ingested by a Prometheus server.
You can enable them by editing `/etc/docker/daemon.json` and setting the
following properties:
```json
{
"metrics-addr" : "0.0.0.0:9323",
"experimental" : true
}
```
Instead of `0.0.0.0`, you can set the IP of the Docker Swarm node.
A restart of the daemon is required to take the new configuration into account.
The [Docker documentation][dockermetrics] contains more info about this.
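On a systemd-based host, for example, the restart typically looks like this (adjust to your distribution's service manager):

```shell
sudo systemctl restart docker
```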
Then, you can configure Prometheus to scrape the Docker daemon, by providing the
following `prometheus.yml` file:
```yaml
scrape_configs:
# Make Prometheus scrape itself for metrics.
- job_name: 'prometheus'
static_configs:
- targets: ['localhost:9090']
# Create a job for Docker daemons.
- job_name: 'docker'
dockerswarm_sd_configs:
- host: unix:///var/run/docker.sock
role: nodes
relabel_configs:
# Fetch metrics on port 9323.
- source_labels: [__meta_dockerswarm_node_address]
target_label: __address__
replacement: $1:9323
# Set hostname as instance label
- source_labels: [__meta_dockerswarm_node_hostname]
target_label: instance
```
For the nodes role, you can also use the `port` parameter of
`dockerswarm_sd_configs`. However, using `relabel_configs` is recommended as it
enables Prometheus to reuse the same API calls across identical Docker Swarm
configurations.
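For completeness, a sketch of the `port`-based variant (no port relabeling needed) could look like this:

```yaml
scrape_configs:
  # Create a job for Docker daemons, scraping port 9323 directly.
  - job_name: 'docker'
    dockerswarm_sd_configs:
      - host: unix:///var/run/docker.sock
        role: nodes
        port: 9323
```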
## Monitoring Containers
Let's now deploy a service in our Swarm. We will deploy [cadvisor][cad], which
exposes container resource metrics:
```shell
docker service create --name cadvisor -l prometheus-job=cadvisor \
--mode=global --publish target=8080,mode=host \
--mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock,ro \
--mount type=bind,src=/,dst=/rootfs,ro \
--mount type=bind,src=/var/run,dst=/var/run \
--mount type=bind,src=/sys,dst=/sys,ro \
--mount type=bind,src=/var/lib/docker,dst=/var/lib/docker,ro \
google/cadvisor -docker_only
```
This is a minimal `prometheus.yml` file to monitor it:
```yaml
scrape_configs:
# Make Prometheus scrape itself for metrics.
- job_name: 'prometheus'
static_configs:
- targets: ['localhost:9090']
# Create a job for Docker Swarm containers.
- job_name: 'dockerswarm'
dockerswarm_sd_configs:
- host: unix:///var/run/docker.sock
role: tasks
relabel_configs:
# Only keep containers that should be running.
- source_labels: [__meta_dockerswarm_task_desired_state]
regex: running
action: keep
# Only keep containers that have a `prometheus-job` label.
- source_labels: [__meta_dockerswarm_service_label_prometheus_job]
regex: .+
action: keep
# Use the prometheus-job Swarm label as Prometheus job label.
- source_labels: [__meta_dockerswarm_service_label_prometheus_job]
target_label: job
```
Let's analyze each part of the [relabel configuration][rela].
```yaml
- source_labels: [__meta_dockerswarm_task_desired_state]
regex: running
action: keep
```
Docker Swarm exposes the desired [state of the tasks][state] over the API. In
our example, we only **keep** the targets that should be running. This prevents
monitoring tasks that should be shut down.
```yaml
- source_labels: [__meta_dockerswarm_service_label_prometheus_job]
regex: .+
action: keep
```
When we deployed cadvisor, we added the label `prometheus-job=cadvisor`.
Since Prometheus fetches the tasks' service labels, we can instruct it to **only** keep the
targets that have a `prometheus-job` label.
```yaml
- source_labels: [__meta_dockerswarm_service_label_prometheus_job]
target_label: job
```
That last part takes the label `prometheus-job` of the task and turns it into
a target label, overwriting the default `dockerswarm` job label that comes from
the scrape config.
## Discovered labels
The [Prometheus Documentation][swarmsd] contains the full list of labels, but
here are other relabel configs that you might find useful.
### Scraping metrics via a certain network only
```yaml
- source_labels: [__meta_dockerswarm_network_name]
regex: ingress
action: keep
```
### Scraping global tasks only
Global tasks run on every daemon.
```yaml
- source_labels: [__meta_dockerswarm_service_mode]
regex: global
action: keep
- source_labels: [__meta_dockerswarm_task_port_publish_mode]
regex: host
action: keep
```
### Adding a docker_node label to the targets
```yaml
- source_labels: [__meta_dockerswarm_node_hostname]
target_label: docker_node
```
## Connecting to the Docker Swarm
The `dockerswarm_sd_configs` entries above have a `host` field:
```yaml
host: unix:///var/run/docker.sock
```
That is using the Docker socket. Prometheus offers [additional configuration
options][swarmsd] to connect to Swarm using HTTP and HTTPS, if you prefer that
over the unix socket.
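For example, connecting to a remote manager over TLS could look roughly like this; the hostname and certificate paths below are placeholders, and the full list of options is in the `dockerswarm_sd_config` reference:

```yaml
dockerswarm_sd_configs:
  - host: tcp://swarm-manager.example.com:2376
    role: tasks
    tls_config:
      ca_file: /etc/prometheus/docker-ca.pem
      cert_file: /etc/prometheus/docker-cert.pem
      key_file: /etc/prometheus/docker-key.pem
```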
## Conclusion
There are many discovery labels you can play with to better determine which
targets to monitor and how; for the tasks role alone, more than 25 labels are
available. Don't hesitate to look at the "Service Discovery" page of your
Prometheus server (under the "Status" menu) to see all the discovered labels.
The service discovery makes no assumptions about your Swarm stack, so with
proper configuration it should plug into any existing stack.
[state]:https://docs.docker.com/engine/swarm/how-swarm-mode-works/swarm-task-states/
[rela]:https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config
[swarm]:https://docs.docker.com/engine/swarm/
[swarmsd]:https://prometheus.io/docs/prometheus/latest/configuration/configuration/#dockerswarm_sd_config
[dockermetrics]:https://docs.docker.com/config/daemon/prometheus/
[cad]:https://github.com/google/cadvisor
[setup]:https://prometheus.io/docs/prometheus/latest/getting_started/
---
title: Instrumenting a Go application
---
# Instrumenting a Go application for Prometheus
Prometheus has an official [Go client library](https://github.com/prometheus/client_golang) that you can use to instrument Go applications. In this guide, we'll create a simple Go application that exposes Prometheus metrics via HTTP.
NOTE: For comprehensive API documentation, see the [GoDoc](https://godoc.org/github.com/prometheus/client_golang) for Prometheus' various Go libraries.
## Installation
You can install the `prometheus`, `promauto`, and `promhttp` libraries necessary for the guide using [`go get`](https://golang.org/doc/articles/go_command.html):
```bash
go get github.com/prometheus/client_golang/prometheus
go get github.com/prometheus/client_golang/prometheus/promauto
go get github.com/prometheus/client_golang/prometheus/promhttp
```
## How Go exposition works
To expose Prometheus metrics in a Go application, you need to provide a `/metrics` HTTP endpoint. You can use the [`prometheus/promhttp`](https://godoc.org/github.com/prometheus/client_golang/prometheus/promhttp) library's HTTP [`Handler`](https://godoc.org/github.com/prometheus/client_golang/prometheus/promhttp#Handler) as the handler function.
This minimal application, for example, would expose the default metrics for Go applications via `http://localhost:2112/metrics`:
```go
package main
import (
"net/http"
"github.com/prometheus/client_golang/prometheus/promhttp"
)
func main() {
http.Handle("/metrics", promhttp.Handler())
http.ListenAndServe(":2112", nil)
}
```
To start the application:
```bash
go run main.go
```
To access the metrics:
```bash
curl http://localhost:2112/metrics
```
## Adding your own metrics
The application [above](#how-go-exposition-works) exposes only the default Go metrics. You can also register your own custom application-specific metrics. This example application exposes a `myapp_processed_ops_total` [counter](/docs/concepts/metric_types/#counter) that counts the number of operations that have been processed thus far. Every 2 seconds, the counter is incremented by one.
```go
package main
import (
"net/http"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prometheus/client_golang/prometheus/promhttp"
)
func recordMetrics() {
go func() {
for {
opsProcessed.Inc()
time.Sleep(2 * time.Second)
}
}()
}
var (
opsProcessed = promauto.NewCounter(prometheus.CounterOpts{
Name: "myapp_processed_ops_total",
Help: "The total number of processed events",
})
)
func main() {
recordMetrics()
http.Handle("/metrics", promhttp.Handler())
http.ListenAndServe(":2112", nil)
}
```
To run the application:
```bash
go run main.go
```
To access the metrics:
```bash
curl http://localhost:2112/metrics
```
In the metrics output, you'll see the help text, type information, and current value of the `myapp_processed_ops_total` counter:
```
# HELP myapp_processed_ops_total The total number of processed events
# TYPE myapp_processed_ops_total counter
myapp_processed_ops_total 5
```
You can [configure](/docs/prometheus/latest/configuration/configuration/#scrape_config) a locally running Prometheus instance to scrape metrics from the application. Here's an example `prometheus.yml` configuration:
```yaml
scrape_configs:
- job_name: myapp
scrape_interval: 10s
static_configs:
- targets:
- localhost:2112
```
## Other Go client features
In this guide we covered just a small handful of features available in the Prometheus Go client libraries. You can also expose other metrics types, such as [gauges](https://godoc.org/github.com/prometheus/client_golang/prometheus#Gauge) and [histograms](https://godoc.org/github.com/prometheus/client_golang/prometheus#Histogram), [non-global registries](https://godoc.org/github.com/prometheus/client_golang/prometheus#Registry), functions for [pushing metrics](https://godoc.org/github.com/prometheus/client_golang/prometheus/push) to Prometheus [PushGateways](/docs/instrumenting/pushing/), bridging Prometheus and [Graphite](https://godoc.org/github.com/prometheus/client_golang/prometheus/graphite), and more.
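As a rough sketch of one of those features, here is how a gauge could be registered and updated with `promauto`; the metric name `myapp_queue_depth` and the hard-coded value are invented for illustration:

```go
package main

import (
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// queueDepth is a gauge, i.e. a value that can go up and down.
var queueDepth = promauto.NewGauge(prometheus.GaugeOpts{
	Name: "myapp_queue_depth",
	Help: "Current number of items waiting in the queue",
})

func main() {
	go func() {
		for {
			// In a real application you would set this from your queue's length.
			queueDepth.Set(42)
			time.Sleep(5 * time.Second)
		}
	}()

	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":2112", nil)
}
```

Unlike a counter, a gauge can be set to arbitrary values that move up and down, which makes it a good fit for things like queue depths or in-flight requests.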
## Summary
In this guide, you created two sample Go applications that expose metrics to Prometheus---one that exposes only the default Go metrics and one that also exposes a custom Prometheus counter---and configured a Prometheus instance to scrape metrics from those applications.
---
title: Use file-based service discovery to discover scrape targets
---
# Use file-based service discovery to discover scrape targets
Prometheus offers a variety of [service discovery options](https://github.com/prometheus/prometheus/tree/main/discovery) for discovering scrape targets, including [Kubernetes](/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config), [Consul](/docs/prometheus/latest/configuration/configuration/#consul_sd_config), and many others. If you need to use a service discovery system that is not currently supported, your use case may be best served by Prometheus' [file-based service discovery](/docs/prometheus/latest/configuration/configuration/#file_sd_config) mechanism, which enables you to list scrape targets in a JSON file (along with metadata about those targets).
In this guide, we will:
* Install and run a Prometheus [Node Exporter](../node-exporter) locally
* Create a `targets.json` file specifying the host and port information for the Node Exporter
* Install and run a Prometheus instance that is configured to discover the Node Exporter using the `targets.json` file
## Installing and running the Node Exporter
See [this section](../node-exporter#installing-and-running-the-node-exporter) of the [Monitoring Linux host metrics with the Node Exporter](../node-exporter) guide. The Node Exporter runs on port 9100. To ensure that the Node Exporter is exposing metrics:
```bash
curl http://localhost:9100/metrics
```
The metrics output should look something like this:
```
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
go_gc_duration_seconds{quantile="0.25"} 0
go_gc_duration_seconds{quantile="0.5"} 0
...
```
## Installing, configuring, and running Prometheus
Like the Node Exporter, Prometheus is a single static binary that you can install via tarball. [Download the latest release](/download#prometheus) for your platform and untar it:
```bash
wget https://github.com/prometheus/prometheus/releases/download/v*/prometheus-*.*-amd64.tar.gz
tar xvf prometheus-*.*-amd64.tar.gz
cd prometheus-*.*
```
The untarred directory contains a `prometheus.yml` configuration file. Replace the current contents of that file with this:
```yaml
scrape_configs:
- job_name: 'node'
file_sd_configs:
- files:
- 'targets.json'
```
This configuration specifies that there is a job called `node` (for the Node Exporter) that retrieves host and port information for Node Exporter instances from a `targets.json` file.
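As an aside, `file_sd_configs` can also watch several files via glob patterns and re-read them on a configurable interval. A sketch (the `targets/` directory name is made up):

```yaml
scrape_configs:
- job_name: 'node'
  file_sd_configs:
  - files:
    - 'targets/*.json'
    refresh_interval: 1m
```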
Now create that `targets.json` file and add this content to it:
```json
[
{
"labels": {
"job": "node"
},
"targets": [
"localhost:9100"
]
}
]
```
NOTE: In this guide we'll work with JSON service discovery configurations manually for the sake of brevity. In general, however, we recommend that you use some kind of JSON-generating process or tool instead.
This configuration specifies that there is a `node` job with one target: `localhost:9100`.
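If you prefer YAML over JSON, file-based service discovery also accepts YAML target files (point the `files` list at a `.yml` or `.yaml` file instead). A sketch of the equivalent content:

```yaml
- labels:
    job: node
  targets:
    - localhost:9100
```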
Now you can start up Prometheus:
```bash
./prometheus
```
If Prometheus has started up successfully, you should see a line like this in the logs:
```
level=info ts=2018-08-13T20:39:24.905651509Z caller=main.go:500 msg="Server is ready to receive web requests."
```
## Exploring the discovered services' metrics
With Prometheus up and running, you can explore metrics exposed by the `node` service using the Prometheus [expression browser](/docs/visualization/browser). If you explore the [`up{job="node"}`](http://localhost:9090/graph?g0.range_input=1h&g0.expr=up%7Bjob%3D%22node%22%7D&g0.tab=1) metric, for example, you can see that the Node Exporter is being appropriately discovered.
## Changing the targets list dynamically
When using Prometheus' file-based service discovery mechanism, the Prometheus instance will listen for changes to the file and automatically update the scrape target list, without requiring an instance restart. To demonstrate this, start up a second Node Exporter instance on port 9200. First navigate to the directory containing the Node Exporter binary and run this command in a new terminal window:
```bash
./node_exporter --web.listen-address=":9200"
```
Now modify the config in `targets.json` by adding an entry for the new Node Exporter:
```json
[
{
"targets": [
"localhost:9100"
],
"labels": {
"job": "node"
}
},
{
"targets": [
"localhost:9200"
],
"labels": {
"job": "node"
}
}
]
```
When you save the changes, Prometheus will automatically be notified of the new list of targets. The [`up{job="node"}`](http://localhost:9090/graph?g0.range_input=1h&g0.expr=up%7Bjob%3D%22node%22%7D&g0.tab=1) metric should display two instances with `instance` labels `localhost:9100` and `localhost:9200`.
## Summary
In this guide, you installed and ran a Prometheus Node Exporter and configured Prometheus to discover and scrape metrics from the Node Exporter using file-based service discovery.
---
title: OpenTelemetry
---
# Using Prometheus as your OpenTelemetry backend
Prometheus supports [OTLP](https://opentelemetry.io/docs/specs/otlp) (aka "OpenTelemetry Protocol") ingestion through [HTTP](https://opentelemetry.io/docs/specs/otlp/#otlphttp).
## Enable the OTLP receiver
By default, the OTLP receiver is disabled, similarly to the Remote Write receiver.
This is because Prometheus can work without any authentication, so it would not be
safe to accept incoming traffic unless explicitly configured.
To enable the receiver you need to toggle the CLI flag `--web.enable-otlp-receiver`.
This will cause Prometheus to serve OTLP metrics receiving on HTTP `/api/v1/otlp/v1/metrics` path.
```shell
$ prometheus --web.enable-otlp-receiver
```
## Send OpenTelemetry Metrics to the Prometheus Server
Generally you need to tell the source of the OTLP metrics traffic about the Prometheus endpoint, and the fact that the
[HTTP](https://opentelemetry.io/docs/specs/otlp/#otlphttp) mode of OTLP should be used (gRPC is usually the default).
OpenTelemetry SDKs and instrumentation libraries can usually be configured via [standard environment variables](https://opentelemetry.io/docs/languages/sdk-configuration/). The following are the OpenTelemetry variables needed to send OpenTelemetry metrics to a Prometheus server on localhost:
```shell
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
export OTEL_EXPORTER_OTLP_METRICS_ENDPOINT=http://localhost:9090/api/v1/otlp/v1/metrics
```
Turn off traces and logs:
```shell
export OTEL_TRACES_EXPORTER=none
export OTEL_LOGS_EXPORTER=none
```
The default push interval for OpenTelemetry metrics is 60 seconds. The following will set a 15 second push interval:
```shell
export OTEL_METRIC_EXPORT_INTERVAL=15000
```
If your instrumentation library does not provide `service.name` and `service.instance.id` out-of-the-box, it is highly recommended to set them.
```shell
export OTEL_SERVICE_NAME="my-example-service"
export OTEL_RESOURCE_ATTRIBUTES="service.instance.id=$(uuidgen)"
```
The above assumes that the `uuidgen` command is available on your system. Make sure that `service.instance.id` is unique for each instance, and that a new `service.instance.id` is generated whenever a resource attribute changes. The [recommended](https://github.com/open-telemetry/semantic-conventions/tree/main/docs/resource) way is to generate a new UUID on each startup of an instance.
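If your metrics flow through an OpenTelemetry Collector rather than being pushed directly from the SDK, the Collector's `otlphttp` exporter can target the same endpoint. This is only a sketch; the receiver and exporter names are placeholders, and you should consult the exporter documentation for your Collector version:

```yaml
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  otlphttp/prometheus:
    metrics_endpoint: http://localhost:9090/api/v1/otlp/v1/metrics

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [otlphttp/prometheus]
```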
## Configuring Prometheus
This section explains various recommended configuration aspects of Prometheus server to enable and tune your OpenTelemetry flow.
See the example Prometheus configuration [file](https://github.com/prometheus/prometheus/blob/main/documentation/examples/prometheus-otlp.yml)
that we will use in the sections below.
### Enable out-of-order ingestion
There are multiple reasons why you might want to enable out-of-order ingestion.
For example, the OpenTelemetry Collector encourages batching and you could have multiple replicas of the collector sending data to Prometheus. Because there is no mechanism ordering those samples, they can arrive out of order.
To enable out-of-order ingestion you need to extend the Prometheus configuration file with the following:
```yaml
storage:
tsdb:
out_of_order_time_window: 30m
```
A 30-minute out-of-order window has been enough for most cases, but don't hesitate to adjust this value to your needs.
### Promoting resource attributes
Based on experience and conversations with our community, we've found that out of all the commonly seen resource attributes,
there are certain ones worth attaching to all your OTLP metrics.
By default, Prometheus won't be promoting any attributes. If you'd like to promote any
of them, you can do so in this section of the Prometheus configuration file. The following
snippet shows the best-practice set of attributes to promote:
```yaml
otlp:
# Recommended attributes to be promoted to labels.
promote_resource_attributes:
- service.instance.id
- service.name
- service.namespace
- cloud.availability_zone
- cloud.region
- container.name
- deployment.environment.name
- k8s.cluster.name
- k8s.container.name
- k8s.cronjob.name
- k8s.daemonset.name
- k8s.deployment.name
- k8s.job.name
- k8s.namespace.name
- k8s.pod.name
- k8s.replicaset.name
- k8s.statefulset.name
```
## Including resource attributes at query time
All non-promoted, more verbose or unique resource attributes are attached to a special `target_info` metric.
You can use this metric to join those labels at query time.
An example of such a query can look like the following:
```promql
rate(http_server_request_duration_seconds_count[2m])
* on (job, instance) group_left (k8s_cluster_name)
target_info
```
What happens in this query is that the time series resulting from `rate(http_server_request_duration_seconds_count[2m])` are augmented with the `k8s_cluster_name` label from the `target_info` series that share the same `job` and `instance` labels.
In other words, the `job` and `instance` labels are shared between `http_server_request_duration_seconds_count` and `target_info`, akin to SQL foreign keys.
The `k8s_cluster_name` label, on the other hand, corresponds to the OTel resource attribute `k8s.cluster.name` (Prometheus converts dots to underscores).
So, what is the relation between the `target_info` metric and OTel resource attributes?
When Prometheus processes an OTLP write request, and provided that contained resources include the attributes `service.instance.id` and/or `service.name`, Prometheus generates the info metric `target_info` for every (OTel) resource.
It adds to each such `target_info` series the label `instance` with the value of the `service.instance.id` resource attribute, and the label `job` with the value of the `service.name` resource attribute.
If the resource attribute `service.namespace` exists, it's prefixed to the `job` label value (i.e., `<service.namespace>/<service.name>`).
By default `service.name`, `service.namespace` and `service.instance.id` themselves are not added to `target_info`, because they are converted into `job` and `instance`. However the following configuration parameter can be enabled to add them to `target_info` directly (going through normalization to replace dots with underscores, if `otlp.translation_strategy` is `UnderscoreEscapingWithSuffixes`) on top of the conversion into `job` and `instance`.
```yaml
otlp:
keep_identifying_resource_attributes: true
```
The rest of the resource attributes are also added as labels to the `target_info` series, names converted to Prometheus format (e.g. dots converted to underscores) if `otlp.translation_strategy` is `UnderscoreEscapingWithSuffixes`.
If a resource lacks both `service.instance.id` and `service.name` attributes, no corresponding `target_info` series is generated.
For each of a resource's OTel metrics, Prometheus converts it to a corresponding Prometheus time series, and (if `target_info` is generated) adds the right `instance` and `job` labels.
## UTF-8
From the 3.x version, Prometheus supports UTF-8 for metric names and labels, so [Prometheus normalization translator package from OpenTelemetry](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/pkg/translator/prometheus) can be omitted.
UTF-8 is enabled by default in Prometheus storage and UI, but you need to set `translation_strategy` for the OTLP metric receiver, which by default is set to the old normalization, `UnderscoreEscapingWithSuffixes`.
Setting it to `NoUTF8EscapingWithSuffixes`, which we recommend, will disable changing special characters to `_`, which allows native use of the OpenTelemetry metric format, especially with [the semantic conventions](https://opentelemetry.io/docs/specs/semconv/general/metrics/). Note that special suffixes like units and `_total` for counters will be attached. There is [ongoing work to have no suffix generation](https://github.com/prometheus/proposals/pull/39), stay tuned for that.
```yaml
otlp:
# Ingest OTLP data keeping UTF-8 characters in metric/label names.
translation_strategy: NoUTF8EscapingWithSuffixes
```
> Currently there's a known limitation in the OTLP translation package where characters get removed from metric/label names if multiple UTF-8 characters are concatenated between words, e.g. `my___metric` becomes `my_metric`. Please see https://github.com/prometheus/prometheus/issues/15362 for more details.
## Delta Temporality
The [OpenTelemetry specification says](https://opentelemetry.io/docs/specs/otel/metrics/data-model/#temporality) that both Delta temporality and Cumulative temporality are supported.
While Delta temporality is common in systems like statsd and graphite, cumulative temporality is the default temporality for Prometheus.
Today Prometheus does not have support for delta temporality but we are learning from the OpenTelemetry community and we are considering adding support for it in the future.
If you are coming from a delta temporality system we recommend that you use the [delta to cumulative processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/deltatocumulativeprocessor) in your OTel pipeline.
---
title: UTF-8 in Prometheus
---
# Introduction
Versions of Prometheus before 3.0 required that metric and label names adhere to
a strict set of character requirements. With Prometheus 3.0, all UTF-8 strings
are valid names, but there are some manual changes needed for other parts of the ecosystem to introduce names with any UTF-8 characters.
There may also be circumstances where users want to enforce the legacy character
set, perhaps for compatibility with an older system or one that does not yet
support UTF-8.
This document guides you through the UTF-8 transition details.
# Go Instrumentation
Currently, metrics created by the official Prometheus [client_golang library](https://github.com/prometheus/client_golang) will reject UTF-8 names
by default. It is necessary to change the default validation scheme to allow
UTF-8. The requirement to set this value will be removed in a future version of
the common library.
```golang
import "github.com/prometheus/common/model"
func init() {
model.NameValidationScheme = model.UTF8Validation
}
```
If users want to enforce the legacy character set, they can set the validation
scheme to `LegacyValidation`.
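For example, with the same import as above:

```golang
model.NameValidationScheme = model.LegacyValidation
```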
Setting the validation scheme must be done before the instantiation of metrics
and can be set on the fly if desired.
## Instrumenting in other languages
Other client libraries may have similar requirements to set the validation
scheme. Check the documentation for the library you are using.
# Configuring Name Validation during Scraping
By default, Prometheus 3.0 accepts all UTF-8 strings as valid metric and label
names. It is possible to override this behavior for scraped targets and reject
names that do not conform to the legacy character set.
This option can be set in the Prometheus YAML file on a global basis:
```yaml
global:
metric_name_validation_scheme: legacy
```
or on a per-scrape config basis:
```yaml
scrape_configs:
- job_name: prometheus
metric_name_validation_scheme: legacy
```
Scrape config settings override the global setting.
## Scrape Content Negotiation for UTF-8 escaping
At scrape time, the scraping system **must** pass `escaping=allow-utf-8` in the
Accept header in order to be served UTF-8 names. If a system being scraped does
not see this header, it will automatically convert UTF-8 names to
legacy-compatible using underscore replacement.
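For instance, a manual check with curl might pass the parameter roughly like this; the exact media type and version you negotiate depend on the exposition formats the target supports, so treat this as a sketch:

```shell
curl -H 'Accept: application/openmetrics-text;version=1.0.0;escaping=allow-utf-8' \
  http://localhost:9100/metrics
```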
Scraping systems can also request a specific escaping method, if desired, by
setting the `escaping` parameter in the Accept header to a different value:
* `underscores`: The default: convert legacy-invalid characters to underscores.
* `dots`: Similar to underscore escaping, except that dots are converted to
`_dot_` and pre-existing underscores are converted to `__`. This allows for
round-tripping of simple metric names that also contain dots.
* `values`: This mode prepends the name with `U__` and replaces all invalid
characters with the unicode value, surrounded by underscores. Single
underscores are replaced with double underscores. This mode allows for full
round-tripping of UTF-8 names with a legacy system.
## Remote Write 2.0
Remote Write 2.0 automatically accepts all UTF-8 names in Prometheus 3.0. There
is no way to enforce the legacy character set validation with Remote Write 2.0.
# OTLP Metrics
The OTLP receiver in Prometheus 3.0 still normalizes all names to the Prometheus format by default. You can change this in the `otlp` section of the Prometheus configuration as follows:
```yaml
otlp:
  # Ingest OTLP data keeping UTF-8 characters in metric/label names.
  translation_strategy: NoUTF8EscapingWithSuffixes
```
See [OpenTelemetry guide](./opentelemetry) for more details.
# Querying
Querying for metrics with UTF-8 names will require a slightly different syntax
in PromQL.
The classic query syntax will still work for legacy-compatible names:
`my_metric{}`
But UTF-8 names must be quoted **and** moved into the braces:
`{"my.metric"}`
Label names must also be quoted if they contain legacy-incompatible characters:
`{"metric.name", "my.label.name"="bar"}`
The metric name can appear anywhere inside the braces, but style prefers that it
be the first term.