# Verification of Argo CD Artifacts

## Prerequisites

- cosign `v2.0.0` or higher [installation instructions](https://docs.sigstore.dev/cosign/installation)
- slsa-verifier [installation instructions](https://github.com/slsa-framework/slsa-verifier#installation)
- crane [installation instructions](https://github.com/google/go-containerregistry/blob/main/cmd/crane/README.md) (for container verification only)

***

## Release Assets

| Asset                    | Description                   |
|--------------------------|-------------------------------|
| argocd-darwin-amd64      | CLI Binary                    |
| argocd-darwin-arm64      | CLI Binary                    |
| argocd-linux-amd64       | CLI Binary                    |
| argocd-linux-arm64       | CLI Binary                    |
| argocd-linux-ppc64le     | CLI Binary                    |
| argocd-linux-s390x       | CLI Binary                    |
| argocd-windows-amd64     | CLI Binary                    |
| argocd-cli.intoto.jsonl  | Attestation of CLI binaries   |
| argocd-sbom.intoto.jsonl | Attestation of SBOM           |
| cli_checksums.txt        | Checksums of binaries         |
| sbom.tar.gz              | SBOM                          |
| sbom.tar.gz.pem          | Certificate used to sign SBOM |
| sbom.tar.gz.sig          | Signature of SBOM             |

***

## Verification of container images

Argo CD container images are signed by [cosign](https://github.com/sigstore/cosign) using identity-based ("keyless") signing and transparency. The following command verifies the signature of a container image:

```bash
cosign verify \
  --certificate-identity-regexp https://github.com/argoproj/argo-cd/.github/workflows/image-reuse.yaml@refs/tags/v \
  --certificate-oidc-issuer https://token.actions.githubusercontent.com \
  --certificate-github-workflow-repository "argoproj/argo-cd" \
  quay.io/argoproj/argocd:v2.11.3 | jq
```

If the container image was correctly verified, the command should output the following:

```bash
The following checks were performed on each of these signatures:
  - The cosign claims were validated
  - Existence of the claims in the transparency log was verified offline
  - Any certificates were verified against the Fulcio roots.
[
  {
    "critical": {
      "identity": {
        "docker-reference": "quay.io/argoproj/argo-cd"
      },
      "image": {
        "docker-manifest-digest": "sha256:63dc60481b1b2abf271e1f2b866be8a92962b0e53aaa728902caa8ac8d235277"
      },
      "type": "cosign container image signature"
    },
    "optional": {
      "1.3.6.1.4.1.57264.1.1": "https://token.actions.githubusercontent.com",
      "1.3.6.1.4.1.57264.1.2": "push",
      "1.3.6.1.4.1.57264.1.3": "a6ec84da0eaa519cbd91a8f016cf4050c03323b2",
      "1.3.6.1.4.1.57264.1.4": "Publish ArgoCD Release",
      "1.3.6.1.4.1.57264.1.5": "argoproj/argo-cd",
      "1.3.6.1.4.1.57264.1.6": "refs/tags/<version>",
      ...
```

***

## Verification of container image with SLSA attestations

A [SLSA](https://slsa.dev/) Level 3 provenance is generated using [slsa-github-generator](https://github.com/slsa-framework/slsa-github-generator).

The following command verifies the signature of an attestation and how it was issued. The attestation contains the payloadType, payload, and signature. Run the following command as per the [slsa-verifier documentation](https://github.com/slsa-framework/slsa-verifier/tree/main#containers):

```bash
# Get the immutable container image to prevent TOCTOU attacks https://github.com/slsa-framework/slsa-verifier#toctou-attacks
IMAGE=quay.io/argoproj/argocd:v2.7.0
IMAGE="${IMAGE}@"$(crane digest "${IMAGE}")
# Verify provenance, including the tag to prevent rollback attacks.
slsa-verifier verify-image "$IMAGE" \
  --source-uri github.com/argoproj/argo-cd \
  --source-tag v2.7.0
```

If you only want to verify up to the major or minor version of the source repository tag (instead of the full tag), use the `--source-versioned-tag` flag, which performs semantic versioning verification:

```shell
slsa-verifier verify-image "$IMAGE" \
  --source-uri github.com/argoproj/argo-cd \
  --source-versioned-tag v2 # Note: May use v2.7 for minor version verification.
```

The attestation payload contains a non-forgeable provenance which is base64 encoded and can be viewed by passing the `--print-provenance` option to the commands above:

```bash
slsa-verifier verify-image "$IMAGE" \
  --source-uri github.com/argoproj/argo-cd \
  --source-tag v2.7.0 \
  --print-provenance | jq
```

If you prefer using cosign, follow these [instructions](https://github.com/slsa-framework/slsa-github-generator/blob/main/internal/builders/container/README.md#cosign).

!!! tip
    `cosign` and `slsa-verifier` can both be used to verify image attestations.
    Check the documentation of each binary for detailed instructions.

***

## Verification of CLI artifacts with SLSA attestations

A single attestation (`argocd-cli.intoto.jsonl`) from each release is provided. This can be used with [slsa-verifier](https://github.com/slsa-framework/slsa-verifier#verification-for-github-builders) to verify that a CLI binary was generated using Argo CD workflows on GitHub and that it was cryptographically signed.

```bash
slsa-verifier verify-artifact argocd-linux-amd64 \
  --provenance-path argocd-cli.intoto.jsonl \
  --source-uri github.com/argoproj/argo-cd \
  --source-tag v2.7.0
```

If you only want to verify up to the major or minor version of the source repository tag (instead of the full tag), use the `--source-versioned-tag` flag, which performs semantic versioning verification:

```shell
slsa-verifier verify-artifact argocd-linux-amd64 \
  --provenance-path argocd-cli.intoto.jsonl \
  --source-uri github.com/argoproj/argo-cd \
  --source-versioned-tag v2 # Note: May use v2.7 for minor version verification.
```

The payload is a non-forgeable provenance which is base64 encoded and can be viewed by passing the `--print-provenance` option to the commands above:

```bash
slsa-verifier verify-artifact argocd-linux-amd64 \
  --provenance-path argocd-cli.intoto.jsonl \
  --source-uri github.com/argoproj/argo-cd \
  --source-tag v2.7.0 \
  --print-provenance | jq
```

## Verification of SBOM

A single attestation (`argocd-sbom.intoto.jsonl`) from each release is provided along with the SBOM (`sbom.tar.gz`). This can be used with [slsa-verifier](https://github.com/slsa-framework/slsa-verifier#verification-for-github-builders) to verify that the SBOM was generated using Argo CD workflows on GitHub and that it was cryptographically signed.

```bash
slsa-verifier verify-artifact sbom.tar.gz \
  --provenance-path argocd-sbom.intoto.jsonl \
  --source-uri github.com/argoproj/argo-cd \
  --source-tag v2.7.0
```

***

## Verification on Kubernetes

### Policy controllers

!!! note
    We encourage all users to verify signatures and provenances with the admission/policy controller of your choice. Doing so will verify that an image was built by us before it is deployed on your Kubernetes cluster.

Cosign signatures and SLSA provenances are compatible with several types of admission controllers. Please see the [cosign documentation](https://docs.sigstore.dev/cosign/overview/#kubernetes-integrations) and [slsa-github-generator](https://github.com/slsa-framework/slsa-github-generator/blob/main/internal/builders/container/README.md#verification) for supported controllers.
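The Release Assets table above also lists `cli_checksums.txt`, which can be used to verify the integrity of a downloaded CLI binary. A minimal sketch, assuming a Linux host with GNU coreutils and the usual GitHub release download URL layout (the URL pattern and `VERSION` value are illustrative, not taken from the release notes):

```shell
# Assumed download URL layout for release assets; adjust version and platform.
VERSION=v2.7.0
curl -sSLO "https://github.com/argoproj/argo-cd/releases/download/${VERSION}/cli_checksums.txt"
curl -sSLO "https://github.com/argoproj/argo-cd/releases/download/${VERSION}/argocd-linux-amd64"

# Check only the line for the binary that was downloaded;
# sha256sum exits non-zero if the checksum does not match.
grep argocd-linux-amd64 cli_checksums.txt | sha256sum --check
```

Note that a checksum only protects against corrupted or tampered downloads; combine it with the SLSA attestation verification above to also prove where the binary was built.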
# TLS configuration

Argo CD provides three inbound TLS endpoints that can be configured:

* The user-facing endpoint of the `argocd-server` workload, which serves the UI and the API
* The endpoint of the `argocd-repo-server`, which is accessed by the `argocd-server` and `argocd-application-controller` workloads to request repository operations
* The endpoint of the `argocd-dex-server`, which is accessed by `argocd-server` to handle OIDC authentication

By default, and without further configuration, these endpoints will be set up to use an automatically generated, self-signed certificate. However, most users will want to explicitly configure the certificates for these TLS endpoints, possibly using automated means such as `cert-manager` or using their own dedicated Certificate Authority.

## Configuring TLS for argocd-server

### Inbound TLS options for argocd-server

You can configure certain TLS options for the `argocd-server` workload by setting command line parameters. The following parameters are available:

|Parameter|Default|Description|
|---------|-------|-----------|
|`--insecure`|`false`|Disables TLS completely|
|`--tlsminversion`|`1.2`|The minimum TLS version to be offered to clients|
|`--tlsmaxversion`|`1.3`|The maximum TLS version to be offered to clients|
|`--tlsciphers`|`TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384:TLS_RSA_WITH_AES_256_GCM_SHA384`|A colon separated list of TLS cipher suites to be offered to clients|

### TLS certificates used by argocd-server

There are two ways to configure the TLS certificates used by `argocd-server`:

* Setting the `tls.crt` and `tls.key` keys in the `argocd-server-tls` secret to hold PEM data of the certificate and the corresponding private key. The `argocd-server-tls` secret may be of type `tls`, but does not have to be.
* Setting the `tls.crt` and `tls.key` keys in the `argocd-secret` secret to hold PEM data of the certificate and the corresponding private key. This method is deprecated and exists only for backwards compatibility; `argocd-secret` should no longer be used to override the TLS certificate.

Argo CD decides which TLS certificate to use for the endpoint of `argocd-server` as follows:

* If the `argocd-server-tls` secret exists and contains a valid key pair in the `tls.crt` and `tls.key` keys, it will be used as the certificate for the endpoint of `argocd-server`.
* Otherwise, if the `argocd-secret` secret contains a valid key pair in the `tls.crt` and `tls.key` keys, it will be used as the certificate for the endpoint of `argocd-server`.
* If no `tls.crt` and `tls.key` keys are found in either of the two mentioned secrets, Argo CD will generate a self-signed certificate and persist it in the `argocd-secret` secret.

The `argocd-server-tls` secret contains only information for TLS configuration to be used by `argocd-server` and is safe to be managed via third-party tools such as `cert-manager` or `SealedSecrets`.

To create this secret manually from an existing key pair, you can use `kubectl`:

```shell
kubectl create -n argocd secret tls argocd-server-tls \
  --cert=/path/to/cert.pem \
  --key=/path/to/key.pem
```

Argo CD will pick up changes to the `argocd-server-tls` secret automatically and will not require a restart of the pods to use a renewed certificate.

## Configuring inbound TLS for argocd-repo-server

### Inbound TLS options for argocd-repo-server

You can configure certain TLS options for the `argocd-repo-server` workload by setting command line parameters. The following parameters are available:

|Parameter|Default|Description|
|---------|-------|-----------|
|`--disable-tls`|`false`|Disables TLS completely|
|`--tlsminversion`|`1.2`|The minimum TLS version to be offered to clients|
|`--tlsmaxversion`|`1.3`|The maximum TLS version to be offered to clients|
|`--tlsciphers`|`TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384:TLS_RSA_WITH_AES_256_GCM_SHA384`|A colon separated list of TLS cipher suites to be offered to clients|

### Inbound TLS certificates used by argocd-repo-server

To configure the TLS certificate used by the `argocd-repo-server` workload, create a secret named `argocd-repo-server-tls` in the namespace where Argo CD is running, with the certificate's key pair stored in the `tls.crt` and `tls.key` keys. If this secret does not exist, `argocd-repo-server` will generate and use a self-signed certificate.

To create this secret, you can use `kubectl`:

```shell
kubectl create -n argocd secret tls argocd-repo-server-tls \
  --cert=/path/to/cert.pem \
  --key=/path/to/key.pem
```

If the certificate is self-signed, you will also need to add `ca.crt` to the secret with the contents of your CA certificate.

Please note that, unlike `argocd-server`, the `argocd-repo-server` is not able to pick up changes to this secret automatically. If you create (or update) this secret, the `argocd-repo-server` pods need to be restarted.

Also note that the certificate should be issued with the correct SAN entries for the `argocd-repo-server`, containing at least the entries for `DNS:argocd-repo-server` and `DNS:argocd-repo-server.argo-cd.svc`, depending on how your workloads connect to the repository server.

## Configuring inbound TLS for argocd-dex-server

### Inbound TLS options for argocd-dex-server

You can configure certain TLS options for the `argocd-dex-server` workload by setting command line parameters. The following parameters are available:

|Parameter|Default|Description|
|---------|-------|-----------|
|`--disable-tls`|`false`|Disables TLS completely|

### Inbound TLS certificates used by argocd-dex-server

To configure the TLS certificate used by the `argocd-dex-server` workload, create a secret named `argocd-dex-server-tls` in the namespace where Argo CD is running, with the certificate's key pair stored in the `tls.crt` and `tls.key` keys. If this secret does not exist, `argocd-dex-server` will generate and use a self-signed certificate.

To create this secret, you can use `kubectl`:

```shell
kubectl create -n argocd secret tls argocd-dex-server-tls \
  --cert=/path/to/cert.pem \
  --key=/path/to/key.pem
```

If the certificate is self-signed, you will also need to add `ca.crt` to the secret with the contents of your CA certificate.

Please note that, unlike `argocd-server`, the `argocd-dex-server` is not able to pick up changes to this secret automatically. If you create (or update) this secret, the `argocd-dex-server` pods need to be restarted.

Also note that the certificate should be issued with the correct SAN entries for the `argocd-dex-server`, containing at least the entries for `DNS:argocd-dex-server` and `DNS:argocd-dex-server.argo-cd.svc`, depending on how your workloads connect to the Dex server.

## Configuring TLS between Argo CD components

### Configuring TLS to argocd-repo-server

Both `argocd-server` and `argocd-application-controller` communicate with the `argocd-repo-server` using a gRPC API over TLS. By default, `argocd-repo-server` generates a non-persistent, self-signed certificate to use for its gRPC endpoint on startup. Because the `argocd-repo-server` has no means to connect to the K8s control plane API, this certificate is not available to outside consumers for verification. Both the `argocd-server` and `argocd-application-controller` will therefore use a non-validating connection to the `argocd-repo-server`.
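A persistent certificate for this endpoint can be issued by any CA. As a minimal sketch, a self-signed certificate carrying the SAN entries mentioned above can be generated with `openssl` (this assumes OpenSSL 1.1.1 or later for the `-addext` option; the file paths, validity period, and DNS names are placeholders to adjust for your installation):

```shell
# Sketch: issue a self-signed certificate with the SAN entries expected
# by clients of argocd-repo-server (adjust DNS names to your namespace).
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout repo-server.key -out repo-server.crt \
  -subj "/CN=argocd-repo-server" \
  -addext "subjectAltName=DNS:argocd-repo-server,DNS:argocd-repo-server.argo-cd.svc"
```

The resulting key pair can then be stored in the `argocd-repo-server-tls` secret using the `kubectl create secret tls` command shown earlier.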
To change this behavior to be more secure by having the `argocd-server` and `argocd-application-controller` validate the TLS certificate of the `argocd-repo-server` endpoint, the following steps need to be performed:

* Create a persistent TLS certificate to be used by `argocd-repo-server`, as shown above
* Restart the `argocd-repo-server` pod(s)
* Modify the pod startup parameters for `argocd-server` and `argocd-application-controller` to include the `--repo-server-strict-tls` parameter

The `argocd-server` and `argocd-application-controller` workloads will now validate the TLS certificate of the `argocd-repo-server` by using the certificate stored in the `argocd-repo-server-tls` secret.

!!!note "Certificate expiry"
    Please make sure that the certificate has a proper lifetime. Keep in mind that when you have to replace the certificate, all workloads have to be restarted in order to work properly again.

### Configuring TLS to argocd-dex-server

`argocd-server` communicates with the `argocd-dex-server` using an HTTPS API over TLS. By default, `argocd-dex-server` generates a non-persistent, self-signed certificate to use for its HTTPS endpoint on startup. Because the `argocd-dex-server` has no means to connect to the K8s control plane API, this certificate is not available to outside consumers for verification. The `argocd-server` will therefore use a non-validating connection to the `argocd-dex-server`.

To change this behavior to be more secure by having the `argocd-server` validate the TLS certificate of the `argocd-dex-server` endpoint, the following steps need to be performed:

* Create a persistent TLS certificate to be used by `argocd-dex-server`, as shown above
* Restart the `argocd-dex-server` pod(s)
* Modify the pod startup parameters for `argocd-server` to include the `--dex-server-strict-tls` parameter

The `argocd-server` workload will now validate the TLS certificate of the `argocd-dex-server` by using the certificate stored in the `argocd-dex-server-tls` secret.

!!!note "Certificate expiry"
    Please make sure that the certificate has a proper lifetime. Keep in mind that when you have to replace the certificate, all workloads have to be restarted in order to work properly again.

### Disabling TLS to argocd-repo-server

In some scenarios where mTLS through side-car proxies is involved (e.g. in a service mesh), you may want to configure the connections from `argocd-server` and `argocd-application-controller` to `argocd-repo-server` to not use TLS at all. In this case, you will need to:

* Configure `argocd-repo-server` with TLS on the gRPC API disabled by specifying the `--disable-tls` parameter in the pod container's startup arguments. Also, consider restricting the listening addresses to the loopback interface by specifying the `--listen 127.0.0.1` parameter, so that the insecure endpoint is not exposed on the pod's network interfaces, but is still available to the side-car container.
* Configure `argocd-server` and `argocd-application-controller` to not use TLS for connections to the `argocd-repo-server` by specifying the `--repo-server-plaintext` parameter in the pod container's startup arguments
* Configure `argocd-server` and `argocd-application-controller` to connect to the side-car instead of directly to the `argocd-repo-server` service by specifying its address via the `--repo-server <address>` parameter

After this change, the `argocd-server` and `argocd-application-controller` will use a plaintext connection to their local side-car proxy, which handles all aspects of TLS towards the `argocd-repo-server`'s TLS side-car proxy.

### Disabling TLS to argocd-dex-server

In some scenarios where mTLS through side-car proxies is involved (e.g. in a service mesh), you may want to configure the connection from `argocd-server` to `argocd-dex-server` to not use TLS at all. In this case, you will need to:

* Configure `argocd-dex-server` with TLS on the HTTPS API disabled by specifying the `--disable-tls` parameter in the pod container's startup arguments
* Configure `argocd-server` to not use TLS for connections to the `argocd-dex-server` by specifying the `--dex-server-plaintext` parameter in the pod container's startup arguments
* Configure `argocd-server` to connect to the side-car instead of directly to the `argocd-dex-server` service by specifying its address via the `--dex-server <address>` parameter

After this change, the `argocd-server` will use a plaintext connection to the side-car proxy, which handles all aspects of TLS towards the `argocd-dex-server`'s TLS side-car proxy.
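Since expired certificates require restarting the affected workloads, it is worth keeping an eye on the expiry dates of the certificates stored in the TLS secrets. A minimal sketch for printing the expiry date of such a certificate (the secret and namespace names follow the examples above; `kubectl`, `base64`, and `openssl` are assumed to be available):

```shell
# Print the expiry date of the certificate held in the argocd-repo-server-tls
# secret; swap in argocd-server-tls or argocd-dex-server-tls as needed.
kubectl get -n argocd secret argocd-repo-server-tls \
  -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -enddate
```

The output is a single `notAfter=...` line, which can be fed into monitoring or a periodic check.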
argocd
TLS configuration Argo CD provides three inbound TLS endpoints that can be configured The user facing endpoint of the argocd server workload which serves the UI and the API The endpoint of the argocd repo server which is accessed by argocd server and argocd application controller workloads to request repository operations The endpoint of the argocd dex server which is accessed by argocd server to handle OIDC authentication By default and without further configuration these endpoints will be set up to use an automatically generated self signed certificate However most users will want to explicitly configure the certificates for these TLS endpoints possibly using automated means such as cert manager or using their own dedicated Certificate Authority Configuring TLS for argocd server Inbound TLS options for argocd server You can configure certain TLS options for the argocd server workload by setting command line parameters The following parameters are available Parameter Default Description insecure false Disables TLS completely tlsminversion 1 2 The minimum TLS version to be offered to clients tlsmaxversion 1 3 The maximum TLS version to be offered to clients tlsciphers TLS ECDHE RSA WITH AES 256 GCM SHA384 TLS RSA WITH AES 256 GCM SHA384 A colon separated list of TLS cipher suites to be offered to clients TLS certificates used by argocd server There are two ways to configure the TLS certificates used by argocd server Setting the tls crt and tls key keys in the argocd server tls secret to hold PEM data of the certificate and the corresponding private key The argocd server tls secret may be of type tls but does not have to be Setting the tls crt and tls key keys in the argocd secret secret to hold PEM data of the certificate and the corresponding private key This method is considered deprecated and only exists for purposes of backwards compatibility Changing argocd secret should not be used to override the TLS certificate anymore Argo CD decides which TLS certificate 
to use for the endpoint of argocd server as follows If the argocd server tls secret exists and contains a valid key pair in the tls crt and tls key keys this will be used for the certificate of the endpoint of argocd server Otherwise if the argocd secret secret contains a valid key pair in the tls crt and tls key keys this will be used as certificate for the endpoint of argocd server If no tls crt and tls key keys are found in neither of the two mentioned secrets Argo CD will generate a self signed certificate and persist it in the argocd secret secret The argocd server tls secret contains only information for TLS configuration to be used by argocd server and is safe to be managed via third party tools such as cert manager or SealedSecrets To create this secret manually from an existing key pair you can use kubectl shell kubectl create n argocd secret tls argocd server tls cert path to cert pem key path to key pem Argo CD will pick up changes to the argocd server tls secret automatically and will not require restart of the pods to use a renewed certificate Configuring inbound TLS for argocd repo server Inbound TLS options for argocd repo server You can configure certain TLS options for the argocd repo server workload by setting command line parameters The following parameters are available Parameter Default Description disable tls false Disables TLS completely tlsminversion 1 2 The minimum TLS version to be offered to clients tlsmaxversion 1 3 The maximum TLS version to be offered to clients tlsciphers TLS ECDHE RSA WITH AES 256 GCM SHA384 TLS RSA WITH AES 256 GCM SHA384 A colon separated list of TLS cipher suites to be offered to clients Inbound TLS certificates used by argocd repo server To configure the TLS certificate used by the argocd repo server workload create a secret named argocd repo server tls in the namespace where Argo CD is running in with the certificate s key pair stored in tls crt and tls key keys If this secret does not exist argocd repo server 
will generate and use a self-signed certificate. To create this secret, you can use `kubectl`:

```shell
kubectl create -n argocd secret tls argocd-repo-server-tls \
  --cert=/path/to/cert.pem \
  --key=/path/to/key.pem
```

If the certificate is self-signed, you will also need to add `ca.crt` to the secret with the contents of your CA certificate.

Please note that, as opposed to `argocd-server`, the `argocd-repo-server` is not able to pick up changes to this secret automatically. If you create (or update) this secret, the `argocd-repo-server` pods need to be restarted.

Also note that the certificate should be issued with the correct SAN entries for the `argocd-repo-server`, containing at least the entries for `DNS:argocd-repo-server` and `DNS:argocd-repo-server.argo-cd.svc`, depending on how your workloads connect to the repository server.

## Configuring inbound TLS for argocd-dex-server

### Inbound TLS options for argocd-dex-server

You can configure certain TLS options for the `argocd-dex-server` workload by setting command line parameters. The following parameters are available:

| Parameter | Default | Description |
|-----------|---------|-------------|
| `--disable-tls` | `false` | Disables TLS completely |

### Inbound TLS certificates used by argocd-dex-server

To configure the TLS certificate used by the `argocd-dex-server` workload, create a secret named `argocd-dex-server-tls` in the namespace where Argo CD is running in, with the certificate's key pair stored in `tls.crt` and `tls.key` keys. If this secret does not exist, `argocd-dex-server` will generate and use a self-signed certificate.

To create this secret, you can use `kubectl`:

```shell
kubectl create -n argocd secret tls argocd-dex-server-tls \
  --cert=/path/to/cert.pem \
  --key=/path/to/key.pem
```

If the certificate is self-signed, you will also need to add `ca.crt` to the secret with the contents of your CA certificate.

Please note that, as opposed to `argocd-server`, the `argocd-dex-server` is not able to pick up changes to this secret automatically. If you create (or update) this secret, the `argocd-dex-server` pods need to be restarted.

Also note that the certificate should be issued with the correct SAN entries for the `argocd-dex-server`, containing at least the entries for `DNS:argocd-dex-server` and `DNS:argocd-dex-server.argo-cd.svc`, depending on how your workloads connect to Dex.

## Configuring TLS between Argo CD components

### Configuring TLS to argocd-repo-server

Both `argocd-server` and `argocd-application-controller` communicate with the `argocd-repo-server` using a gRPC API over TLS. By default, `argocd-repo-server` generates a non-persistent, self-signed certificate to use for its gRPC endpoint on startup. Because the `argocd-repo-server` has no means to connect to the K8s control plane API, this certificate is not available to outside consumers for verification. Both the `argocd-server` and `argocd-application-controller` will use a non-validating connection to the `argocd-repo-server` for this reason.

To change this behavior to be more secure, by having the `argocd-server` and `argocd-application-controller` validate the TLS certificate of the `argocd-repo-server` endpoint, the following steps need to be performed:

* Create a persistent TLS certificate to be used by `argocd-repo-server`, as shown above
* Restart the `argocd-repo-server` pod(s)
* Modify the pod startup parameters for `argocd-server` and `argocd-application-controller` to include the `--repo-server-strict-tls` parameter

The `argocd-server` and `argocd-application-controller` workloads will now validate the TLS certificate of the `argocd-repo-server` by using the certificate stored in the `argocd-repo-server-tls` secret.

!!!note "Certificate expiry"
    Please make sure that the certificate has a proper life time. Keep in mind that when you have to replace the certificate, all workloads have to be restarted in order to properly work again.

### Configuring TLS to argocd-dex-server

`argocd-server` communicates with the `argocd-dex-server` using an HTTPS API over TLS. By default, `argocd-dex-server` generates a non-persistent, self-signed certificate to use for its HTTPS endpoint on startup. Because the `argocd-dex-server` has no means to connect to the K8s control plane API, this certificate is not available to outside consumers for verification. The `argocd-server` will use a non-validating connection to the `argocd-dex-server` for this reason.

To change this behavior to be more secure, by having the `argocd-server` validate the TLS certificate of the `argocd-dex-server` endpoint, the following steps need to be performed:

* Create a persistent TLS certificate to be used by `argocd-dex-server`, as shown above
* Restart the `argocd-dex-server` pod(s)
* Modify the pod startup parameters for `argocd-server` to include the `--dex-server-strict-tls` parameter

The `argocd-server` workload will now validate the TLS certificate of the `argocd-dex-server` by using the certificate stored in the `argocd-dex-server-tls` secret.

!!!note "Certificate expiry"
    Please make sure that the certificate has a proper life time. Keep in mind that when you have to replace the certificate, all workloads have to be restarted in order to properly work again.

### Disabling TLS to argocd-repo-server

In some scenarios where mTLS through side-car proxies is involved (e.g. in a service mesh), you may want to configure the connections between the `argocd-server` and `argocd-application-controller` to `argocd-repo-server` to not use TLS at all. In this case, you will need to:

* Configure `argocd-repo-server` with TLS on the gRPC API disabled, by specifying the `--disable-tls` parameter to the pod container's startup arguments. Also consider restricting listening addresses to the loopback interface by specifying the `--listen 127.0.0.1` parameter, so that the insecure endpoint is not exposed on the pod's network interfaces, but is still available to the side-car container.
* Configure `argocd-server` and `argocd-application-controller` to not use TLS for connections to the `argocd-repo-server`, by specifying the parameter `--repo-server-plaintext` to the pod container's startup arguments.
* Configure `argocd-server` and `argocd-application-controller` to connect to the side-car instead of directly to the `argocd-repo-server` service, by specifying its address via the `--repo-server` parameter.

After this change, the `argocd-server` and `argocd-application-controller` will use a plain text connection to the side-car proxy, which will handle all aspects of TLS to the `argocd-repo-server`'s TLS side-car proxy.

### Disabling TLS to argocd-dex-server

In some scenarios where mTLS through side-car proxies is involved (e.g. in a service mesh), you may want to configure the connections between `argocd-server` and `argocd-dex-server` to not use TLS at all. In this case, you will need to:

* Configure `argocd-dex-server` with TLS on the HTTPS API disabled, by specifying the `--disable-tls` parameter to the pod container's startup arguments.
* Configure `argocd-server` to not use TLS for connections to the `argocd-dex-server`, by specifying the parameter `--dex-server-plaintext` to the pod container's startup arguments.
* Configure `argocd-server` to connect to the side-car instead of directly to the `argocd-dex-server` service, by specifying its address via the `--dex-server` parameter.

After this change, the `argocd-server` will use a plain text connection to the side-car proxy, which will handle all aspects of TLS to the `argocd-dex-server`'s TLS side-car proxy.
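The inbound TLS sections above require certificates with specific SAN entries. A minimal sketch of generating a suitable self-signed certificate with `openssl` (assumes OpenSSL 1.1.1+ for `-addext`; paths and validity period are illustrative, not from the original docs):

```shell
# Generate a self-signed certificate for argocd-repo-server with the
# SAN entries the documentation above expects.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout /tmp/argocd-repo-server.key \
  -out /tmp/argocd-repo-server.crt \
  -subj "/CN=argocd-repo-server" \
  -addext "subjectAltName=DNS:argocd-repo-server,DNS:argocd-repo-server.argo-cd.svc"

# Inspect the certificate to confirm the SAN entries are present
openssl x509 -in /tmp/argocd-repo-server.crt -noout -text | grep "DNS:"
```

The resulting key pair can then be stored in the `argocd-repo-server-tls` secret with the `kubectl create secret tls` command shown earlier.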
# Resource Actions

## Overview

Argo CD allows operators to define custom actions which users can perform on specific resource types. This is used internally to provide actions like `restart` for a `DaemonSet`, or `retry` for an Argo Rollout.

Operators can add actions to custom resources in the form of a Lua script and expand those capabilities.

## Built-in Actions

The following are actions that are built into Argo CD. Each action name links to its Lua script definition:

{!docs/operator-manual/resource_actions_builtin.md!}

See the [RBAC documentation](rbac.md#the-action-action) for information on how to control access to these actions.

## Custom Resource Actions

Argo CD supports custom resource actions written in [Lua](https://www.lua.org/). This is useful if you:

* Have a custom resource for which Argo CD does not provide any built-in actions.
* Have a commonly performed manual task that might be error-prone if executed by users via `kubectl`.

The resource actions act on a single object.

You can define your own custom resource actions in the `argocd-cm` ConfigMap.

### Custom Resource Action Types

#### An action that modifies the source resource

This action modifies and returns the source resource. This kind of action was the only one available until version 2.8, and it is still supported.

#### An action that produces a list of new or modified resources

**An alpha feature, introduced in 2.8.**

This action returns a list of impacted resources; each impacted resource consists of a K8s resource and an operation to perform on it.

Currently supported operations are "create" and "patch"; "patch" is only supported for the source resource.

Creating new resources is possible by specifying a "create" operation for each such resource in the returned list. One of the returned resources can be the modified source object, with a "patch" operation, if needed. See the definition examples below.
### Define a Custom Resource Action in `argocd-cm` ConfigMap

Custom resource actions can be defined in the `resource.customizations.actions.<group_kind>` field of `argocd-cm`. The following example demonstrates a set of custom actions for `CronJob` resources; each action returns the modified CronJob.
The customizations key is in the format of `resource.customizations.actions.<apiGroup_Kind>`.

```yaml
resource.customizations.actions.batch_CronJob: |
  discovery.lua: |
    actions = {}
    actions["suspend"] = {["disabled"] = true}
    actions["resume"] = {["disabled"] = true}

    local suspend = false
    if obj.spec.suspend ~= nil then
        suspend = obj.spec.suspend
    end
    if suspend then
        actions["resume"]["disabled"] = false
    else
        actions["suspend"]["disabled"] = false
    end
    return actions
  definitions:
    - name: suspend
      action.lua: |
        obj.spec.suspend = true
        return obj
    - name: resume
      action.lua: |
        if obj.spec.suspend ~= nil and obj.spec.suspend then
            obj.spec.suspend = false
        end
        return obj
```

The `discovery.lua` script must return a table where the key name represents the action name. You can optionally include logic to enable or disable certain actions based on the current object state.

Each action name must be represented in the list of `definitions` with an accompanying `action.lua` script to control the resource modifications. The `obj` is a global variable which contains the resource. Each action script returns an optionally modified version of the resource. In this example, we are simply setting `.spec.suspend` to either `true` or `false`.

By default, defining a resource action customization will override any built-in action for this resource kind. As of Argo CD version 2.13.0, if you want to retain the built-in actions, you can set the `mergeBuiltinActions` key to `true`. Your custom actions will have precedence over the built-in actions.
```yaml
resource.customizations.actions.argoproj.io_Rollout: |
  mergeBuiltinActions: true
  discovery.lua: |
    actions = {}
    actions["do-things"] = {}
    return actions
  definitions:
    - name: do-things
      action.lua: |
        return obj
```

#### Creating new resources with a custom action

!!! important
    Creating resources via the Argo CD UI is an intentional, strategic departure from GitOps principles. We recommend that you use this feature sparingly and only for resources that are not part of the desired state of the application.

The resource the action is invoked on would be referred to as the `source resource`.
The new resource and all the resources implicitly created as a result must be permitted on the AppProject level, otherwise the creation will fail.

##### Creating a source resource's child resources with a custom action

If the new resource represents a k8s child of the source resource, the source resource `ownerReference` must be set on the new resource.
Here is an example Lua snippet that takes care of constructing a Job resource that is a child of a source CronJob resource - the `obj` is a global variable, which contains the source resource:

```lua
-- ...
ownerRef = {}
ownerRef.apiVersion = obj.apiVersion
ownerRef.kind = obj.kind
ownerRef.name = obj.metadata.name
ownerRef.uid = obj.metadata.uid
job = {}
job.metadata = {}
job.metadata.ownerReferences = {}
job.metadata.ownerReferences[1] = ownerRef
-- ...
```

##### Creating independent child resources with a custom action

If the new resource is independent of the source resource, the default behavior is that the new resource is not known to the App of the source resource (as it is not part of the desired state and does not have an `ownerReference`).
To make the App aware of the new resource, the `app.kubernetes.io/instance` label (or other ArgoCD tracking label, if configured) must be set on the resource.

It can be copied from the source resource, like this:

```lua
-- ...
newObj = {}
newObj.metadata = {}
newObj.metadata.labels = {}
newObj.metadata.labels["app.kubernetes.io/instance"] = obj.metadata.labels["app.kubernetes.io/instance"]
-- ...
```

While the new resource will be part of the App with the tracking label in place, it will be immediately deleted if auto-prune is set on the App.
To keep the resource, set the `Prune=false` annotation on the resource, with this Lua snippet:

```lua
-- ...
newObj.metadata.annotations = {}
newObj.metadata.annotations["argocd.argoproj.io/sync-options"] = "Prune=false"
-- ...
```

(Note that with `Prune=false` set, the resource will not be deleted upon the deletion of the App, and will require a manual cleanup.)

The resource and the App will now appear out of sync - which is the expected ArgoCD behavior upon creating a resource that is not part of the desired state.

If you wish to treat such an App as a synced one, add the following resource annotation in Lua code:

```lua
-- ...
newObj.metadata.annotations["argocd.argoproj.io/compare-options"] = "IgnoreExtraneous"
-- ...
``` #### An action that produces a list of resources - a complete example: ```yaml resource.customizations.actions.ConfigMap: | discovery.lua: | actions = {} actions["do-things"] = {} return actions definitions: - name: do-things action.lua: | -- Create a new ConfigMap cm1 = {} cm1.apiVersion = "v1" cm1.kind = "ConfigMap" cm1.metadata = {} cm1.metadata.name = "cm1" cm1.metadata.namespace = obj.metadata.namespace cm1.metadata.labels = {} -- Copy ArgoCD tracking label so that the resource is recognized by the App cm1.metadata.labels["app.kubernetes.io/instance"] = obj.metadata.labels["app.kubernetes.io/instance"] cm1.metadata.annotations = {} -- For Apps with auto-prune, set the prune false on the resource, so it does not get deleted cm1.metadata.annotations["argocd.argoproj.io/sync-options"] = "Prune=false" -- Keep the App synced even though it has a resource that is not in Git cm1.metadata.annotations["argocd.argoproj.io/compare-options"] = "IgnoreExtraneous" cm1.data = {} cm1.data.myKey1 = "myValue1" impactedResource1 = {} impactedResource1.operation = "create" impactedResource1.resource = cm1 -- Patch the original cm obj.metadata.labels["aKey"] = "aValue" impactedResource2 = {} impactedResource2.operation = "patch" impactedResource2.resource = obj result = {} result[1] = impactedResource1 result[2] = impactedResource2 return result ```
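As noted above, resources created by an action must be permitted at the AppProject level, otherwise the creation will fail. A minimal sketch of what that might look like for the ConfigMaps created by the `do-things` action (the project name is a hypothetical placeholder, not from the original docs):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: my-project        # hypothetical project owning the Application
  namespace: argocd
spec:
  # Permit namespace-scoped ConfigMaps so the action's "create" operation succeeds
  namespaceResourceWhitelist:
    - group: ""
      kind: ConfigMap
```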
# Reconcile Optimization

By default, an Argo CD Application is refreshed every time a resource that belongs to it changes.

Kubernetes controllers often update the resources they watch periodically, causing continuous reconcile operation on the Application and a high CPU usage on the `argocd-application-controller`. Argo CD allows you to optionally ignore resource updates on specific fields for [tracked resources](../user-guide/resource_tracking.md). For untracked resources, you can [use the `argocd.argoproj.io/ignore-resource-updates` annotation](#ignoring-updates-for-untracked-resources).

When a resource update is ignored, if the resource's [health status](./health.md) does not change, the Application that this resource belongs to will not be reconciled.

## System-Level Configuration

By default, `resource.ignoreResourceUpdatesEnabled` is set to `true`, enabling Argo CD to ignore resource updates. This default setting ensures that Argo CD maintains sustainable performance by reducing unnecessary reconcile operations. If you need to alter this behavior, you can explicitly set `resource.ignoreResourceUpdatesEnabled` to `false` in the `argocd-cm` ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  resource.ignoreResourceUpdatesEnabled: "false"
```

Argo CD allows ignoring resource updates at a specific JSON path, using [RFC6902 JSON patches](https://tools.ietf.org/html/rfc6902) and [JQ path expressions](https://stedolan.github.io/jq/manual/#path(path_expression)). It can be configured for a specified group and kind in the `resource.customizations` key of the `argocd-cm` ConfigMap.
The following is an example of a customization which ignores the `refreshTime` status field of an [`ExternalSecret`](https://external-secrets.io/main/api/externalsecret/) resource:

```yaml
data:
  resource.customizations.ignoreResourceUpdates.external-secrets.io_ExternalSecret: |
    jsonPointers:
      - /status/refreshTime
    # JQ equivalent of the above:
    # jqPathExpressions:
    #   - .status.refreshTime
```

It is possible to configure `ignoreResourceUpdates` to be applied to all tracked resources in every Application managed by an Argo CD instance. In order to do so, resource customizations can be configured like in the example below:

```yaml
data:
  resource.customizations.ignoreResourceUpdates.all: |
    jsonPointers:
      - /status
```

### Using ignoreDifferences to ignore reconcile

It is possible to use existing system-level `ignoreDifferences` customizations to ignore resource updates as well. Instead of copying all configurations, the `ignoreDifferencesOnResourceUpdates` setting can be used to add all ignored differences as ignored resource updates:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
data:
  resource.compareoptions: |
    ignoreDifferencesOnResourceUpdates: true
```

## Default Configuration

By default, the metadata fields `generation`, `resourceVersion` and `managedFields` are always ignored for all resources.

## Finding Resources to Ignore

The application controller logs when a resource change triggers a refresh. You can use these logs to find high-churn resource kinds and then inspect those resources to find which fields to ignore.

To find these logs, search for `"Requesting app refresh caused by object update"`. The logs include structured fields for `api-version` and `kind`. Counting the number of refreshes triggered by api-version/kind should reveal the high-churn resource kinds.

!!!note
    These logs are at the `debug` level. Configure the application-controller's log level to `debug`.
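Assuming the relevant controller logs have been saved to a file, a rough per-kind tally might look like the following (the log lines here are illustrative stand-ins, not Argo CD's exact log format):

```shell
# Illustrative log excerpt; real controller logs are structured and include
# the api-version and kind fields described above.
cat > /tmp/controller.log <<'EOF'
time="..." level=debug api-version=v1 kind=ConfigMap msg="Requesting app refresh caused by object update"
time="..." level=debug api-version=external-secrets.io/v1beta1 kind=ExternalSecret msg="Requesting app refresh caused by object update"
time="..." level=debug api-version=external-secrets.io/v1beta1 kind=ExternalSecret msg="Requesting app refresh caused by object update"
EOF

# Count refresh triggers per kind to surface high-churn resources
grep 'Requesting app refresh caused by object update' /tmp/controller.log \
  | grep -o 'kind=[^ ]*' \
  | sort | uniq -c | sort -rn
```

In this sketch, `ExternalSecret` would surface as the highest-churn kind and the first candidate for an `ignoreResourceUpdates` rule.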
Once you have identified some resources which change often, you can try to determine which fields are changing. Here is one approach:

```shell
kubectl get <resource> -o yaml > /tmp/before.yaml
# Wait a minute or two.
kubectl get <resource> -o yaml > /tmp/after.yaml
diff /tmp/before.yaml /tmp/after.yaml
```

The diff can give you a sense of which fields are changing and should perhaps be ignored.

## Checking Whether Resource Updates are Ignored

Whenever Argo CD skips a refresh due to an ignored resource update, the controller logs the following line: "Ignoring change of object because none of the watched resource fields have changed".

Search the application-controller logs for this line to confirm that your resource ignore rules are being applied.

!!!note
    These logs are at the `debug` level. Configure the application-controller's log level to `debug`.

## Examples

### argoproj.io/Application

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
data:
  resource.customizations.ignoreResourceUpdates.argoproj.io_Application: |
    jsonPointers:
      # Ignore when ownerReferences change, for example when a parent ApplicationSet changes often.
      - /metadata/ownerReferences
      # Ignore reconciledAt, since by itself it doesn't indicate any important change.
      - /status/reconciledAt
    jqPathExpressions:
      # Ignore lastTransitionTime for conditions; helpful when SharedResourceWarnings are being regularly updated but not
      # actually changing in content.
      - .status?.conditions[]?.lastTransitionTime
```

## Ignoring updates for untracked resources

ArgoCD will only apply the `ignoreResourceUpdates` configuration to tracked resources of an application. This means dependent resources, such as a `ReplicaSet` and `Pod` created by a `Deployment`, will not ignore any updates and will trigger a reconcile of the application for any changes.
If you want to apply the `ignoreResourceUpdates` configuration to an untracked resource, you can add the `argocd.argoproj.io/ignore-resource-updates=true` annotation in the dependent resource's manifest.

## Example

### CronJob

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
  namespace: test-cronjob
spec:
  schedule: "* * * * *"
  jobTemplate:
    metadata:
      annotations:
        argocd.argoproj.io/ignore-resource-updates: "true"
    spec:
      template:
        metadata:
          annotations:
            argocd.argoproj.io/ignore-resource-updates: "true"
        spec:
          containers:
            - name: hello
              image: busybox:1.28
              imagePullPolicy: IfNotPresent
              command:
                - /bin/sh
                - -c
                - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
```

The resource updates will be ignored based on the `ignoreResourceUpdates` configuration in the `argocd-cm` ConfigMap:

`argocd-cm`:

```yaml
resource.customizations.ignoreResourceUpdates.batch_Job: |
  jsonPointers:
    - /status
resource.customizations.ignoreResourceUpdates.Pod: |
  jsonPointers:
    - /status
```
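To sanity-check whether ignoring a field would actually quiet the updates seen in the diff approach described earlier, you can strip the candidate path from two snapshots and compare the remainder. A rough sketch with plain shell tools (the snapshots are illustrative; a real check would use the actual resource YAML):

```shell
# Two illustrative snapshots that differ only in an ignored status field
cat > /tmp/before.yaml <<'EOF'
metadata:
  name: hello
status:
  refreshTime: "2024-01-01T00:00:00Z"
EOF
cat > /tmp/after.yaml <<'EOF'
metadata:
  name: hello
status:
  refreshTime: "2024-01-01T00:01:00Z"
EOF

# Drop the candidate field (here: status.refreshTime) before comparing;
# an empty diff means the update would be effectively ignored.
grep -v 'refreshTime' /tmp/before.yaml > /tmp/before.filtered
grep -v 'refreshTime' /tmp/after.yaml > /tmp/after.filtered
diff /tmp/before.filtered /tmp/after.filtered && echo "no effective change"
```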
# High Availability

Argo CD is largely stateless. All data is persisted as Kubernetes objects, which in turn is stored in Kubernetes' etcd. Redis is only used as a throw-away cache and can be lost. When lost, it will be rebuilt without loss of service.

A set of [HA manifests](https://github.com/argoproj/argo-cd/tree/master/manifests/ha) is provided for users who wish to run Argo CD in a highly available manner. This runs more containers, and runs Redis in HA mode.

> **NOTE:** The HA installation will require at least three different nodes due to pod anti-affinity roles in the
> specs. Additionally, IPv6 only clusters are not supported.

## Scaling Up

### argocd-repo-server

**settings:**

The `argocd-repo-server` is responsible for cloning a Git repository, keeping it up to date and generating manifests using the appropriate tool.

* `argocd-repo-server` forks/execs config management tools to generate manifests. The fork can fail due to lack of memory or a limit on the number of OS threads. The `--parallelismlimit` flag controls how many manifest generations are running concurrently and helps avoid OOM kills.
* The `argocd-repo-server` ensures that the repository is in a clean state during manifest generation with config management tools such as Kustomize, Helm or a custom plugin. As a result, Git repositories with multiple applications might affect repository server performance. Read [Monorepo Scaling Considerations](#monorepo-scaling-considerations) for more information.
* `argocd-repo-server` clones the repository into `/tmp` (or the path specified in the `TMPDIR` env variable). The Pod might run out of disk space if it has too many repositories or if the repositories have a lot of files. To avoid this problem, mount a persistent volume.
* `argocd-repo-server` uses `git ls-remote` to resolve ambiguous revisions such as `HEAD`, a branch or a tag name. This operation happens frequently and might fail.
To avoid failed syncs use the `ARGOCD_GIT_ATTEMPTS_COUNT` environment variable to retry failed requests. * `argocd-repo-server` Every 3m (by default) Argo CD checks for changes to the app manifests. Argo CD assumes by default that manifests only change when the repo changes, so it caches the generated manifests (for 24h by default). With Kustomize remote bases, or in case a Helm chart gets changed without bumping its version number, the expected manifests can change even though the repo has not changed. By reducing the cache time, you can get the changes without waiting for 24h. Use `--repo-cache-expiration duration`, and we'd suggest in low volume environments you try '1h'. Bear in mind that this will negate the benefits of caching if set too low. * `argocd-repo-server` executes config management tools such as `helm` or `kustomize` and enforces a 90 second timeout. This timeout can be changed by using the `ARGOCD_EXEC_TIMEOUT` env variable. The value should be in the Go time duration string format, for example, `2m30s`. **metrics:** * `argocd_git_request_total` - Number of git requests. This metric provides two tags: `repo` - Git repo URL; `request_type` - `ls-remote` or `fetch`. * `ARGOCD_ENABLE_GRPC_TIME_HISTOGRAM` - Is an environment variable that enables collecting RPC performance metrics. Enable it if you need to troubleshoot performance issues. Note: This metric is expensive to both query and store! ### argocd-application-controller **settings:** The `argocd-application-controller` uses `argocd-repo-server` to get generated manifests and Kubernetes API server to get the actual cluster state. * each controller replica uses two separate queues to process application reconciliation (milliseconds) and app syncing (seconds). The number of queue processors for each queue is controlled by `--status-processors` (20 by default) and `--operation-processors` (10 by default) flags. Increase the number of processors if your Argo CD instance manages too many applications. 
For 1000 applications we use 50 for `--status-processors` and 25 for `--operation-processors`.

* The manifest generation typically takes the most time during reconciliation. The duration of manifest generation is limited to make sure the controller refresh queue does not overflow. The app reconciliation fails with a `Context deadline exceeded` error if the manifest generation is taking too much time. As a workaround, increase the value of `--repo-server-timeout-seconds` and consider scaling up the `argocd-repo-server` deployment.

* The controller uses Kubernetes watch APIs to maintain a lightweight Kubernetes cluster cache. This allows avoiding querying Kubernetes during app reconciliation and significantly improves performance. For performance reasons the controller monitors and caches only the preferred versions of a resource. During reconciliation, the controller might have to convert cached resources from the preferred version into a version of the resource stored in Git. If `kubectl convert` fails because the conversion is not supported, then the controller falls back to a Kubernetes API query, which slows down reconciliation. In this case, we advise using the preferred resource version in Git.

* The controller polls Git every 3m by default. You can change this duration using the `timeout.reconciliation` and `timeout.reconciliation.jitter` settings in the `argocd-cm` ConfigMap. The value of the fields is a duration string, e.g. `60s`, `1m`, `1h` or `1d`.

* If the controller is managing too many clusters and uses too much memory, then you can shard clusters across multiple controller replicas. To enable sharding, increase the number of replicas in the `argocd-application-controller` `StatefulSet` and repeat the number of replicas in the `ARGOCD_CONTROLLER_REPLICAS` environment variable. The strategic merge patch below demonstrates the changes required to configure two controller replicas.

* By default, the controller will update the cluster information every 10 seconds.
If there is a problem with your cluster network environment that is causing the update time to take a long time, you can try modifying the environment variable `ARGO_CD_UPDATE_CLUSTER_INFO_TIMEOUT` to increase the timeout (the unit is seconds).

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: argocd-application-controller
spec:
  replicas: 2
  template:
    spec:
      containers:
      - name: argocd-application-controller
        env:
        - name: ARGOCD_CONTROLLER_REPLICAS
          value: "2"
```

* In order to manually set the cluster's shard number, specify the optional `shard` property when creating a cluster. If not specified, it will be calculated on the fly by the application controller.

* The shard distribution algorithm of the `argocd-application-controller` can be set by using the `--sharding-method` parameter. Supported sharding methods are: `legacy` (default), `round-robin` and `consistent-hashing`:
    - `legacy` mode uses a `uid`-based distribution (non-uniform).
    - `round-robin` uses an equal distribution across all shards.
    - `consistent-hashing` uses the consistent hashing with bounded loads algorithm, which tends toward an equal distribution and also reduces cluster or application reshuffling in case of additions or removals of shards or clusters.

    The `--sharding-method` parameter can also be overridden by setting the key `controller.sharding.algorithm` in the `argocd-cmd-params-cm` ConfigMap (preferably) or by setting the `ARGOCD_CONTROLLER_SHARDING_ALGORITHM` environment variable and specifying the same possible values.

!!! warning "Alpha Features"
    The `round-robin` shard distribution algorithm is an experimental feature. Reshuffling is known to occur in certain scenarios with cluster removal. If the cluster at rank-0 is removed, reshuffling all clusters across shards will occur and may temporarily have negative performance impacts.

    The `consistent-hashing` shard distribution algorithm is an experimental feature.
Extensive benchmarks have been documented on the [CNOE blog](https://cnoe.io/blog/argo-cd-application-scalability) with encouraging results. Community feedback is highly appreciated before moving this feature to a production-ready state.

* A cluster can be manually assigned and forced to a `shard` by patching the `shard` field in the cluster secret to contain the shard number, e.g.:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mycluster-secret
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  shard: "1"
  name: mycluster.example.com
  server: https://mycluster.example.com
  config: |
    {
      "bearerToken": "<authentication token>",
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64 encoded certificate>"
      }
    }
```

* `ARGOCD_ENABLE_GRPC_TIME_HISTOGRAM` - environment variable that enables collecting RPC performance metrics. Enable it if you need to troubleshoot performance issues. Note: This metric is expensive to both query and store!

* `ARGOCD_CLUSTER_CACHE_LIST_PAGE_BUFFER_SIZE` - environment variable controlling the number of pages the controller buffers in memory when performing a list operation against the K8s API server while syncing the cluster cache. This is useful when the cluster contains a large number of resources and cluster sync times exceed the default etcd compaction interval timeout. In this scenario, when attempting to sync the cluster cache, the application controller may throw an error that the `continue parameter is too old to display a consistent list result`. Setting a higher value for this environment variable configures the controller with a larger buffer in which to store pre-fetched pages which are processed asynchronously, increasing the likelihood that all pages have been pulled before the etcd compaction interval timeout expires.
In the most extreme case, operators can set this value such that `ARGOCD_CLUSTER_CACHE_LIST_PAGE_SIZE * ARGOCD_CLUSTER_CACHE_LIST_PAGE_BUFFER_SIZE` exceeds the largest resource count (grouped by k8s api version, the granule of parallelism for list operations). In this case, all resources will be buffered in memory -- no api server request will be blocked by processing. * `ARGOCD_APPLICATION_TREE_SHARD_SIZE` - environment variable controlling the max number of resources stored in one Redis key. Splitting application tree into multiple keys helps to reduce the amount of traffic between the controller and Redis. The default value is 0, which means that the application tree is stored in a single Redis key. The reasonable value is 100. **metrics** * `argocd_app_reconcile` - reports application reconciliation duration in seconds. Can be used to build reconciliation duration heat map to get a high-level reconciliation performance picture. * `argocd_app_k8s_request_total` - number of k8s requests per application. The number of fallback Kubernetes API queries - useful to identify which application has a resource with non-preferred version and causes performance issues. ### argocd-server The `argocd-server` is stateless and probably the least likely to cause issues. To ensure there is no downtime during upgrades, consider increasing the number of replicas to `3` or more and repeat the number in the `ARGOCD_API_SERVER_REPLICAS` environment variable. The strategic merge patch below demonstrates this. ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: argocd-server spec: replicas: 3 template: spec: containers: - name: argocd-server env: - name: ARGOCD_API_SERVER_REPLICAS value: "3" ``` **settings:** * The `ARGOCD_API_SERVER_REPLICAS` environment variable is used to divide [the limit of concurrent login requests (`ARGOCD_MAX_CONCURRENT_LOGIN_REQUESTS_COUNT`)](./user-management/index.md#failed-logins-rate-limiting) between each replica. 
* The `ARGOCD_GRPC_MAX_SIZE_MB` environment variable allows specifying the max size of the server response message in megabytes. The default value is 200. You might need to increase this for an Argo CD instance that manages 3000+ applications.

### argocd-dex-server, argocd-redis

The `argocd-dex-server` uses an in-memory database, and two or more instances would have inconsistent data. `argocd-redis` is pre-configured with the understanding of only three total Redis servers/sentinels.

## Monorepo Scaling Considerations

Argo CD repo server maintains one repository clone locally and uses it for application manifest generation. If the manifest generation requires changing a file in the local repository clone, then only one concurrent manifest generation per server instance is allowed. This limitation might significantly slow down Argo CD if you have a monorepo with multiple applications (50+).

### Enable Concurrent Processing

Argo CD determines if manifest generation might change local files in the local repository clone based on the config management tool and application settings. If the manifest generation has no side effects, then requests are processed in parallel without a performance penalty. The following are known cases that might cause slowness and their workarounds:

* **Multiple Helm based applications pointing to the same directory in one Git repository:** for historical reasons Argo CD generates Helm manifests sequentially. To enable parallel generation, set `ARGOCD_HELM_ALLOW_CONCURRENCY=true` on the `argocd-repo-server` deployment or create a `.argocd-allow-concurrency` file. Future versions of Argo CD will enable this by default.

* **Multiple Custom plugin based applications:** avoid creating temporary files during manifest generation and create a `.argocd-allow-concurrency` file in the app directory, or use the sidecar plugin option, which processes each application using a temporary copy of the repository.
* **Multiple Kustomize applications in same repository with [parameter overrides](../user-guide/parameters.md):** sorry, no workaround for now. ### Manifest Paths Annotation Argo CD aggressively caches generated manifests and uses the repository commit SHA as a cache key. A new commit to the Git repository invalidates the cache for all applications configured in the repository. This can negatively affect repositories with multiple applications. You can use [webhooks](https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/webhook.md) and the `argocd.argoproj.io/manifest-generate-paths` Application CRD annotation to solve this problem and improve performance. The `argocd.argoproj.io/manifest-generate-paths` annotation contains a semicolon-separated list of paths within the Git repository that are used during manifest generation. It will use the paths specified in the annotation to compare the last cached revision to the latest commit. If no modified files match the paths specified in `argocd.argoproj.io/manifest-generate-paths`, then it will not trigger application reconciliation and the existing cache will be considered valid for the new commit. Installations that use a different repository for each application are **not** subject to this behavior and will likely get no benefit from using these annotations. Similarly, applications referencing an external Helm values file will not get the benefits of this feature when an unrelated change happens in the external source. For webhooks, the comparison is done using the files specified in the webhook event payload instead. !!! note Application manifest paths annotation support for webhooks depends on the git provider used for the Application. It is currently only supported for GitHub, GitLab, and Gogs based repos. * **Relative path** The annotation might contain a relative path. 
In this case the path is considered relative to the path specified in the application source: ```yaml apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: guestbook namespace: argocd annotations: # resolves to the 'guestbook' directory argocd.argoproj.io/manifest-generate-paths: . spec: source: repoURL: https://github.com/argoproj/argocd-example-apps.git targetRevision: HEAD path: guestbook # ... ``` * **Absolute path** The annotation value might be an absolute path starting with '/'. In this case path is considered as an absolute path within the Git repository: ```yaml apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: guestbook annotations: argocd.argoproj.io/manifest-generate-paths: /guestbook spec: source: repoURL: https://github.com/argoproj/argocd-example-apps.git targetRevision: HEAD path: guestbook # ... ``` * **Multiple paths** It is possible to put multiple paths into the annotation. Paths must be separated with a semicolon (`;`): ```yaml apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: guestbook annotations: # resolves to 'my-application' and 'shared' argocd.argoproj.io/manifest-generate-paths: .;../shared spec: source: repoURL: https://github.com/argoproj/argocd-example-apps.git targetRevision: HEAD path: my-application # ... ``` * **Glob paths** The annotation might contain a glob pattern path, which can be any pattern supported by the [Go filepath Match function](https://pkg.go.dev/path/filepath#Match): ```yaml apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: guestbook namespace: argocd annotations: # resolves to any file matching the pattern of *-secret.yaml in the top level shared folder argocd.argoproj.io/manifest-generate-paths: "/shared/*-secret.yaml" spec: source: repoURL: https://github.com/argoproj/argocd-example-apps.git targetRevision: HEAD path: guestbook # ... ``` !!! 
note If application manifest generation using the `argocd.argoproj.io/manifest-generate-paths` annotation feature is enabled, only the resources specified by this annotation will be sent to the CMP server for manifest generation, rather than the entire repository. To determine the appropriate resources, a common root path is calculated based on the paths provided in the annotation. The application path serves as the deepest path that can be selected as the root. ### Application Sync Timeout & Jitter Argo CD has a timeout for application syncs. It will trigger a refresh for each application periodically when the timeout expires. With a large number of applications, this will cause a spike in the refresh queue and can cause a spike to the repo-server component. To avoid this, you can set a jitter to the sync timeout which will spread out the refreshes and give time to the repo-server to catch up. The jitter is the maximum duration that can be added to the sync timeout, so if the sync timeout is 5 minutes and the jitter is 1 minute, then the actual timeout will be between 5 and 6 minutes. To configure the jitter you can set the following environment variables: * `ARGOCD_RECONCILIATION_JITTER` - The jitter to apply to the sync timeout. Disabled when value is 0. Defaults to 0. ## Rate Limiting Application Reconciliations To prevent high controller resource usage or sync loops caused either due to misbehaving apps or other environment specific factors, we can configure rate limits on the workqueues used by the application controller. There are two types of rate limits that can be configured: * Global rate limits * Per item rate limits The final rate limiter uses a combination of both and calculates the final backoff as `max(globalBackoff, perItemBackoff)`. ### Global rate limits This is disabled by default, it is a simple bucket based rate limiter that limits the number of items that can be queued per second. 
This is useful to prevent a large number of apps from being queued at the same time.

To configure the bucket limiter you can set the following environment variables:

* `WORKQUEUE_BUCKET_SIZE` - The number of items that can be queued in a single burst. Defaults to 500.
* `WORKQUEUE_BUCKET_QPS` - The number of items that can be queued per second. Defaults to MaxFloat64, which disables the limiter.

### Per item rate limits

This by default returns a fixed base delay/backoff value, but can be configured to return exponential values. The per-item rate limiter limits the number of times a particular item can be queued. This is based on exponential backoff, where the backoff time for an item keeps increasing exponentially if it is queued multiple times in a short period, but the backoff is reset automatically if a configured `cool down` period has elapsed since the last time the item was queued.

To configure the per-item limiter you can set the following environment variables:

* `WORKQUEUE_FAILURE_COOLDOWN_NS` - The cool down period in nanoseconds; once the period has elapsed for an item, the backoff is reset. Exponential backoff is disabled if set to 0 (default). Example value: 10 * 10^9 (= 10s).
* `WORKQUEUE_BASE_DELAY_NS` - The base delay in nanoseconds; this is the initial backoff used in the exponential backoff formula. Defaults to 1000 (= 1μs).
* `WORKQUEUE_MAX_DELAY_NS` - The max delay in nanoseconds; this is the max backoff limit. Defaults to 3 * 10^9 (= 3s).
* `WORKQUEUE_BACKOFF_FACTOR` - The backoff factor; this is the factor by which the backoff is increased for each retry. Defaults to 1.5.

The formula used to calculate the backoff time for an item, where `numRequeue` is the number of times the item has been queued and `lastRequeueTime` is the time at which the item was last queued:

- When `WORKQUEUE_FAILURE_COOLDOWN_NS` != 0:

```
backoff = time.Since(lastRequeueTime) >= WORKQUEUE_FAILURE_COOLDOWN_NS ?
WORKQUEUE_BASE_DELAY_NS : min( WORKQUEUE_MAX_DELAY_NS, WORKQUEUE_BASE_DELAY_NS * WORKQUEUE_BACKOFF_FACTOR ^ (numRequeue) ) ``` - When `WORKQUEUE_FAILURE_COOLDOWN_NS` = 0 : ``` backoff = WORKQUEUE_BASE_DELAY_NS ``` ## HTTP Request Retry Strategy In scenarios where network instability or transient server errors occur, the retry strategy ensures the robustness of HTTP communication by automatically resending failed requests. It uses a combination of maximum retries and backoff intervals to prevent overwhelming the server or thrashing the network. ### Configuring Retries The retry logic can be fine-tuned with the following environment variables: * `ARGOCD_K8SCLIENT_RETRY_MAX` - The maximum number of retries for each request. The request will be dropped after this count is reached. Defaults to 0 (no retries). * `ARGOCD_K8SCLIENT_RETRY_BASE_BACKOFF` - The initial backoff delay on the first retry attempt in ms. Subsequent retries will double this backoff time up to a maximum threshold. Defaults to 100ms. ### Backoff Strategy The backoff strategy employed is a simple exponential backoff without jitter. The backoff time increases exponentially with each retry attempt until a maximum backoff duration is reached. The formula for calculating the backoff time is: ``` backoff = min(retryWaitMax, baseRetryBackoff * (2 ^ retryAttempt)) ``` Where `retryAttempt` starts at 0 and increments by 1 for each subsequent retry. ### Maximum Wait Time There is a cap on the backoff time to prevent excessive wait times between retries. This cap is defined by: `retryWaitMax` - The maximum duration to wait before retrying. This ensures that retries happen within a reasonable timeframe. Defaults to 10 seconds. ### Non-Retriable Conditions Not all HTTP responses are eligible for retries. The following conditions will not trigger a retry: * Responses with a status code indicating client errors (4xx) except for 429 Too Many Requests. * Responses with the status code 501 Not Implemented. 
## CPU/Memory Profiling

Argo CD optionally exposes a profiling endpoint that can be used to profile the CPU and memory usage of an Argo CD component. The profiling endpoint is available on the metrics port of each component. See [metrics](./metrics.md) for more information about the port. For security reasons, the profiling endpoint is disabled by default. The endpoint can be enabled by setting the `server.profile.enabled` or `controller.profile.enabled` key of the [argocd-cmd-params-cm](argocd-cmd-params-cm.yaml) ConfigMap to `true`. Once the endpoint is enabled, you can use the `go tool pprof` command to collect CPU and memory profiles. Example:

```bash
$ kubectl port-forward svc/argocd-metrics 8082:8082
$ go tool pprof http://localhost:8082/debug/pprof/heap
```
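Putting the two keys named above together, enabling profiling for both the API server and the application controller amounts to a ConfigMap change like this (the affected components pick up the change after a restart):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
data:
  # Enable the pprof endpoint on the API server and the application controller.
  server.profile.enabled: "true"
  controller.profile.enabled: "true"
```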
might change local files in the local repository clone based on the config management tool and application settings If the manifest generation has no side effects then requests are processed in parallel without a performance penalty The following are known cases that might cause slowness and their workarounds Multiple Helm based applications pointing to the same directory in one Git repository for historical reasons Argo CD generates Helm manifests sequentially To enable parallel generation set ARGOCD HELM ALLOW CONCURRENCY true to argocd repo server deployment or create argocd allow concurrency file Future versions of Argo CD will enable this by default Multiple Custom plugin based applications avoid creating temporal files during manifest generation and create argocd allow concurrency file in the app directory or use the sidecar plugin option which processes each application using a temporary copy of the repository Multiple Kustomize applications in same repository with parameter overrides user guide parameters md sorry no workaround for now Manifest Paths Annotation Argo CD aggressively caches generated manifests and uses the repository commit SHA as a cache key A new commit to the Git repository invalidates the cache for all applications configured in the repository This can negatively affect repositories with multiple applications You can use webhooks https github com argoproj argo cd blob master docs operator manual webhook md and the argocd argoproj io manifest generate paths Application CRD annotation to solve this problem and improve performance The argocd argoproj io manifest generate paths annotation contains a semicolon separated list of paths within the Git repository that are used during manifest generation It will use the paths specified in the annotation to compare the last cached revision to the latest commit If no modified files match the paths specified in argocd argoproj io manifest generate paths then it will not trigger application 
reconciliation and the existing cache will be considered valid for the new commit Installations that use a different repository for each application are not subject to this behavior and will likely get no benefit from using these annotations Similarly applications referencing an external Helm values file will not get the benefits of this feature when an unrelated change happens in the external source For webhooks the comparison is done using the files specified in the webhook event payload instead note Application manifest paths annotation support for webhooks depends on the git provider used for the Application It is currently only supported for GitHub GitLab and Gogs based repos Relative path The annotation might contain a relative path In this case the path is considered relative to the path specified in the application source yaml apiVersion argoproj io v1alpha1 kind Application metadata name guestbook namespace argocd annotations resolves to the guestbook directory argocd argoproj io manifest generate paths spec source repoURL https github com argoproj argocd example apps git targetRevision HEAD path guestbook Absolute path The annotation value might be an absolute path starting with In this case path is considered as an absolute path within the Git repository yaml apiVersion argoproj io v1alpha1 kind Application metadata name guestbook annotations argocd argoproj io manifest generate paths guestbook spec source repoURL https github com argoproj argocd example apps git targetRevision HEAD path guestbook Multiple paths It is possible to put multiple paths into the annotation Paths must be separated with a semicolon yaml apiVersion argoproj io v1alpha1 kind Application metadata name guestbook annotations resolves to my application and shared argocd argoproj io manifest generate paths shared spec source repoURL https github com argoproj argocd example apps git targetRevision HEAD path my application Glob paths The annotation might contain a glob pattern path 
which can be any pattern supported by the Go filepath Match function https pkg go dev path filepath Match yaml apiVersion argoproj io v1alpha1 kind Application metadata name guestbook namespace argocd annotations resolves to any file matching the pattern of secret yaml in the top level shared folder argocd argoproj io manifest generate paths shared secret yaml spec source repoURL https github com argoproj argocd example apps git targetRevision HEAD path guestbook note If application manifest generation using the argocd argoproj io manifest generate paths annotation feature is enabled only the resources specified by this annotation will be sent to the CMP server for manifest generation rather than the entire repository To determine the appropriate resources a common root path is calculated based on the paths provided in the annotation The application path serves as the deepest path that can be selected as the root Application Sync Timeout Jitter Argo CD has a timeout for application syncs It will trigger a refresh for each application periodically when the timeout expires With a large number of applications this will cause a spike in the refresh queue and can cause a spike to the repo server component To avoid this you can set a jitter to the sync timeout which will spread out the refreshes and give time to the repo server to catch up The jitter is the maximum duration that can be added to the sync timeout so if the sync timeout is 5 minutes and the jitter is 1 minute then the actual timeout will be between 5 and 6 minutes To configure the jitter you can set the following environment variables ARGOCD RECONCILIATION JITTER The jitter to apply to the sync timeout Disabled when value is 0 Defaults to 0 Rate Limiting Application Reconciliations To prevent high controller resource usage or sync loops caused either due to misbehaving apps or other environment specific factors we can configure rate limits on the workqueues used by the application controller There are two 
types of rate limits that can be configured Global rate limits Per item rate limits The final rate limiter uses a combination of both and calculates the final backoff as max globalBackoff perItemBackoff Global rate limits This is disabled by default it is a simple bucket based rate limiter that limits the number of items that can be queued per second This is useful to prevent a large number of apps from being queued at the same time To configure the bucket limiter you can set the following environment variables WORKQUEUE BUCKET SIZE The number of items that can be queued in a single burst Defaults to 500 WORKQUEUE BUCKET QPS The number of items that can be queued per second Defaults to MaxFloat64 which disables the limiter Per item rate limits This by default returns a fixed base delay backoff value but can be configured to return exponential values Per item rate limiter limits the number of times a particular item can be queued This is based on exponential backoff where the backoff time for an item keeps increasing exponentially if it is queued multiple times in a short period but the backoff is reset automatically if a configured cool down period has elapsed since the last time the item was queued To configure the per item limiter you can set the following environment variables WORKQUEUE FAILURE COOLDOWN NS The cool down period in nanoseconds once period has elapsed for an item the backoff is reset Exponential backoff is disabled if set to 0 default eg values 10 10 9 10s WORKQUEUE BASE DELAY NS The base delay in nanoseconds this is the initial backoff used in the exponential backoff formula Defaults to 1000 1 s WORKQUEUE MAX DELAY NS The max delay in nanoseconds this is the max backoff limit Defaults to 3 10 9 3s WORKQUEUE BACKOFF FACTOR The backoff factor this is the factor by which the backoff is increased for each retry Defaults to 1 5 The formula used to calculate the backoff time for an item where numRequeue is the number of times the item has been queued 
and lastRequeueTime is the time at which the item was last queued When WORKQUEUE FAILURE COOLDOWN NS 0 backoff time Since lastRequeueTime WORKQUEUE FAILURE COOLDOWN NS WORKQUEUE BASE DELAY NS min WORKQUEUE MAX DELAY NS WORKQUEUE BASE DELAY NS WORKQUEUE BACKOFF FACTOR numRequeue When WORKQUEUE FAILURE COOLDOWN NS 0 backoff WORKQUEUE BASE DELAY NS HTTP Request Retry Strategy In scenarios where network instability or transient server errors occur the retry strategy ensures the robustness of HTTP communication by automatically resending failed requests It uses a combination of maximum retries and backoff intervals to prevent overwhelming the server or thrashing the network Configuring Retries The retry logic can be fine tuned with the following environment variables ARGOCD K8SCLIENT RETRY MAX The maximum number of retries for each request The request will be dropped after this count is reached Defaults to 0 no retries ARGOCD K8SCLIENT RETRY BASE BACKOFF The initial backoff delay on the first retry attempt in ms Subsequent retries will double this backoff time up to a maximum threshold Defaults to 100ms Backoff Strategy The backoff strategy employed is a simple exponential backoff without jitter The backoff time increases exponentially with each retry attempt until a maximum backoff duration is reached The formula for calculating the backoff time is backoff min retryWaitMax baseRetryBackoff 2 retryAttempt Where retryAttempt starts at 0 and increments by 1 for each subsequent retry Maximum Wait Time There is a cap on the backoff time to prevent excessive wait times between retries This cap is defined by retryWaitMax The maximum duration to wait before retrying This ensures that retries happen within a reasonable timeframe Defaults to 10 seconds Non Retriable Conditions Not all HTTP responses are eligible for retries The following conditions will not trigger a retry Responses with a status code indicating client errors 4xx except for 429 Too Many Requests Responses with 
the status code 501 Not Implemented CPU Memory Profiling Argo CD optionally exposes a profiling endpoint that can be used to profile the CPU and memory usage of the Argo CD component The profiling endpoint is available on metrics port of each component See metrics metrics md for more information about the port For security reasons the profiling endpoint is disabled by default The endpoint can be enabled by setting the server profile enabled or controller profile enabled key of argocd cmd params cm argocd cmd params cm yaml ConfigMap to true Once the endpoint is enabled you can use go profile tool to collect the CPU and memory profiles Example bash kubectl port forward svc argocd metrics 8082 8082 go tool pprof http localhost 8082 debug pprof heap
# Git Webhook Configuration ## Overview Argo CD polls Git repositories every three minutes to detect changes to the manifests. To eliminate this delay from polling, the API server can be configured to receive webhook events. Argo CD supports Git webhook notifications from GitHub, GitLab, Bitbucket, Bitbucket Server, Azure DevOps and Gogs. The following explains how to configure a Git webhook for GitHub, but the same process should be applicable to other providers. !!! note The webhook handler does not differentiate between branch events and tag events where the branch and tag names are the same. A hook event for a push to branch `x` will trigger a refresh for an app pointing at the same repo with `targetRevision: refs/tags/x`. ## 1. Create The WebHook In The Git Provider In your Git provider, navigate to the settings page where webhooks can be configured. The payload URL configured in the Git provider should use the `/api/webhook` endpoint of your Argo CD instance (e.g. `https://argocd.example.com/api/webhook`). If you wish to use a shared secret, input an arbitrary value in the secret. This value will be used when configuring the webhook in the next step. To prevent DDoS attacks with unauthenticated webhook events (the `/api/webhook` endpoint currently lacks rate limiting protection), it is recommended to limit the payload size. You can achieve this by configuring the `argocd-cm` ConfigMap with the `webhook.maxPayloadSizeMB` attribute. The default value is 1GB. ## Github ![Add Webhook](../assets/webhook-config.png "Add Webhook") !!! note When creating the webhook in GitHub, the "Content type" needs to be set to "application/json". The default value "application/x-www-form-urlencoded" is not supported by the library used to handle the hooks ## Azure DevOps ![Add Webhook](../assets/azure-devops-webhook-config.png "Add Webhook") Azure DevOps optionally supports securing the webhook using basic authentication. 
To use it, specify the username and password in the webhook configuration and configure the same username/password in `argocd-secret` Kubernetes secret in `webhook.azuredevops.username` and `webhook.azuredevops.password` keys. ## 2. Configure Argo CD With The WebHook Secret (Optional) Configuring a webhook shared secret is optional, since Argo CD will still refresh applications related to the Git repository, even with unauthenticated webhook events. This is safe to do since the contents of webhook payloads are considered untrusted, and will only result in a refresh of the application (a process which already occurs at three-minute intervals). If Argo CD is publicly accessible, then configuring a webhook secret is recommended to prevent a DDoS attack. In the `argocd-secret` Kubernetes secret, configure one of the following keys with the Git provider's webhook secret configured in step 1. | Provider | K8s Secret Key | |-----------------|----------------------------------| | GitHub | `webhook.github.secret` | | GitLab | `webhook.gitlab.secret` | | BitBucket | `webhook.bitbucket.uuid` | | BitBucketServer | `webhook.bitbucketserver.secret` | | Gogs | `webhook.gogs.secret` | | Azure DevOps | `webhook.azuredevops.username` | | | `webhook.azuredevops.password` | Edit the Argo CD Kubernetes secret: ```bash kubectl edit secret argocd-secret -n argocd ``` TIP: for ease of entering secrets, Kubernetes supports inputting secrets in the `stringData` field, which saves you the trouble of base64 encoding the values and copying it to the `data` field. Simply copy the shared webhook secret created in step 1, to the corresponding GitHub/GitLab/BitBucket key under the `stringData` field: ```yaml apiVersion: v1 kind: Secret metadata: name: argocd-secret namespace: argocd type: Opaque data: ... stringData: # github webhook secret webhook.github.secret: shhhh! it's a GitHub secret # gitlab webhook secret webhook.gitlab.secret: shhhh! 
it's a GitLab secret # bitbucket webhook secret webhook.bitbucket.uuid: your-bitbucket-uuid # bitbucket server webhook secret webhook.bitbucketserver.secret: shhhh! it's a Bitbucket server secret # gogs server webhook secret webhook.gogs.secret: shhhh! it's a gogs server secret # azuredevops username and password webhook.azuredevops.username: admin webhook.azuredevops.password: secret-password ``` After saving, the changes should take effect automatically. ### Alternative If you want to store webhook data in **another** Kubernetes `Secret`, instead of `argocd-secret`. ArgoCD knows to check the keys under `data` in your Kubernetes `Secret` starts with `$`, then your Kubernetes `Secret` name and `:` (colon). Syntax: `$<k8s_secret_name>:<a_key_in_that_k8s_secret>` > NOTE: Secret must have label `app.kubernetes.io/part-of: argocd` For more information refer to the corresponding section in the [User Management Documentation](user-management/index.md#alternative).
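For background on what the shared secret buys you: providers such as GitHub sign each delivery by computing an HMAC-SHA256 of the raw payload with the secret and sending it in the `X-Hub-Signature-256` header, which the receiver recomputes and compares. A minimal verification sketch (illustrative only, not Argo CD's actual handler code):

```python
import hashlib
import hmac

def github_signature_ok(secret: bytes, payload: bytes, signature_header: str) -> bool:
    """Validate a GitHub-style 'X-Hub-Signature-256' header against the raw request body."""
    expected = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking the expected signature via timing.
    return hmac.compare_digest(expected, signature_header)

secret = b"shhhh! it's a GitHub secret"
body = b'{"ref": "refs/heads/main"}'
# Simulate the header GitHub would attach to this delivery.
header = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
print(github_signature_ok(secret, body, header))  # True
```

Because the HMAC covers the raw body, any tampering with the payload in transit invalidates the signature, which is why unauthenticated webhook events are treated as untrusted refresh hints only.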
"argocd Argo CD applications projects and settings can be defined declaratively using Kubernetes man(...TRUNCATED)
"# Declarative Setup\n\nArgo CD applications, projects and settings can be defined declaratively usi(...TRUNCATED)
"argocd Config Management Plugins a Config Management Plugin CMP Argo CD s native config management (...TRUNCATED)
"\n# Config Management Plugins\n\nArgo CD's \"native\" config management tools are Helm, Jsonnet, an(...TRUNCATED)
"argocd Metrics Application Controller Metrics gauge Information about Applications It contains labe(...TRUNCATED)
"# Metrics\n\nArgo CD exposes different sets of Prometheus metrics per server.\n\n## Application Con(...TRUNCATED)
"argocd The following groups of features won t be available in this engine capable of getting the de(...TRUNCATED)
"# Argo CD Core\n\n## Introduction\n\nArgo CD Core is a different installation that runs Argo CD in (...TRUNCATED)

Technical Documentation Dataset

A curated collection of technical documentation and guides spanning various cloud-native technologies, infrastructure tools, and machine learning frameworks. This dataset contains 1,397 documents in JSONL format, covering essential topics for modern software development and DevOps practices.

Dataset Overview

This dataset includes documentation across multiple domains:

  • Cloud Platforms: GCP (83 docs), EKS (33 docs)
  • Kubernetes Ecosystem: Kubernetes reference (251 docs), ArgoCD (60 docs), Cilium (33 docs)
  • Infrastructure as Code: Terraform (151 docs)
  • Observability: Prometheus (32 docs), Grafana (43 docs)
  • Service Mesh: Istio (33 docs)
  • Machine Learning: scikit-learn (55 docs)
  • And more: Including Docker, Redis, Linux, and various Kubernetes-related tools

Format

Each document is stored in JSONL format, making it easy to process and integrate into various applications. The dataset is hosted on Hugging Face for convenient access and version control.
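Since JSONL stores one JSON object per line, a minimal loader is only a few lines. The field names below (`site`, `text`) are hypothetical — inspect the actual records for the dataset's real schema:

```python
import io
import json

def load_jsonl(stream):
    """Yield one parsed record per non-empty line of a JSONL stream."""
    for line in stream:
        line = line.strip()
        if line:
            yield json.loads(line)

# Hypothetical sample records; real field names may differ.
sample = io.StringIO(
    '{"site": "argocd", "text": "Git Webhook Configuration"}\n'
    '{"site": "kubernetes", "text": "Pod lifecycle"}\n'
)
records = list(load_jsonl(sample))
print(len(records))  # 2
```

The same generator works unchanged over an open file handle, so large dumps can be streamed without loading the whole file into memory.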

Usage

This collection is particularly useful for:

  • Technical documentation systems
  • Knowledge bases
  • Search implementations
  • Training documentation-related ML models

You can access and download the dataset directly from the Hugging Face repository using standard tools and libraries.
