# Terraform Config Validator Policy Library

This repo contains a library of constraint templates and sample constraints to be used for Terraform resource change requests. If you're looking for the CAI variant, please see [Config Validator](https://github.com/lykaasegura/w-secteam-repo).

Everything in this repository has been developed as a parallel to CAI Config Validator. The difference is that the Constraint/Template schemas target `validation.resourcechange.terraform.cloud.google.com` instead of `validation.gcp.forsetisecurity.org`. This ensures that the policies only target Terraform resource changes, instead of the entire CAI metadata library from a project, folder, or organization. Use this when you intend to validate changes, rather than declaratively manage a GCP cloud environment.

## User Guide

See [docs/user_guide.md](docs/user_guide.md) for information on how to use this library.

See [docs/functional_principles.md](docs/functional_principles.md) for information on how to **develop** your own policies to use with `gcloud beta terraform vet`.

## Creating Policies in the Constraint Framework

This library is set up in the **Constraint Framework** style. This means that we utilize Gatekeeper Constraints and ConstraintTemplates to interpret and apply Rego logic to incoming Terraform change resources. This can be challenging to understand at first, so please refer to the [functional principles](docs/functional_principles.md) documentation found in the `docs` folder.

## General Differences

This library is intended to validate Terraform plan resources. Therefore, as mentioned, the target has been swapped from `validation.gcp.forsetisecurity.org` to `validation.resourcechange.terraform.cloud.google.com`. This also means that the Constraint and ConstraintTemplate definitions have had to change from Gatekeeper API version `v1alpha1` to `v1beta1`, as this functionality is currently under development. As a result, the Rego policy language has also had to change. If you use CAI Constraints and Templates (i.e. `v1alpha1` Constraints/Templates), those inlined Rego policies **will not work.** You can check out documentation on how to create Terraform policies in the [`gcloud beta terraform vet` documentation](https://cloud.google.com/docs/terraform/policy-validation/create-terraform-constraints).

## Working with this policy library

The operation of this library is similar to the CAI library, as the development flow with Make and other tools has proven to be quite efficient and helpful. Check the [user guide](docs/user_guide.md) for the documentation required to get this library working for your needs.

### Initializing a policy library

You can easily set up a new (local) policy library by downloading a [bundle](./docs/index.md#policy-bundles) using [kpt](https://kpt.dev/). Download the full policy library and install the [Forseti bundle](./docs/bundles/forseti-security.md):

```
export BUNDLE=forseti-security
kpt pkg get https://github.com/GoogleCloudPlatform/policy-library.git ./policy-library
kpt fn source policy-library/samples/ | \
  kpt fn eval - --image gcr.io/config-validator/get-policy-bundle:latest -- bundle=$BUNDLE | \
  kpt fn sink policy-library/policies/constraints/$BUNDLE
```

Once you have initialized a library, you might want to save it to [git](./docs/user_guide.md#get-started-with-the-policy-library-repository).
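With the library in place, a quick way to exercise it is to run `gcloud beta terraform vet` against a Terraform plan exported as JSON. The following is a sketch with placeholder file names; the full end-to-end workflow is covered in the user guide.

```
# Generate a plan, export it as JSON, then validate it against the local library.
terraform plan -out=test.tfplan
terraform show -json ./test.tfplan > ./tfplan.json
gcloud beta terraform vet tfplan.json --policy-library=policy-library
```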
### Developing a Constraint

If this library doesn't contain a constraint that matches your use case, you can develop a new one using the [Constraint Template Authoring Guide](docs/functional_principles.md).

#### Available Commands

```
make audit                Run audit against real CAI dump data
make build                Format and build
make build_templates      Inline Rego rules into constraint templates
make format               Format Rego rules
make help                 Prints help for targets with comments
make test                 Test constraint templates via OPA
```

#### Inlining

You can run `make build` to automatically inline Rego rules into your constraint templates. This is done by finding `#INLINE("filename")` and `#ENDINLINE` statements in your YAML, and replacing everything in between with the contents of the file. For example, running `make build` would replace the raw content with the replaced content below. A typical development loop that uses these targets is sketched at the end of this section.

Raw:

```
#INLINE("my_rule.rego")
# This text will be replaced
#ENDINLINE
```

Replaced:

```
#INLINE("my_rule.rego")
#contents of my_rule.rego
#ENDINLINE
```

#### Linting Policies

Config Validator provides a policy linter. You can invoke it as:

```
go get github.com/GoogleCloudPlatform/config-validator/cmd/policy-tool
policy-tool --policies ./policies --policies ./samples --libs ./lib
```

#### Local CI

You can run the cloudbuild CI locally as follows:

```
gcloud components install cloud-build-local
cloud-build-local --config ./cloudbuild.yaml --dryrun=false .
```

#### Updating CI Images

You can update the CI images to add new versions of rego/opa as they are released.

```
# Rebuild all images.
make -j ci-images

# Rebuild a single image
make ci-image-v1.16.0
```
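Putting the Make targets above together, a typical local development loop might look like the following (a sketch; the targets are those listed under Available Commands):

```
# Format the Rego, inline it into the constraint templates, then run the OPA unit tests.
make format
make build
make test
```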
## Config Validator | Setup & User Guide

### Go from setup to proof-of-concept in under 1 hour

**Table of Contents**

* [Overview](#overview)
* [How to set up constraints with Policy Library](#how-to-set-up-constraints-with-policy-library)
* [Get started with the Policy Library repository](#get-started-with-the-policy-library-repository)
* [Instantiate constraints](#instantiate-constraints)
* [How to validate policies](#how-to-validate-policies)
* [Deploy Forseti](#deploy-forseti)
* [Policy Library Sync from Git Repository](https://forsetisecurity.org/docs/latest/configure/config-validator/policy-library-sync-from-git-repo.html)
* [Policy Library Sync from GCS](https://forsetisecurity.org/docs/latest/configure/config-validator/policy-library-sync-from-gcs.html)
* [End to end workflow with sample constraint](#end-to-end-workflow-with-sample-constraint)
* [Contact Info](#contact-info)

## Overview

This tool is designed to perform policy validation checks on Terraform resource changes. It will not help with ongoing monitoring of your organization hierarchy, so if you're looking for that, please find the [config-validator](https://github.com/GoogleCloudPlatform/config-validator) project and associated [policy-library](https://github.com/GoogleCloudPlatform/policy-library) to get started with Cloud Asset Inventory policies.

Designed as an offshoot of the aforementioned policy-library, we set out to design a similar library that targets resource changes before Terraform deployments. By refactoring the Rego policies in our library, we were able to target `validation.resourcechange.terraform.cloud.google.com` instead of the forsetisecurity target for CAI data. This allows for policy control in cases where the current state of the environment would clearly conflict with security policies, but you can't enforce fine-grained control to allow for that state to exist while locking out nearby features from Terraform. This would likely be the case in automated IAM role or permission granting in a project with a super-admin. The super-admin may need to be there, and if using CAI policy validation, the pipeline would always fail if you define policies that limit the scope of a user's control. Keep in mind that this behavior may lead to security vulnerabilities, because the tool does not perform any ongoing monitoring.

## How to set up constraints with Policy Library

### Get started with the Policy Library repository

The Policy Library repository contains the following directories:

* `policies`
  * `constraints`: This is initially empty. You should place your constraint files here.
  * `templates`: This directory contains pre-defined constraint templates.
* `validator`: This directory contains the `.rego` files and their associated unit tests. You do not need to touch this directory unless you intend to modify existing constraint templates or create new ones. Running `make build` will inline the Rego content in the corresponding constraint template files.

This repository contains a set of pre-defined constraint templates. You can duplicate this repository into a private repository. First you should create a new **private** git repository. For example, if you use GitHub then you can use the [GitHub UI](https://github.com/new). Then follow the steps below to get everything set up.

This policy library can also be made public, but it is not recommended: making your policy library public would allow others to see what you are, and **ARE NOT**, scanning for.
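If you prefer the command line over the GitHub UI, the private repository can also be created with the GitHub CLI (a sketch; assumes `gh` is installed and authenticated, and is not required by this guide):

```
# Create an empty private repository under your GitHub account.
gh repo create ${YOUR_GITHUB_USERNAME}/policy-library --private
```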
#### Duplicate Policy Library Repository

To run the following commands, you will need to configure git to connect securely. It is recommended to connect with SSH. [Here is a helpful resource](https://help.github.com/en/github/authenticating-to-github/connecting-to-github-with-ssh) for learning about how this works, including steps to set this up for GitHub repositories; other providers offer this feature as well.

```
export GIT_REPO_ADDR="git@github.com:${YOUR_GITHUB_USERNAME}/policy-library.git"
git clone --bare https://github.com/tdesrosi/gcp-terraform-config-validator.git
# The bare clone directory is named after the source repository.
cd gcp-terraform-config-validator.git
git push --mirror ${GIT_REPO_ADDR}
cd ..
rm -rf gcp-terraform-config-validator.git
git clone ${GIT_REPO_ADDR}
```

#### Setup Constraints

Then you need to examine the available constraint templates inside the `templates` directory. Pick the constraint templates that you wish to use, create constraint YAML files corresponding to those templates, and place them under `policies/constraints`. Commit the newly created constraint files to **your** Git repository. For example, assuming you have created a Git repository named "policy-library" under your GitHub account, you can use the following commands to perform the initial commit:

```
cd policy-library
# Add new constraints...
git add --all
git commit -m "Initial commit of policy library constraints"
git push -u origin master
```

#### Pull in latest changes from Public Repository

Periodically you should pull any changes from the public repository, which might contain new templates and Rego files.

```
git remote add public https://github.com/tdesrosi/policy-library-tf-resource-change.git
git pull public main
git push origin main
```

### Instantiate constraints

The constraint template library only contains templates. Templates specify the constraint logic, and you must create constraints based on those templates in order to enforce them. Constraint parameters are defined as YAML files in the following format:

```
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: # place constraint template kind here
metadata:
  name: # place constraint name here
spec:
  severity: # low, medium, or high
  match:
    target: [] # put the constraint application target here
    exclude: [] # optional, default is no exclusions
  parameters: # put the parameters defined in constraint template here
```

The <code><em>target</em></code> field is specified in a path-like format. It specifies where in the GCP resources hierarchy the constraint is to be applied. For example:

<table>
  <tr>
    <td>Target</td>
    <td>Description</td>
  </tr>
  <tr>
    <td>organizations/**</td>
    <td>All organizations</td>
  </tr>
  <tr>
    <td>organizations/123/**</td>
    <td>Everything in organization 123</td>
  </tr>
  <tr>
    <td>organizations/123/folders/**</td>
    <td>Everything in organization 123 that is under a folder</td>
  </tr>
  <tr>
    <td>organizations/123/folders/456</td>
    <td>Everything in folder 456 in organization 123</td>
  </tr>
  <tr>
    <td>organizations/123/folders/456/projects/789</td>
    <td>Everything in project 789 in folder 456 in organization 123</td>
  </tr>
</table>

The <code><em>exclude</em></code> field follows the same pattern and has precedence over the <code><em>target</em></code> field. If a resource is in both, it will be excluded.

The schema of the <code><em>parameters</em></code> field is defined in the constraint template, using the [OpenAPI V3](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.0.md#schemaObject) schema. This is the same validation schema used in Kubernetes custom resource definitions.
Every template contains a <code><em>validation</em></code> section that looks like the following:

```
validation:
  openAPIV3Schema:
    properties:
      mode:
        type: string
      instances:
        type: array
        items: string
```

According to the template above, the parameters field in the constraint file should contain a string named `mode` and a string array named <code><em>instances</em></code>. For example:

```
parameters:
  mode: allowlist
  instances:
    - //compute.googleapis.com/projects/test-project/zones/us-east1-b/instances/one
    - //compute.googleapis.com/projects/test-project/zones/us-east1-b/instances/two
```

These parameters specify that two VM instances may have external IP addresses. They are exempt from the constraint since they are allowlisted.

Here is a complete example of a sample external IP address constraint file:

```
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: TFGCPExternalIpAccessConstraintV1
metadata:
  name: forbid-external-ip-allowlist
spec:
  severity: high
  match:
    target: ["organizations/**"]
  parameters:
    mode: "allowlist"
    instances:
      - //compute.googleapis.com/projects/test-project/zones/us-east1-b/instances/one
      - //compute.googleapis.com/projects/test-project/zones/us-east1-b/instances/two
```

## How to validate policies

Follow the [instructions](https://cloud.google.com/docs/terraform/policy-validation/validate-policies) to validate policies in your local or production environments.

## End to end workflow with sample constraint

In this section, you will apply a constraint that enforces IAM policy member domain restriction using [Cloud Shell](https://cloud.google.com/shell/).

First click on this [link](https://console.cloud.google.com/cloudshell/open?cloudshell_image=gcr.io/graphite-cloud-shell-images/terraform:latest&cloudshell_git_repo=https://github.com/tdesrosi/policy-tf-resource-change.git) to open a new Cloud Shell session. The Cloud Shell session has Terraform pre-installed and the Policy Library repository cloned. Once you have the session open, the next step is to copy over the sample IAM domain restriction constraint:

```
cp samples/constraints/iam_service_accounts_only.yaml policies/constraints
```

Let's take a look at this constraint:

```
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: TFGCPIAMAllowedPolicyMemberDomainsConstraintV2
metadata:
  name: service-accounts-only
  annotations:
    description: Checks that members that have been granted IAM roles belong to allowlisted
      domains. Block IAM role bindings for non-service accounts by domain (gserviceaccount.com)
spec:
  severity: high
  parameters:
    domains:
      - gserviceaccount.com
```

It specifies that only members from the gserviceaccount.com domain can be present in an IAM policy. To verify that it works, let's attempt to create a project. Create the following Terraform `main.tf` file:

```
provider "google" {
  version = "~> 1.20"
  project = "your-terraform-provider-project"
}

resource "random_id" "proj" {
  byte_length = 8
}

resource "google_project" "sample_project" {
  project_id = "validator-${random_id.proj.hex}"
  name       = "config validator test project"
}

resource "google_project_iam_binding" "sample_iam_binding" {
  project = "${google_project.sample_project.project_id}"
  role    = "roles/owner"

  members = [
    "user:your-email@your-domain"
  ]
}
```

Make sure to specify your Terraform [provider project](https://www.terraform.io/docs/providers/google/getting_started.html) and email address.
Then initialize Terraform and generate a Terraform plan:

```
terraform init
terraform plan -out=test.tfplan
terraform show -json ./test.tfplan > ./tfplan.json
```

Since your email address is in the IAM policy binding, the plan should result in a violation. Let's try this out:

```
gcloud beta terraform vet tfplan.json --policy-library=policy-library
```

The Terraform validator should return a violation. As a test, you can relax the constraint to make the violation go away. Edit the `policy-library/policies/constraints/iam_service_accounts_only.yaml` file and append your email domain to the domains allowlist:

```
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: TFGCPIAMAllowedPolicyMemberDomainsConstraintV2
metadata:
  name: service-accounts-only
  annotations:
    description: Checks that members that have been granted IAM roles belong to allowlisted
      domains. Block IAM role bindings for non-service accounts by domain (gserviceaccount.com)
spec:
  severity: high
  parameters:
    domains:
      - gserviceaccount.com
      - your-email-domain.com
```

Then run Terraform plan and validate the output again:

```
terraform plan -out=test.tfplan
terraform show -json ./test.tfplan > ./tfplan.json
gcloud beta terraform vet tfplan.json --policy-library=policy-library
```

The command above should result in no violations found.

## Contact Info

Questions or comments? Please contact tdesrosi@google.com for this project, or validator-support@google.com for information about the terraform-validator project.
# Functional Principles of the Constraint Framework

You'll notice that this repository contains a handful of folders, each with different items. It's confusing at first, so let's dive into it! First, let's start with how the library is organized.

## Folder Structure

The folder structure below contains a TL;DR explanation of each item's purpose. We'll go into further detail below.

```bash
policy-library-tf-resource-change (root)/
β”œβ”€β”€ docs/
β”‚   └── *Contains documentation on this library*
β”œβ”€β”€ lib/
β”‚   └── *Contains shared rego functions*
β”œβ”€β”€ policies/
β”‚   β”œβ”€β”€ constraints/
β”‚   β”‚   └── *Contains constraint yaml files*
β”‚   └── templates/
β”‚       └── *Contains constraint template yaml files*
β”œβ”€β”€ samples/
β”‚   └── constraints/
β”‚       └── *Contains sample constraint yaml files (not checked at runtime)*
β”œβ”€β”€ scripts/
β”‚   └── *Contains helper scripts to assist with policy development*
β”œβ”€β”€ validator/
β”‚   β”œβ”€β”€ *Contains rego policies (used in constraint template yaml files)*
β”‚   β”œβ”€β”€ *Files ending in `*_test.rego` are base unit testing files for their associated rego policies*
β”‚   └── test/
β”‚       └── *Contains test data/constraints used for unit testing*
└── Makefile  *Allows user to use `make ...` commands for policy development*
```

## Basic Operation

When you run `gcloud beta terraform vet`, a number of things happen. First, the resource being tested (i.e. your Terraform plan JSON file) is translated from its native Terraform schema to the Cloud Asset Inventory (CAI) schema. Keep in mind that the data being passed in is exactly the same; it's just translated into a language that the validator can understand. Because the validator is also set up to perform ongoing policy validation on the environment, all resources are cast into the CAI schema to bring all data together. Don't be alarmed, however: our constraints and constraint templates are reworked to tell the validator to *only* look at terraform resource changes. We'll explore how that happens later on.

Once the data is in CAI format, `gcloud beta terraform vet` then initializes the constraints and templates in your policy library. Let's look at what each type of yaml file contains:

| Type | Description |
| -- | -- |
| ConstraintTemplate | Describes how to integrate logic from a rego policy into rules that the validator checks. This file will describe how constraints that use it should be configured, and also provides the rego policy as an inlined yaml string field. You'll also notice the `target` definition, which reads as `validation.resourcechange.terraform.cloud.google.com`. This is the main difference between this policy library and the [existing CAI policy library](https://github.com/GoogleCloudPlatform/policy-library). **This is the part of the library that allows us to target terraform resource changes and skip current environment monitoring.** Ultimately, think of ConstraintTemplates as definitions that our corresponding constraint(s) must abide by. |
| Constraint | Implements single rules that depend on a constraint template. Remember that constraint templates contain a schema that describes how to write constraints that depend on them. The actual constraint will contain the data that you'll be looking to test. For example, if you want to validate that terraform only creates IAM bindings for allowed domains, you would create a constraint that passes in the *mode* `allowlist`, and a list of `members` (domains, in this case) that terraform is allowed to create. The constraint tells the validator to fail the pipeline if terraform tries to bind an IAM role to a user outside of the domain(s) you've passed in. <br /><br /> You can create multiple constraints for any given ConstraintTemplate; you just need to make sure that the rules don't conflict with one another. For example, any allowlist/denylist policy would be difficult to create multiple constraints for. The reason is that if one constraint is of type `allowlist` and the other is of type `denylist`, it's very easy to introduce overlaps in the sets that you create. By this logic, if *domain-a.com* is not part of your policy member domain allowlist or denylist constraints, it'll automatically be denied. This is because allowlists are more restrictive than denylists: even though a domain is not denied by the denylist, it will still be denied if it's missing from the allowlist. |

If this is still confusing to you, the [Gatekeeper documentation](https://github.com/GoogleCloudPlatform/policy-library) provides a detailed explanation of how Constraints and ConstraintTemplates work together. Yes, Gatekeeper is generally synonymous with Kubernetes resources, but it's simply an extension of Open Policy Agent. We use Gatekeeper's API in the terraform validator, and it's how the tool is able to interpret policies from the policy library.

## Developing Terraform Constraints

In order to develop new policies, you must first understand the process for writing rego policies and how the constraint framework is used. The very first step is to collect some sample data. Terraform-based constraints operate on **resource change data**, which comes from the `resource_changes` key of [Terraform plan JSON](https://developer.hashicorp.com/terraform/internals/json-format). This will be an array of objects that describe changes that terraform will try to make to the target environment. Here's an example that comes directly from one of the data files used in one of the unit tests in this library.

```json
"resource_changes": [
  {
    "address": "google_project_iam_binding.iam-service-account-user-12345",
    "mode": "managed",
    "type": "google_project_iam_binding",
    "name": "iam-service-account-user-12345",
    "provider_name": "registry.terraform.io/hashicorp/google",
    "change": {
      "actions": [
        "create"
      ],
      "before": null,
      "after": {
        "condition": [],
        "members": [
          "serviceAccount:service-12345@notiam.gserviceaccount.com",
          "user:bad@notgoogle.com"
        ],
        "project": "12345",
        "role": "roles/iam.serviceAccountUser"
      },
      "after_unknown": {
        "condition": [],
        "etag": true,
        "id": true,
        "members": [
          false,
          false
        ]
      },
      "before_sensitive": false,
      "after_sensitive": {
        "condition": [],
        "members": [
          false,
          false
        ]
      }
    }
  },
  [...]
]
```

You can start to see the schema of each resource, which will help us with the next step. It's time to write the Rego policy, which you can put under the `validator/` folder in the project. Note that Rego policies for terraform resource changes are slightly different than their CAI counterparts.
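To collect similar sample data from your own configuration, you can export a plan as JSON and pull out the `resource_changes` array. This is a sketch with placeholder file names; it assumes `jq` is installed.

```bash
# Export the plan as JSON and extract the resource_changes array for inspection.
terraform plan -out=test.tfplan
terraform show -json ./test.tfplan | jq '.resource_changes' > resource_changes.json
```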
Let's go through a sample policy that allows us to allowlist/denylist certain IAM roles:

```rego
package templates.gcp.TFGCPIAMAllowBanRolesConstraintV1

violation[{
	"msg": message,
	"details": metadata,
}] {
	params := input.parameters
	resource := input.review

	resource.type == "google_project_iam_binding"
	not resource.change.actions[0] == "delete"

	role := resource.change.after.role
	matches_found = [r | r := config_pattern(role); glob.match(params.roles[_], [], r)]

	mode := object.get(params, "mode", "allowlist")
	target_match_count(mode, desired_count)
	count(matches_found) != desired_count

	message := output_msg(desired_count, resource.name, role)
	metadata := {
		"resource": resource.name,
		"role": role,
	}
}

target_match_count(mode) = 0 {
	mode == "denylist"
}

target_match_count(mode) = 1 {
	mode == "allowlist"
}

output_msg(0, asset_name, role) = msg {
	msg := sprintf("%v is in the banned list of IAM policy for %v", [role, asset_name])
}

output_msg(1, asset_name, role) = msg {
	msg := sprintf("%v is NOT in the allowed list of IAM policy for %v", [role, asset_name])
}

config_pattern(old_pattern) = "**" {
	old_pattern == "*"
}

config_pattern(old_pattern) = old_pattern {
	old_pattern != "*"
}
```

There are two parts to this policy. We have a `violation` object, which contains line-by-line logic statements. The way Rego works is that the policy will run line by line, and will `break` if any of the conditions don't pass. For instance, if the `resource.type` is **not** "google_project_iam_binding", the rest of the rule will not be checked, and no violation gets reported.

If you're familiar with Golang syntax, a `:=` means *set the variable equal to the value on the right side (declaration, assignment, and redeclaration)*. A `==` means *check the equality of the left and right side*. A `=` is for assignment **only**, meaning that it won't infer the type of the variable. It can be used outside of functions or violations, however.

The violation object will run with every `input.review` object (which is each object in the `resource_changes` array, evaluated one at a time). `input.parameters` will always be the same, as it contains the information passed in the `parameters` section of the constraint (AHA! That's why we use the constraint framework!). In the case of this policy, if the number of matches we get between the input object and the constraint does not match the desired count (determined by whether the mode of the constraint is `allowlist` or `denylist`), the `violation` will spit out a violation message and an object containing metadata.

The second part of the policy is for functions that the policy uses in the violation. You'll see duplicates, like `target_match_count`, for example. These duplicative functions allow us to return a different value based on the input. In the case of `target_match_count`, if the mode is `denylist`, it will return a `0`. So, if we end up getting one hit between our `input.review` object and the constraint, the actual count **will not** equal our desired count, the violation will trigger, and a violation message will be sent to stdout.

### Inlining Rego Policies

This is the easiest part of the development process. Once you've created a new policy, if you've cloned this repo, simply run `make build` to inline the policy into its accompanying constraint template. Remember the first line of the rego policy, `package templates.gcp.TFGCPIAMAllowBanRolesConstraintV1`? You'll need to make sure that you create a constraint template with the `kind` property under spec->crd->spec->names->kind set to `TFGCPIAMAllowBanRolesConstraintV1`. You can see this, and other details, in the ConstraintTemplate that accompanies this policy:

```yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: tfgcpiamallowbanrolesconstraintv1
spec:
  crd:
    spec:
      names:
        kind: TFGCPIAMAllowBanRolesConstraintV1
      validation:
        openAPIV3Schema:
          properties:
            mode:
              description: "Enforcement mode, defaults to allow"
              type: string
              enum: [denylist, allowlist]
            roles:
              description: "Roles to be allowed or banned ex. roles/owner; Wildcards (*) supported"
              type: array
              items:
                type: string
  targets:
    - target: validation.resourcechange.terraform.cloud.google.com
      rego: |
        #INLINE("validator/my_rego_rule.rego")
        ... (rego code)
        #ENDINLINE
```

You can see that the ConstraintTemplate defines a set of properties that constraints built from this template must abide by. Look at a constraint that uses this template to get a better idea of how the two interact:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: TFGCPIAMAllowBanRolesConstraintV1
metadata:
  name: iam-allow-roles
  annotations:
    description: Allow only the listed IAM role bindings to be created. This constraint is member-agnostic.
spec:
  severity: high
  match:
    target: # {"$ref":"#/definitions/io.k8s.cli.setters.target"}
      - "organizations/**"
    exclude: [] # optional, default is no exclusions
  parameters:
    mode: "allowlist"
    roles:
      - "roles/logging.viewer"
      - "roles/resourcemanager.projectIamAdmin"
```

Notice that the constraint defines a set of parameters, including `mode` and `roles`. If you look at the ConstraintTemplate, you can see that these two fields are defined and described. `mode` is a string enum that *must* be either denylist or allowlist. `gcloud beta terraform vet` will actually error out if this is not upheld in the associated constraint(s).

## Additional Resources

Documentation on the Constraint Framework and `gcloud beta terraform vet` is pretty widespread, but this document provides some application-specific information for using the `validation.resourcechange.terraform.cloud.google.com` target for validating terraform resource changes. Documentation on this use case does exist, and can be found [here](https://cloud.google.com/docs/terraform/policy-validation/create-terraform-constraints).

When it comes time to create your own rego policies, I also recommend using the [Rego Playground](https://play.openpolicyagent.org/) to first get the hang of the Rego language. Developing Rego in a local environment is really challenging, as debugging and tracing features must be integrated manually, usually with a log statement like `trace(sprintf("Value: %s", [value]))`. However, the very nature of how Rego runs makes this extremely challenging, because if your violation exits before you even get to a log line, the trace will never be printed to stdout, and you won't easily be able to see what's going wrong in your policies.

In my early development of this policy library, I created a couple of playgrounds in order to get the hang of Rego. Here's [an example](https://play.openpolicyagent.org/p/HzzUikhvQ4) of that, where I tested the logic for validating permissions on IAM Custom Roles. Take a look if you're interested! Although keep in mind that this wouldn't work if brought into the policy library, as this policy is unsupported by the Gatekeeper v1beta1 API version. You would need to change the `deny` rule to `violation` and use updated Rego built-ins, like `object.get` instead of `get_default()`. For me, the output gave me helpful hints on what was wrong with my rules and utility functions.

## Contact Info

Questions or comments? Please contact tdesrosi@google.com for this project, or validator-support@google.com for information about the terraform-validator project.
GCP
Functional Principles of the Constraint Framework You ll notice that this repository contains a handful of folders each with different items It s confusing at first so let s dive into it First let s start with how the library is organized Folder Structure The folder structure below contains a TL DR explanation of each item s purpose We ll go into further detail below bash policy library tf resource change root docs Contains documentation on this library lib Contains shared rego functions policies constraints Contains constraint yaml files templates Contains constraint template yaml files samples constraints Contains sample constraint yaml files not checked at runtime scripts Contains helper scripts to assist with policy development validator Contains rego policies used in constraint template yaml files files ending in test rego are base unit testing files for their associated rego policies test Contains test data constraints used for unit testing Makefile Allows user to use make commands for policy development Basic Operation When you run gcloud beta terrafor vet a number of things happen First the resource being tested ie Your terraform plan json file is translated from its native Terraform schema to Cloud Asset Inventory CAI schema Keep in mind that the data being passed in is exactly the same it s just translated into a language that the validator can understand Because the validator is also set up to perform ongoing policy validation on the environment all resources are cast into the CAI schema to bring all data together Don t be alarmed however our constraints and constraint templates are reworked to tell the validator to only look at terraform resource changes We ll explore how that happens later on Once the data is in CAI format gcloud beta terraform vet then initializes the constraints and templates in your policy library Let s look at what each type of yaml file contains Type Description ConstraintTemplate Describes how to integrate logic from a rego policy into rules that the validator checks This file will describe how constraints that use it should be configured and also provides the rego policy as an inlined yaml string field You ll also notice the target definition which reads as validation resourcechange terraform cloud google com This is the main difference between this policy library and the existing CAI policy library https github com GoogleCloudPlatform policy library This is the part of the library that allows us to target terraform resource changes and skip current environment monitoring Ultimately think of ConstraintTemplates as definitions that our corresponding constraint s must abide by Constraint Implements single rules that depend on a constraint template Remember that constraint templates contain a schema that describes how to write constraints that depend on them The actual constraint will contain the data that you ll be looking to test For example if you want to validate that terraform only creates IAM bindings you would create a constraint that passes in the mode allowlist and a list of members domains in this case that terraform is allowed to create The constraint tells the validator to fail the pipeline if terraform tries to bind an IAM role to a user ouside of the domain s you ve passed in br br You can create multiple constraints for any given ConstraintTemplate you just need to make sure that the rules don t conflict with one another For example any allowlist denylist policy would be difficult to create multiple constraints for The reason is that if one 
constraint is of type `allowlist` and the other is of type `denylist`, it's very easy to introduce overlaps in the sets that you create. By this logic, if domain `a.com` is not a part of your policy member domain allowlist or denylist constraints, it'll automatically be denied. This is because allowlists are more inclusive than denylists, and any domain not in an allowlist will be denied. Even if a domain is not denied by the denylist, it will still be denied by the allowlist.

If this is still confusing to you, the [Gatekeeper documentation](https://github.com/GoogleCloudPlatform/policy-library) provides a detailed explanation of how Constraints and ConstraintTemplates work together. Yes, Gatekeeper is generally synonymous with Kubernetes resources, but it's simply an extension of Open Policy Agent. We use Gatekeeper's API in the terraform validator, and it's how the tool is able to interpret policies from the policy library.

## Developing Terraform Constraints

In order to develop new policies, you must first understand the process for writing Rego policies and how the constraint framework is used. The very first step is to collect some sample data. Terraform-based constraints operate on resource change data, which comes from the `resource_changes` key of [Terraform plan JSON](https://developer.hashicorp.com/terraform/internals/json-format). This will be an array of objects that describe changes that terraform will try to make to the target environment. (A small helper sketch for extracting this data from a plan appears at the end of this document.) Here's an example that comes directly from one of the data files used in one of the unit tests in this library:

```json
{
  "resource_changes": [
    {
      "address": "google_project_iam_binding.iam_service_account_user_12345",
      "mode": "managed",
      "type": "google_project_iam_binding",
      "name": "iam_service_account_user_12345",
      "provider_name": "registry.terraform.io/hashicorp/google",
      "change": {
        "actions": ["create"],
        "before": null,
        "after": {
          "condition": [],
          "members": [
            "serviceAccount:service-12345@notiam.gserviceaccount.com",
            "user:bad@notgoogle.com"
          ],
          "project": "12345",
          "role": "roles/iam.serviceAccountUser"
        },
        "after_unknown": {
          "condition": [],
          "etag": true,
          "id": true,
          "members": [false, false]
        },
        "before_sensitive": false,
        "after_sensitive": {
          "condition": [],
          "members": [false, false]
        }
      }
    }
  ]
}
```

You can start to see the schema of each resource, which will help us with the next step. It's time to write the Rego policy, which you can put under the `validator` folder in the project. Note that Rego policies for terraform resource changes are slightly different than their CAI counterparts. Let's go through a sample policy that allows us to allowlist/denylist certain IAM roles:

```rego
package templates.gcp.TFGCPIAMAllowBanRolesConstraintV1

violation[{
  "msg": message,
  "details": metadata
}] {
  params := input.parameters
  resource := input.review
  resource.type == "google_project_iam_binding"
  not resource.change.actions[0] == "delete"
  role := resource.change.after.role

  matches_found = [r | r = config_pattern(role); glob.match(params.roles[_], [], r)]
  mode := object.get(params, "mode", "allowlist")
  target_match_count(mode, desired_count)
  count(matches_found) != desired_count

  message := output_msg(desired_count, resource.name, role)
  metadata := {"resource": resource.name, "role": role}
}

target_match_count(mode) = 0 {
  mode == "denylist"
}

target_match_count(mode) = 1 {
  mode == "allowlist"
}

output_msg(0, asset_name, role) = msg {
  msg := sprintf("%v is in the banned list of IAM policy for %v", [role, asset_name])
}

output_msg(1, asset_name, role) = msg {
  msg := sprintf("%v is NOT in the allowed list of IAM policy for %v", [role, asset_name])
}

config_pattern(old_pattern) = "**" {
  old_pattern == "*"
}

config_pattern(old_pattern) = old_pattern {
  old_pattern != "*"
}
```

There are two parts to this policy. We have a `violation` object, which contains line-by-line logic statements. The way Rego works is that the policy will run line by line and will break if any of the conditions don't pass. For instance, if the resource type is not `google_project_iam_binding`, the rest of the rule will not be checked and no violation gets reported. If you're familiar with Golang syntax, a `:=` means "set the variable equal to the value on the right side" (declaration, assignment and redeclaration). A `==` means "check the equality of the left and right side". A `=` is for assignment only, meaning that it won't infer the type of the variable; it can be used outside of functions or violations, however.

The `violation` object will run with every `input.review` object, which is each object in the `resource_changes` array, evaluated one at a time. `input.parameters` will always be the same, as it contains the information passed in the `parameters` section of the constraint. AHA! That's why we use the constraint framework. In the case of this policy, if the number of matches we get between the input object and the constraint does not match the desired count (determined by whether the mode of the constraint is `allowlist` or `denylist`), the violation will spit out a violation message and an object containing metadata.

The second part of the policy is for functions that the policy uses in the violation. You'll see duplicates, like `target_match_count`, for example. These duplicative functions allow us to return a different value based on the input. In the case of `target_match_count`, if the mode is of type `denylist`, it will return a 0. So if we end up getting one hit between our `input.review` object and the constraint, the actual count will not equal our desired count, and the violation will trigger and send a violation message to stdout.

## Inlining Rego Policies

This is the easiest part of the development process. Once you've created a new policy, if you've cloned this repo, simply run `make build` to inline the policy into its accompanying constraint template. Remember the first line of the Rego policy, `package templates.gcp.TFGCPIAMAllowBanRolesConstraintV1`? You'll need to make sure that you create a constraint template with the kind property (under `spec.crd.spec.names.kind`) set to `TFGCPIAMAllowBanRolesConstraintV1`. You can see this and other details in the ConstraintTemplate that accompanies this policy:

```yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: tfgcpiamallowbanrolesconstraintv1
spec:
  crd:
    spec:
      names:
        kind: TFGCPIAMAllowBanRolesConstraintV1
      validation:
        openAPIV3Schema:
          properties:
            mode:
              description: "Enforcement mode, defaults to allow"
              type: string
              enum: [denylist, allowlist]
            roles:
              description: "Roles to be allowed or banned, ex. roles/owner. Wildcards supported."
              type: array
              items:
                type: string
  targets:
    - target: validation.resourcechange.terraform.cloud.google.com
      rego: |
        #INLINE("validator/my_rego_rule.rego")
        # rego code
        #ENDINLINE
```

You can see that the ConstraintTemplate defines a set of properties that constraints built from this template must abide by. Look at a constraint that uses this template to get a better idea of how the two interact:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: TFGCPIAMAllowBanRolesConstraintV1
metadata:
  name: iam-allow-roles
  annotations:
    description: Allow only the listed IAM role bindings to be created. This constraint is member-agnostic.
spec:
  severity: high
  match:
    target: # {"$ref":"#/definitions/io.k8s.cli.setters.target"}
      - "organizations/**"
    exclude: [] # optional, default is no exclusions
  parameters:
    mode: allowlist
    roles:
      - roles/logging.viewer
      - roles/resourcemanager.projectIamAdmin
```

Notice that the constraint defines a set of parameters, including `mode` and `roles`. If you look at the ConstraintTemplate, you can see that these two fields are defined and described. `mode` is a string enum that must be either `denylist` or `allowlist`; `gcloud beta terraform vet` will actually error out if this is not upheld in the associated constraint(s).

## Additional Resources

Documentation on the Constraint Framework and `gcloud beta terraform vet` is pretty widespread, but this document provides some application-specific information for using the `validation.resourcechange.terraform.cloud.google.com` target for validating terraform resource changes. Documentation on this use case does exist and can be found [here](https://cloud.google.com/docs/terraform/policy-validation/create-terraform-constraints).

When it comes time to create your own Rego policies, I also recommend using the [Rego Playground](https://play.openpolicyagent.org) to first get a hang of the Rego language. Developing Rego in a local environment is really challenging, as debugging and tracing features must be integrated manually, usually with a log statement like `trace(sprintf("Value: %s", [value]))`. However, the very nature of how Rego runs makes this extremely challenging, because if your violation exits before you even get to a log line, the trace will never be printed to stdout, and you won't easily be able to see what's going wrong in your policies.

In my early development of this policy library, I created a couple of playgrounds in order to get a hang of Rego. [Here's an example](https://play.openpolicyagent.org/p/HzzUikhvQ4) where I tested the logic for validating permissions on IAM Custom Roles. Take a look if you're interested, although keep in mind that this wouldn't work if brought into the policy library, as the policy is unsupported by the Gatekeeper v1beta1 API version. You would need to change the `deny` rule to `violation` and use updated Rego built-ins like `object.get` instead of `get_default`. For me, the output gave me helpful hints on what was wrong with my rules and utility functions.

## Contact Info

Questions or comments? Please contact tdesrosi@google.com for this project, or validator-support@google.com for information about the terraform validator project.
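## Appendix: Collecting sample resource change data

As mentioned in "Developing Terraform Constraints", policy development starts from the `resource_changes` array of a plan JSON. The snippet below is a hypothetical helper, not part of this library, that trims a full plan JSON down to just that array so it can be checked in as a unit-test fixture. It assumes you have already produced the plan JSON with `terraform plan -out=tfplan.binary` followed by `terraform show -json tfplan.binary > plan.json`.

```python
#!/usr/bin/env python3
"""Extract the resource_changes array from a Terraform plan JSON (hypothetical helper)."""
import json
import sys
from pathlib import Path


def extract_resource_changes(plan_path: str, fixture_path: str) -> int:
    """Write only the resource_changes key of the plan JSON to fixture_path."""
    plan = json.loads(Path(plan_path).read_text())
    changes = plan.get("resource_changes", [])
    Path(fixture_path).write_text(json.dumps({"resource_changes": changes}, indent=2))
    return len(changes)


if __name__ == "__main__":
    plan_file, fixture_file = sys.argv[1], sys.argv[2]
    n = extract_resource_changes(plan_file, fixture_file)
    print(f"Wrote {n} resource change(s) to {fixture_file}")
```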
## Overview

Application framework to execute various operations on redis to evaluate key performance metrics such as CPU & memory utilization, bytes transferred, time per command, etc. For the complete list of metrics, refer to [MemoryStore Redis Metrics](https://cloud.google.com/memorystore/docs/redis/supported-monitoring-metrics).

The framework currently supports a few operations (set/get, push & pop) with a sample payload. The payload is encoded in [message pack](https://msgpack.org/index.html) format and compressed using [LZ4](https://lz4.org/) compression. The application can be extended to support additional redis operations, different encodings (e.g. avro, protobuf) and compression formats. Check the [Extending Application](#ExtendApplication) section below for details.

## Getting Started

### System Requirements

* Java 11
* Maven 3
* [gcloud CLI](https://cloud.google.com/sdk/gcloud)

### Building Jar

- Execute the below script from the project root directory to build the uber jar:

```bash
$ ./scripts/build-jar.sh
```

Successful execution of the script will generate the jar in the path: ```{project_dir}/artifacts/redis-benchmarks-${version}.jar```

### Building Docker image

The Docker image can be built and pushed to [google cloud artifact registry](https://cloud.google.com/artifact-registry). Prior to building the docker image, follow the below steps:

- [Enable Artifact Registry](https://cloud.google.com/artifact-registry/docs/enable-service)

- Create a new repository called ```benchmarks``` using the command:

```bash
$ location="us-central1"
$ gcloud artifacts repositories create benchmarks \
    --repository-format=docker \
    --location=${location} \
    --description="Contains docker images to execute benchmark tests."
```

- Execute the below script from the project root directory to build & push the docker image:

```bash
$ ./scripts/build-image.sh
```

Successful execution of the script will push the image to artifact registry.

- The image can be pulled using the below command:

```bash
$ project_id=<<gcp-project>>
$ docker pull us-central1-docker.pkg.dev/${project_id}/benchmarks/redis-benchmark:latest
```

## Executing Application

### Application options

The following options can be supplied while executing the application:

| Name                | Description                                                                                     | Optional | Default Value |
|---------------------|:------------------------------------------------------------------------------------------------|:---------|:--------------|
| project             | Cloud Project identifier                                                                        | N        |               |
| hostname            | Redis host name                                                                                 | Y        | localhost     |
| port                | Redis port number                                                                               | Y        | 6379          |
| runduration_minutes | Amount of time (in minutes) to run the application                                             | Y        | 1             |
| cpu_scaling_factor  | Determines the parallelism (i.e. tasks) by multiplying cpu_scaling_factor with available cores  | Y        | 1             |
| write_ratio         | Determines the percent of writes compared to reads. Default is 20% writes and 80% reads.       | Y        | 0.2           |
| task_types          | Task types to be executed. Can be supplied as 1 or more comma-separated values.                 | N        |               |

### Run application

```bash
$ PROJECT_ID=<<gcp-project>>
$ RUNDURATION_MINUTES=1
$ CPU_SCALING_FACTOR=1
$ WRITE_RATIO=0.2
$ REDIS_HOST=localhost
$ REDIS_PORT=6379
$ TASK_TYPES="SetGet,ListOps"
$ java -jar ./artifacts/redis-benchmarks-1.0.jar \
    --project=${PROJECT_ID} \
    --runduration_minutes=${RUNDURATION_MINUTES} \
    --cpu_scaling_factor=${CPU_SCALING_FACTOR} \
    --write_ratio=${WRITE_RATIO} \
    --task_types=${TASK_TYPES} \
    --hostname=${REDIS_HOST} \
    --port=${REDIS_PORT}
```

### Output

After completion, the application will display the number of writes, reads, cache hits and misses for each task.

![Output.png](img/Output.png)

## <a name="ExtendApplication">Extending Application</a>

The application can be extended to support additional payloads, encodings and compression formats.

### Adding new payload

Create a new class similar to [Profile](./src/main/java/com/google/cloud/pso/benchmarks/redis/model/Profile.java) that implements the `Payload` interface.

### Using different encoding

Create a new class similar to [MessagePack](./src/main/java/com/google/cloud/pso/benchmarks/redis/serde/MessagePack.java) that implements the `EncDecoder` interface.

### Using different Compression

Create a new class similar to [LZ4Compression](./src/main/java/com/google/cloud/pso/benchmarks/redis/compression/LZ4Compression.java) that implements the `Compression` interface.

### Adding additional tasks

Create a new class similar to [SetGetTask](./src/main/java/com/google/cloud/pso/benchmarks/redis/tasks/SetGetTask.java) that implements the `RedisTask` interface.

- Payloads can be generated using [PayloadGenerator](./src/main/java/com/google/cloud/pso/benchmarks/redis/PayloadGenerator.java), which accepts payload type, encoding and compression as input parameters.
- Tasks can be created with appropriate payloads and supplied to the workload executor. Check the **createRedisTask** method in [WorkloadExecutor](./src/main/java/com/google/cloud/pso/benchmarks/redis/WorkloadExecutor.java) for reference.

## Disclaimer
This project is not an official Google project. It is not supported by Google and disclaims all warranties as to its quality, merchantability, or fitness for a particular purpose.
# kafka2avro

This example shows how to use Apache Beam and SCIO to read objects from a Kafka topic, and serialize them encoded as Avro files in Google Cloud Storage.

This example contains two Dataflow pipelines:

* [Object2Kafka](src/main/scala/com/google/cloud/pso/kafka2avro/Object2Kafka.scala): generates a set of objects and writes them to Kafka. This is a batch mode pipeline.
* [Kafka2Avro](src/main/scala/com/google/cloud/pso/kafka2avro/Kafka2Avro.scala): reads objects from Kafka, converts them to Avro, and writes the output to Google Cloud Storage. This is a streaming mode pipeline.

## Configuration

Before compiling and generating your package, you need to change some options in [`src/main/resources/application.conf`](src/main/resources/application.conf):

* `broker`: String with the address of the Kafka brokers.
* `dest-bucket`: The name of the bucket where the Avro files will be written.
* `dest-path`: The directories structure where the Avro files will be written (e.g. blank to write in the top level dir in the bucket, or anything like a/b/c).
* `kafka-topic`: The name of the topic where the objects are written to, or read from.
* `num-demo-objects`: Number of objects that will be generated by the Object2Kafka pipeline; these objects can be read with the Kafka2Avro pipeline to test that everything is working as expected.

The configuration file follows [the HOCON format](https://github.com/lightbend/config/blob/master/README.md#using-hocon-the-json-superset). Here is a sample configuration file with all the options set:

```bash
broker = "1.2.3.4:9092"
dest-bucket = "my-bucket-in-gcs"
dest-path = "persisted/fromkafka/avro/"
kafka-topic = "my_kafka_topic"
num-demo-objects = 500
# comments are allowed in the config file
```

## Pre-requirements

### Build tool

This example is written in Scala and uses SBT as build tool. You need to have SBT >= 1.0 installed. You can download SBT from https://www.scala-sbt.org/

The Scala version is 2.12.8. If you have the JDK > 1.8 installed, SBT should automatically download the Scala compiler.

## Compile

Run `sbt` in the top sources folder. Inside sbt, download all the dependencies:

```
sbt:kafka2avro> update
```

and then compile

```
sbt:kafka2avro> compile
```

## Deploy and run

If you have managed to compile the code, you can generate a JAR package to be deployed on Dataflow, with:

```
sbt:kafka2avro> pack
```

This will generate a set of JAR files in `target/pack/lib`

### Running the Object2Kafka pipeline

This is a batch pipeline, provided just as an example to populate Kafka and test the streaming pipeline.

Once you have generated the JAR file using the `pack` command inside SBT, you can now launch the job in Dataflow to populate Kafka with some demo objects. Using Java 1.8, run the following command. Notice that you have to set the project id, and a location in a GCS bucket to store the JARs imported by Dataflow:

```
CLASSPATH="target/pack/lib/*" java com.google.cloud.pso.kafka2avro.Object2Kafka --exec.mainClass=com.google.cloud.pso.kafka2avro.Object2Kafka --project=YOUR_PROJECT_ID --stagingLocation="gs://YOUR_BUCKET/YOUR_STAGING_LOCATION" --runner=DataflowRunner
```

### Running the Kafka2Avro pipeline

This is a streaming pipeline. It will keep running unless you cancel it. The default windowing policy is to group messages every 2 minutes, in a fixed window. To change the policy, please see [the function `windowIn` in `Kafka2Avro.scala`](src/main/scala/com/google/cloud/pso/kafka2avro/Kafka2Avro.scala#L60-L70).

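The pipeline itself is written in Scala with SCIO, but if it helps to visualize the policy that `windowIn` applies, the sketch below expresses the same 2-minute fixed-window policy with the Beam Python SDK. It is purely illustrative and is not used anywhere in this example; the function name simply mirrors `windowIn`.

```python
# Illustrative only: a 2-minute fixed-window policy expressed with the Beam
# Python SDK (this example project itself uses Scala/SCIO).
import apache_beam as beam
from apache_beam.transforms import window


def window_in(messages: beam.PCollection) -> beam.PCollection:
    """Group elements into fixed, non-overlapping 2-minute windows."""
    return messages | "FixedWindow2Min" >> beam.WindowInto(window.FixedWindows(2 * 60))
```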
Once you have generated the JAR file using the `pack` command inside SBT, you can launch the job in Dataflow to read the objects from Kafka and write them as Avro files to Google Cloud Storage. Using Java 1.8, run the following command. Notice that you have to set the project id, and a location in a GCS bucket to store the JARs imported by Dataflow:

```
CLASSPATH="target/pack/lib/*" java com.google.cloud.pso.kafka2avro.Kafka2Avro --exec.mainClass=com.google.cloud.pso.kafka2avro.Kafka2Avro --project=YOUR_PROJECT_ID --stagingLocation="gs://YOUR_BUCKET/YOUR_STAGING_LOCATION" --runner=DataflowRunner
```

Please remember that the machine running the JAR may need to have connectivity to the Kafka cluster in order to retrieve some metadata, prior to launching the pipeline in Dataflow.

**Remember that this is a streaming pipeline, it will keep running forever until you cancel or stop it.**

### Wrong filenames for some dependencies

In some cases, some dependencies may be downloaded with wrong filenames, for instance containing symbols that need to be escaped. Importing these JARs in the job in Dataflow will fail.

If your Dataflow job fails before it is launched because it cannot copy some dependencies, change the name of the offending files so they don't contain symbols. For instance:

```
mv target/pack/lib/netty-codec-http2-\[4.1.25.Final,4.1.25.Final\].jar target/pack/lib/netty-codec-http2.jar
```

## Continuous Integration

This example includes [a configuration file for Cloud Build](cloudbuild.yaml), so you can use it to run the unit tests with every commit done to your repository.

To use this configuration file:

* Add your sources to a Git repository (either in Bitbucket, Github or Google Cloud Source).
* Configure a trigger in Google Cloud Build linked to your Git repository.
* Set the path for the configuration file to [`cloudbuild.yaml`](cloudbuild.yaml).

The included configuration file will do the following steps:

* Download a cache for Ivy2 from a Google Cloud Storage bucket named YOURPROJECT_cache, where `YOURPROJECT` is your GCP project id.
* Compile and test the Scala code.
* Generate a package.
* Upload the new Ivy2 cache to the same bucket as in the first step.
* Upload the generated package and all its dependencies to a bucket named YOURPROJECT_pkgs, where `YOURPROJECT` is your GCP project id.

So these default steps will try to write to and read from two different buckets in Google Cloud Storage. Please either create these buckets in your GCP project, or change the configuration.

Please note that you need to build and include the `scala-sbt` Cloud Builder in order to use this configuration file:

* Make sure you have the Google Cloud SDK configured with your credentials and project.
* Download the sources from [GoogleCloudPlatform/cloud-builders-community/tree/master/scala-sbt](https://github.com/GoogleCloudPlatform/cloud-builders-community/tree/master/scala-sbt).
* Then, in the `scala-sbt` sources dir, run `gcloud builds submit . --config=cloudbuild.yaml` to add the builder to your GCP project. You only need to do this once.
# Selective deployment

Organizing code across multiple folders within a single version control repository (such as GitHub) is a very common practice; we refer to this as a multi-folder repository. The **Selective deployment** approach lets you find the folders changed within your repository and only run the logic for the changed folders.

With multi-folder repositories, it's possible to combine similar IaC into a single repository and/or centralize the management of multiple business units' code. This approach generally has the following benefits:

- Reduced overhead in managing multiple CI/CD pipelines
- Better code visibility
- Reduced overhead in managing multiple ACLs for similar code

Example multi-folder repository structure

```txt
└── single-repo
    ├── build-files
    │   └── compile.sh
    ├── build.yaml
    └── user-resources
        ├── business-unit1
        │   ├── dev-env
        │   ├── prod-env
        │   └── qa-env
        └── business-unit2
            ├── dev-env
            ├── prod-env
            └── qa-env
```

```
Note: A mono repo is always a multi-folder repository; however, vice versa is not always true.
Check out the https://www.hashicorp.com/blog/terraform-mono-repo-vs-multi-repo-the-great-debate article for more information on mono vs multi repos for IaC.
```

## Solution

This can be addressed in multiple different ways, and our approach is as below:

### Step 1: Find the commit associated with the last successful build.

```sh
nth_successful_commit() {
    local n=$1 # n=1 --> Last successful commit.
    local trigger_name=$2
    local project=$3
    local trigger_id=$(get_trigger_value $trigger_name $project "id")
    local nth_successful_build=$(gcloud builds list --filter "buildTriggerId=$trigger_id AND STATUS=(SUCCESS)" --format "value(id)" --limit=$build_find_limit --project $project | awk "NR==$n") || exit 1
    local nth_successful_commit=$(gcloud builds describe $nth_successful_build --format "value(substitutions.COMMIT_SHA)" --project $project) || exit 1
    echo $nth_successful_commit
}
```

### Step 2: Find the difference between the current commit and the last successful commit.

```sh
previous_commit_sha=$(nth_successful_commit 1 $apply_trigger_name $project) || exit 1
git diff --name-only ${previous_commit_sha} ${commit_sha} | sort -u > $logs_dir/diff.log || exit 1
```

This step will give you a list of files/folders that were modified after the commit associated with the last successful build.

### Step 3: Iterate over changed folders.

You can now iterate over only the changed folders received from Step 2 in the $logs_dir/diff.log file (see the sketch at the end of this document for a standalone way to compute the changed folders).

## Implementation steps

### Pre-requisites

- A cloud source repository (or any other source control repository).
- A cloud build configuration file with basic steps to be executed on a single folder of the repository.

```Note: We are assuming that the pipeline is built on Google cloud build. If the pipeline is built on other platforms, you might need to retrofit this solution accordingly.```

### Setup Cloud Build

Add the right values for the below Cloud Build substitution variables in the `cloudbuild.yaml` file.

```sh
_TF_SA_EMAIL: ''
_PREVIOUS_COMMIT_SHA: ''
_RUN_ALL_PROJECTS: 'false'
```

- `_TF_SA_EMAIL` is the GCP service_account with the necessary IAM permissions that terraform impersonates. The Cloud Build default SA should have [roles/iam.serviceAccountTokenCreator](https://cloud.google.com/iam/docs/service-accounts#token-creator-role) on the `_TF_SA_EMAIL` service_account.
- `_PREVIOUS_COMMIT_SHA` is the github commit_sha that is used for explicitly checking delta changes between this commit and the latest commit, instead of automatically detecting the last successful commit based on a successful Cloud Build execution. This is especially required for the first execution, when there is no successful Cloud Build execution yet.
- `_RUN_ALL_PROJECTS` is to force executing through all folders. Once in a while this is required:
  - for deploying a change that impacts all folders, such as a terraform module commonly used by code in all/multiple folders.
  - for detecting and fixing any configuration drifts, especially when manual changes are performed.

## Important points

Use an unshallow copy of the git clone. Cloud Build in its default behaviour uses a shallow copy of the repository (i.e. only the code associated with the commit with which the current build was triggered). A shallow copy prevents us from performing git operations like git diff. However, we can use the following step in the cloud build to fetch an unshallow copy:

```yaml
- id: 'unshallow'
  name: gcr.io/cloud-builders/git
  args: ['fetch', '--unshallow']
```
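For reference, the "iterate over changed folders" step (Step 3 above) can also be reproduced outside the pipeline, for example to check locally which folders a given pair of commits would trigger. This is a hypothetical helper, not part of the pipeline's own shell scripts; it assumes `git` is on the PATH and that the folders of interest follow the `user-resources/<business-unit>/<env>` layout shown above.

```python
#!/usr/bin/env python3
"""Print the changed folder prefixes between two commits (hypothetical local helper)."""
import subprocess
import sys


def changed_folders(old_sha: str, new_sha: str, depth: int = 3) -> set[str]:
    """Return unique folder prefixes (up to `depth` path segments) touched by the diff."""
    out = subprocess.run(
        ["git", "diff", "--name-only", old_sha, new_sha],
        check=True, capture_output=True, text=True,
    ).stdout
    folders = set()
    for path in out.splitlines():
        parts = path.split("/")
        if len(parts) > 1:  # skip files living at the repository root
            folders.add("/".join(parts[:depth]))
    return folders


if __name__ == "__main__":
    previous_sha, current_sha = sys.argv[1], sys.argv[2]
    for folder in sorted(changed_folders(previous_sha, current_sha)):
        print(folder)
```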
# MLOps with Vertex AI - Infra setup

## Introduction

This example implements the infrastructure required to deploy an end-to-end [MLOps process](https://services.google.com/fh/files/misc/practitioners_guide_to_mlops_whitepaper.pdf) using the [Vertex AI](https://cloud.google.com/vertex-ai) platform.

## GCP resources

A terraform script is provided to set up all the required resources:

- GCP Project to host all the resources
- Isolated VPC network and a subnet to be used by Vertex and Dataflow (using a Shared VPC is also possible).
- Firewall rule to allow the internal subnet communication required by Dataflow
- Cloud NAT required to reach the internet from the different computing resources (Vertex and Dataflow)
- GCS buckets to host Vertex AI and Cloud Build Artifacts.
- BigQuery Dataset where the training data will be stored
- Service account `mlops-[env]@` with the minimum permissions required by Vertex and Dataflow
- Service account `github` to be used by Workload Identity Federation, to federate Github identity.
- Secret to store the Github SSH key to get access to the CI/CD code repo (you will set the secret value later, so it can be used).

![MLOps project description](./images/mlops_projects.png "MLOps project description")

## Pre-requirements

### User groups

User groups provide a stable frame of reference that allows decoupling the final set of permissions from the stage where entities and resources are created, and their IAM bindings defined. These groups should be created before launching Terraform.

We use the following groups to control access to resources:

- *Data Scientists* (gcp-ml-ds@<company.org>). They create ML pipelines in the experimentation environment.
- *ML Engineers* (gcp-ml-eng@<company.org>). They handle and run the different environments, with access to all resources in order to troubleshoot possible issues with pipelines.

These groups are not suitable for production grade environments. You can configure the group names through the `groups` variable.

### Git environment for the ML Pipelines

Clone the Google Cloud Professional Services [repo](https://github.com/GoogleCloudPlatform/professional-services) to a temp directory:

```
git clone https://github.com/GoogleCloudPlatform/professional-services.git
cd professional-services/
```

Setup your new Github repo using the Github web console or CLI.

Copy the `vertex_mlops_enterprise` folder to your local folder, including the Github actions, hidden dirs and files:

```
cp -r ./examples/vertex_mlops_enterprise/ <YOUR LOCAL FOLDER>
```

Commit the files in the main branch (`main`):

```
git init
git add *
git commit -m "first commit"
git branch -M main
git remote add origin https://github.com/<ORG>/<REPO>.git
git push -u origin main
```

You will need to configure the Github organization and repo name in the `github` variable.

### Branches

Create the additional branches in Github (`dev`, `staging`, `prod`). This can also be done from the UI (`Create branch: dev from main`).

Pull the remote repo with `git pull`. Checkout the `dev` branch with `git checkout dev`.

Review the `*.yml` files in the `.github/workflows` folder and modify them if needed. These files should be automatically updated when terraform is launched.

Review the `*.yaml` files in the `build` folder and modify them if needed. These files should be automatically updated when terraform is launched.

## Instructions

### Deploy the different environments

You will need to repeat this process for each one of the different environments (01-development, 02-staging, 03-production):

- Go to the environment folder: i.e. `cd ../terraform/01-dev`
- It is recommended to have a remote state file. In this case, make sure to create the right `providers.tf` file, and set the name of a bucket that you want to use as the storage for your Terraform state. This should be an existing bucket that your user has access to.
- Create a `terraform.tfvars` file and specify the required variables. You can use the `terraform.tfvars.sample` as a starting point:

```tfm
project_create = {
  billing_account_id = "000000-123456-123456"
  parent             = "folders/111111111111"
}
project_id = "creditcards-dev"
```

- Make sure you fill in the following parameters:
  - `project_create.billing_account_id`: Billing account
  - `project_create.parent`: Parent folder where the project will be created.
  - `project_id`: Project id, references existing project if `project_create` is null.
- Make sure you have the right authentication setup (application default credentials, or a service account key)
- Run `terraform init` and `terraform apply`
- It is possible that some errors like `googleapi: Error 400: Service account xxxx does not exist.` appear. This is due to some dependencies with the Project IAM authoritative bindings of the service accounts. In this case, re-run the process with `terraform apply`.

## What's next?

Continue [configuring the GIT integration with Cloud Build](./02-GIT_SETUP.md) and [launching the MLOps pipeline](./03-MLOPS.md).

<!-- BEGIN TFDOC -->
<!-- END TFDOC -->
# Using dbt and Cloud Composer for managing BigQuery example code

dbt (Data Build Tool) is a command-line tool that enables data analysts and engineers to transform data in their warehouses simply by writing select statements. Cloud Composer is a fully managed data workflow orchestration service that empowers you to author, schedule, and monitor pipelines.

This repository demonstrates using dbt to manage tables in BigQuery and using Cloud Composer to schedule the dbt run.

## Code Examples

There are two sets of examples:

1. Basic

   The basic example demonstrates the minimum configuration that you need to run dbt on Cloud Composer.

2. Optimized

   The optimized example demonstrates splitting the dbt run for each model, implementing incremental dbt models, and using the Airflow execution date to handle backfills.

## Technical Requirements

These GCP services will be used in the example code:

- Cloud Composer
- BigQuery
- Google Cloud Storage (GCS)
- Cloud Build
- Google Container Registry (GCR)
- Cloud Source Repository (CSR)

## High Level Flow

This diagram explains the example solution's flow:

<img src="img/dbt-on-cloud-composer-diagram.PNG" width="700">

1. The code starts from a dbt project stored in a repository. (The example is under the [basic or optimized]/dbt-project folder)
2. Any changes to the dbt project will trigger a Cloud Build run
3. Cloud Build will create/update an image in GCR, and export dbt docs to GCS
4. The Airflow DAG is deployed to Cloud Composer (The example is under the [basic or optimized]/dag folder)
5. The dbt run is triggered using KubernetesPodOperator, which pulls the image from step \#3
6. At the end of the process the BigQuery objects will be created/updated (i.e. datasets and tables)

## How to run

### Prerequisites

1. Cloud Composer environment
   https://cloud.google.com/composer/docs/how-to/managing/creating
2. Set 3 ENVIRONMENT VARIABLES in Cloud Composer (AIRFLOW_VAR_BIGQUERY_LOCATION, AIRFLOW_VAR_RUN_ENVIRONMENT, AIRFLOW_VAR_SOURCE_DATA_PROJECT)
   https://cloud.google.com/composer/docs/how-to/managing/environment-variables
3. Cloud Source Repository (or any other git provider)
   Store the code from dbt-project in this dedicated repository. The repository should contain the dbt_project.yml file (check the example code under the [basic or optimized]/dbt-project folder).
   Note that the dedicated dbt-project repository is not this example code repository (github repo).
4. Cloud Build triggers
   Trigger the build from the dbt project repository
   https://cloud.google.com/build/docs/automating-builds/create-manage-triggers
   Set the trigger's substitution variables: _GCS_BUCKET and _DBT_SERVICE_ACCOUNT.
   _GCS_BUCKET: A GCS bucket id for storing dbt documentation files.
   _DBT_SERVICE_ACCOUNT: A service account to run dbt from Cloud Build.
5. BigQuery API enabled
6. Service account to run dbt commands
7. Kubernetes Secret to be bound with the service account
   https://cloud.google.com/kubernetes-engine/docs/concepts/secret
   Alternatively, instead of using a Kubernetes Secret, Workload Identity federation can be used (recommended approach). More details in the **Authentication** section below.

### Profiles for running the dbt project

Check /dbt-project/.dbt/profiles.yml; you will find 2 options to run dbt:

1. local

   You can run the dbt project using your local machine or Cloud Shell.
   To do that, run

   ```
   gcloud auth application-default login
   ```

   Trigger the dbt run by using this command:

   ```
   dbt run --vars '{"project_id": [Your Project id], "bigquery_location": "us", "execution_date": "1970-01-01","source_data_project": "bigquery-public-data"}' --profiles-dir .dbt
   ```

2. remote

   This option is for running dbt using a service account, for example from Cloud Build and Cloud Composer.
   Check cloudbuild.yaml and dag/dbt_with_kubernetes.py to see how to use this option.

### Run the code

After all the prerequisites are prepared, you will have:

1. A dbt-project repository
2. An Airflow DAG to run the dbt project

Here are the follow-up steps for running the code:

1. Push the code to the dbt-project repository and make sure the Cloud Build is triggered and successfully creates the docker image
2. In the Cloud Composer UI, run the DAG (e.g. dbt_with_kubernetes.py)
3. If successful, check the BigQuery console to verify the tables

With this mechanism, you have 2 independent runs. Updating the dbt-project, including models, schema and configurations, will run the Cloud Build to create the docker image. The DAG, as the dbt scheduler, will run the dbt-project from the latest docker image available.

### Passing variables from Cloud Composer to dbt run

You can pass variables from Cloud Composer to the dbt run. As an example, in this code we configure the BigQuery dataset location in the US as part of the DAG (see the sketch at the end of this document for one way to wire these variables into the dbt command).

```
default_dbt_vars = {
    "project_id": project,
    # Example on using Cloud Composer's variable to be passed to dbt
    "bigquery_location": Variable.get("bigquery_location"),
    "key_file_dir": '/var/secrets/google/key.json',
    "source_data_project": Variable.get("source_data_project")
}
```

In the dbt script, you can use the variable like this:

```
location: "{{ var('bigquery_location') }}"
```

### Authentication

When provisioning the dbt runtime environment using KubernetesPodOperator, there are two available options for authentication of the dbt process. To achieve better separation of concerns and follow good security practices, the identity of the dbt process (Service Account) should be different than the Cloud Composer Service Account.

Authentication options are:

- Workload Identity federation [**Recommended**]

  A better way to manage identity and authentication for K8s workloads is to avoid using SA keys as Secrets and use the Workload Identity federation mechanism [[documentation](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity)].
  Depending on the Composer version, you might need to enable Workload Identity in the cluster and configure the node pool to use the GKE_METADATA metadata server to request short-lived auth tokens.

- Service Account key stored as Kubernetes Secret

  Create the SA key using the command below (Note: SA keys are very sensitive and easy to misuse, which can be a security risk. They should be kept protected and only be used under special circumstances.)

  ```bash
  gcloud iam service-accounts keys create key-file \
      --iam-account=sa-name@project-id.iam.gserviceaccount.com
  ```

  Then save the key json file as *key.json* and configure the *kubectl* command line tool to access the GKE cluster used by the Cloud Composer environment.
  ```bash
  gcloud container clusters get-credentials gke-cluster-name --zone cluster-zone --project project-name
  ```

  Once authenticated, create the dbt secret in the default namespace by running

  ```
  kubectl create secret generic dbt-sa-secret --from-file key.json=./key.json
  ```

  From then on, in the DAG code, when creating a container, the service account key will be extracted from the K8s Secret, mounted under the /var/secrets/google path in the container filesystem, and made available to dbt at runtime.

- Composer 1

  To enable Workload Identity on a new cluster, run the following command:

  ```bash
  gcloud container clusters create CLUSTER_NAME \
      --region=COMPUTE_REGION \
      --workload-pool=PROJECT_ID.svc.id.goog
  ```

  Create a new node pool (recommended) or update the existing one (might break the airflow setup and require extra steps):

  ```bash
  gcloud container node-pools create NODEPOOL_NAME \
      --cluster=CLUSTER_NAME \
      --workload-metadata=GKE_METADATA
  ```

- Composer 2

  No further actions are required, as GKE is already using Workload Identity and, thanks to the Autopilot mode, there's no need to manage the node pool manually.

To let the DAG use Workload Identity, the following steps are required:

1) Create a namespace for the Kubernetes service account:

```bash
kubectl create namespace NAMESPACE
```

2) Create a Kubernetes service account for your application to use:

```bash
kubectl create serviceaccount KSA_NAME \
    --namespace NAMESPACE
```

3) Assuming that the dbt-sa already exists and has the right permissions to trigger BigQuery jobs, a special binding has to be added to allow the Kubernetes service account to act as the IAM service account:

```bash
gcloud iam service-accounts add-iam-policy-binding GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]"
```

4) Using the *kubectl* tool, annotate the Kubernetes service account with the email address of the IAM service account:

```bash
kubectl annotate serviceaccount KSA_NAME \
    --namespace NAMESPACE \
    iam.gke.io/gcp-service-account=GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com
```

To make use of Workload Identity in our DAG, replace the existing KubernetesPodOperator call with one that uses Workload Identity.

1) Composer 1

Use the example configuration from the snippet below:

```python
KubernetesPodOperator(
    (...)
    namespace='dbt-namespace',
    service_account_name="dbt-k8s-sa",
    affinity={
        'nodeAffinity': {
            'requiredDuringSchedulingIgnoredDuringExecution': {
                'nodeSelectorTerms': [{
                    'matchExpressions': [{
                        'key': 'cloud.google.com/gke-nodepool',
                        'operator': 'In',
                        'values': [
                            'dbt-pool',
                        ]
                    }]
                }]
            }
        }
    }
    (...)
).execute(context)
```

The affinity configuration lets GKE schedule the pod in one of the specific node-pools that are set up to use Workload Identity.

2) Composer 2

In the case of Composer 2 (Autopilot), the configuration is simpler, for example:

```python
KubernetesPodOperator(
    (...)
    namespace='dbt-tasks',
    service_account_name="dbt-k8s-sa"
    (...)
).execute(context)
```

When using the Workload Identity option there is no need to store the IAM SA key as a Secret in GKE, which greatly reduces maintenance effort and is generally considered more secure, as there is no need to generate and export the SA key.
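For completeness, here is a hedged sketch of how the `default_dbt_vars` from the "Passing variables from Cloud Composer to dbt run" section could be serialized into the `--vars` argument of the dbt container. This is not the DAG shipped with this example; the image name, task id, namespace and the KubernetesPodOperator import path are assumptions that depend on your setup and Airflow provider version.

```python
# Hypothetical sketch, not the example's actual DAG: wiring Composer variables
# into the dbt command via KubernetesPodOperator's cmds/arguments.
# Note: newer cncf.kubernetes provider versions expose the operator under
# airflow.providers.cncf.kubernetes.operators.pod instead.
import json

from airflow.models import Variable
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import (
    KubernetesPodOperator,
)


def make_dbt_run_task(project: str, image: str) -> KubernetesPodOperator:
    """Build a dbt-run task that passes Composer variables to dbt via --vars."""
    dbt_vars = {
        "project_id": project,
        "bigquery_location": Variable.get("bigquery_location"),
        "source_data_project": Variable.get("source_data_project"),
    }
    return KubernetesPodOperator(
        task_id="dbt-run",
        name="dbt-run",
        namespace="dbt-tasks",             # assumption: namespace from the Composer 2 snippet above
        service_account_name="dbt-k8s-sa",  # assumption: the Workload Identity KSA created earlier
        image=image,                        # assumption: the image built by Cloud Build
        cmds=["dbt"],
        arguments=["run", "--vars", json.dumps(dbt_vars), "--profiles-dir", ".dbt"],
    )
```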
GCP
# Using dbt and Cloud Composer for managing BigQuery example code

DBT (Data Building Tool) is a command-line tool that enables data analysts and engineers to transform data in their warehouses simply by writing select statements. Cloud Composer is a fully managed data workflow orchestration service that empowers you to author, schedule, and monitor pipelines. This repository demonstrates using dbt to manage tables in BigQuery and using Cloud Composer to schedule the dbt runs.

## Code Examples

There are two sets of examples:

1. **Basic**: demonstrates the minimum configuration that you need to run dbt on Cloud Composer.
2. **Optimized**: demonstrates optimizations such as splitting the dbt run per model, implementing incremental dbt models, and using the Airflow execution date to handle backfills.

## Technical Requirements

These GCP services will be used in the example code:

- Cloud Composer
- BigQuery
- Google Cloud Storage (GCS)
- Cloud Build
- Google Container Registry (GCR)
- Cloud Source Repository (CSR)

## High Level Flow

This diagram explains the example solution's flow:

<img src="img/dbt-on-cloud-composer-diagram.PNG" width="700">

1. The code starts from a dbt project stored in a repository. The example is under the basic or optimized `dbt_project` folder.
2. Any change to the dbt project will trigger a Cloud Build run.
3. Cloud Build will create or update an image in GCR and export the dbt docs to GCS.
4. The Airflow DAG is deployed to Cloud Composer. The example is under the basic or optimized `dag` folder.
5. The dbt run is triggered using `KubernetesPodOperator`, which pulls the image from step 3.
6. At the end of the process, the BigQuery objects (i.e. datasets and tables) will be created or updated.

## How to run

### Prerequisites

1. A [Cloud Composer environment](https://cloud.google.com/composer/docs/how-to/managing/creating)
2. Set 3 [environment variables](https://cloud.google.com/composer/docs/how-to/managing/environment-variables) in the Cloud Composer environment: `AIRFLOW_VAR_BIGQUERY_LOCATION`, `AIRFLOW_VAR_RUN_ENVIRONMENT`, and `AIRFLOW_VAR_SOURCE_DATA_PROJECT`
3. A Cloud Source Repository (or any git provider). Store the dbt project code in this dedicated repository. The repository should contain the `dbt_project.yml` file; check the example code under the basic or optimized `dbt_project` folder. Note that the dedicated dbt project repository is not this example code repository.
4. [Cloud Build triggers](https://cloud.google.com/build/docs/automating-builds/create-manage-triggers) that build from the dbt project repository. Set the trigger's substitution variables `GCS_BUCKET` and `DBT_SERVICE_ACCOUNT`:
   - `GCS_BUCKET`: a GCS bucket id for storing dbt documentation files
   - `DBT_SERVICE_ACCOUNT`: a service account to run dbt from Cloud Build
5. BigQuery API enabled
6. A service account to run dbt commands
7. A [Kubernetes Secret](https://cloud.google.com/kubernetes-engine/docs/concepts/secret) bound to the service account. Alternatively, instead of using a Kubernetes Secret, Workload Identity federation can be used (recommended approach); more details in the Authentication section below.

### Profiles for running the dbt project

If you check `profiles.yml` in the dbt project, you will find two options to run dbt:

1. **local**: You can run the dbt project using your local machine or Cloud Shell. To do that, run `gcloud auth application-default login` and trigger the dbt run with this command:
   ```bash
   dbt run --vars '{"project_id": "[your-project-id]", "bigquery_location": "us", "execution_date": "1970-01-01", "source_data_project": "bigquery-public-data"}' --profiles-dir .dbt
   ```
2. **remote**: This option is for running dbt using a service account, for example from Cloud Build and Cloud Composer. Check `cloudbuild.yaml` and `dag/dbt_with_kubernetes.py` to see how to use this option.

### Run the code

After all the prerequisites are prepared, you will have:

1. A dbt project repository
2. An Airflow DAG to run the dbt project

Here are the follow-up steps for running the code:

1. Push the code to the dbt project repository and make sure the Cloud Build trigger runs and successfully creates the docker image.
2. In the Cloud Composer UI, run the DAG (e.g. `dbt_with_kubernetes.py`).
3. If successful, check the BigQuery console to verify the tables.

With this mechanism you have two independent runs:

- Updating the dbt project (including models, schema, and configurations) will run Cloud Build to create the docker image.
- The DAG, as the dbt scheduler, will run the dbt project from the latest docker image available.

## Passing variables from Cloud Composer to dbt run

You can pass variables from Cloud Composer to the dbt run. As an example, in this code we configure the BigQuery dataset location in the US as part of the DAG:

```python
default_dbt_vars = {
    "project_id": project,
    # Example of using a Cloud Composer variable to be passed to dbt
    "bigquery_location": Variable.get("bigquery_location"),
    "key_file_dir": "/var/secrets/google/key.json",
    "source_data_project": Variable.get("source_data_project"),
}
```

In the dbt script, you can use the variable, for example to set the BigQuery dataset `location`, like this: `{{ var('bigquery_location') }}`.
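Putting these pieces together, the DAG passes the variables above to `dbt run` through `KubernetesPodOperator`. The following is only a rough sketch of that idea, not the repository's actual `dbt_with_kubernetes.py`; the image name, namespace, schedule, and profiles directory are placeholder assumptions, and import paths vary between Airflow/provider versions:

```python
import json
from datetime import datetime

from airflow import DAG
from airflow.models import Variable
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

# Values taken from the Cloud Composer environment variables described above.
dbt_vars = {
    "project_id": "your-project-id",                           # placeholder
    "bigquery_location": Variable.get("bigquery_location"),
    "source_data_project": Variable.get("source_data_project"),
    "execution_date": "{{ ds }}",  # rendered by Airflow, lets dbt handle backfills
}

with DAG(
    dag_id="dbt_with_kubernetes_sketch",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    run_dbt = KubernetesPodOperator(
        task_id="run-dbt",
        name="run-dbt",
        namespace="default",                                   # placeholder
        image="gcr.io/your-project-id/dbt-image:latest",       # image built by Cloud Build (placeholder)
        cmds=["dbt"],
        arguments=["run", "--vars", json.dumps(dbt_vars), "--profiles-dir", ".dbt"],
    )
```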
## Authentication

When provisioning the dbt runtime environment using `KubernetesPodOperator`, there are two available options for authenticating the dbt process. To achieve better separation of concerns and follow good security practices, the identity (Service Account) of the dbt process should be different from the Cloud Composer Service Account. The authentication options are:

* **Workload Identity federation (recommended)**: A better way to manage identity and authentication for K8s workloads is to avoid using SA keys as Secrets and use the Workload Identity federation mechanism ([documentation](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity)). Depending on the Composer version, you might need to enable Workload Identity in the cluster and configure the node pool to use the `GKE_METADATA` metadata server to request short-lived auth tokens.

* **Service Account key stored as a Kubernetes Secret**: Create the SA key using the command below. Note: SA keys are very sensitive and easy to misuse, which can be a security risk. They should be kept protected and only be used under special circumstances.

  ```bash
  gcloud iam service-accounts keys create key-file --iam-account=sa-name@project-id.iam.gserviceaccount.com
  ```

  Then save the key file as `key.json` and configure the `kubectl` command-line tool to access the GKE cluster used by the Cloud Composer environment:

  ```bash
  gcloud container clusters get-credentials gke-cluster-name --zone cluster-zone --project project-name
  ```

  Once authenticated, create the dbt secret in the default namespace by running:

  ```bash
  kubectl create secret generic dbt-sa-secret --from-file=key.json=key.json
  ```

  From then on, in the DAG code, when the container is created the service account key will be extracted from the Kubernetes Secret and mounted under the `/var/secrets/google` path in the container filesystem, where it is available to dbt at runtime.

### Composer 1

To enable Workload Identity on a new cluster, run the following command:

```bash
gcloud container clusters create CLUSTER_NAME --region=COMPUTE_REGION --workload-pool=PROJECT_ID.svc.id.goog
```

Create a new node pool (recommended) or update the existing one (which might break the Airflow setup and require extra steps):

```bash
gcloud container node-pools create NODEPOOL_NAME --cluster=CLUSTER_NAME --workload-metadata=GKE_METADATA
```

### Composer 2

No further actions are required, as the GKE cluster is already using Workload Identity and, thanks to the Autopilot mode, there is no need to manage the node pool manually.

To let the DAG use Workload Identity, the following steps are required:

1. Create a namespace for the Kubernetes service account:
   ```bash
   kubectl create namespace NAMESPACE
   ```
2. Create a Kubernetes service account for your application to use:
   ```bash
   kubectl create serviceaccount KSA_NAME --namespace NAMESPACE
   ```
3. Assuming that the dbt SA already exists and has the right permissions to trigger BigQuery jobs, a special binding has to be added to allow the Kubernetes service account to act as the IAM service account:
   ```bash
   gcloud iam service-accounts add-iam-policy-binding GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com \
     --role roles/iam.workloadIdentityUser \
     --member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]"
   ```
4. Using the `kubectl` tool, annotate the Kubernetes service account with the email address of the IAM service account:
   ```bash
   kubectl annotate serviceaccount KSA_NAME --namespace NAMESPACE iam.gke.io/gcp-service-account=GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com
   ```

To make use of Workload Identity in our DAG, replace the existing `KubernetesPodOperator` call with one that uses Workload Identity.

1. **Composer 1**: Use the example configuration from the snippet below:
   ```python
   KubernetesPodOperator(
       namespace=dbt_namespace,
       service_account_name="dbt-k8s-sa",
       affinity={
           "nodeAffinity": {
               "requiredDuringSchedulingIgnoredDuringExecution": {
                   "nodeSelectorTerms": [{
                       "matchExpressions": [{
                           "key": "cloud.google.com/gke-nodepool",
                           "operator": "In",
                           "values": ["dbt-pool"],
                       }]
                   }]
               }
           }
       },
   ).execute(context)
   ```
   The affinity configuration lets GKE schedule the pod in one of the specific node pools that are set up to use Workload Identity.
2. **Composer 2**: In the case of Composer 2 (Autopilot), the configuration is simpler, for example:
   ```python
   KubernetesPodOperator(
       namespace="dbt-tasks",
       service_account_name="dbt-k8s-sa",
   ).execute(context)
   ```

When using the Workload Identity option there is no need to store the IAM SA key as a Secret in GKE, which greatly reduces maintenance effort and is generally considered more secure, as there is no need to generate and export the SA key.
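For comparison, the Kubernetes Secret option described earlier is typically wired into the operator by mounting the secret as a volume. This is only a sketch under the assumptions that the secret is the `dbt-sa-secret` created above and that the image and namespace are placeholders; the `Secret` import path differs between Airflow/provider versions:

```python
from airflow.kubernetes.secret import Secret
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

# Mount key.json from the dbt-sa-secret Secret at /var/secrets/google in the pod.
dbt_sa_key = Secret(
    deploy_type="volume",
    deploy_target="/var/secrets/google",
    secret="dbt-sa-secret",
    key="key.json",
)

run_dbt = KubernetesPodOperator(
    task_id="run-dbt",
    name="run-dbt",
    namespace="default",                              # placeholder
    image="gcr.io/your-project-id/dbt-image:latest",  # placeholder
    cmds=["dbt"],
    arguments=["run"],
    secrets=[dbt_sa_key],
    # Point client libraries and dbt at the mounted key file.
    env_vars={"GOOGLE_APPLICATION_CREDENTIALS": "/var/secrets/google/key.json"},
)
```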
# Kubeflow Fairing Examples

`Kubeflow Fairing` is a Python package that streamlines the process of building, training, and deploying machine learning (ML) models in a hybrid cloud environment. By using Kubeflow Fairing and adding a few lines of code, you can run your ML training job locally or in the cloud, directly from Python code or a Jupyter notebook. After your training job is complete, you can use Kubeflow Fairing to deploy your trained model as a prediction endpoint.

In this repo, we provide three notebooks to demonstrate the usage of Kubeflow Fairing:

- Fairing_XGBoost: this notebook demonstrates how to
  * Train an XGBoost model in a local notebook,
  * Train an XGBoost model remotely on a Kubeflow cluster, with Kubeflow Fairing
  * Train an XGBoost model remotely on AI Platform training, with Kubeflow Fairing
  * Deploy a trained model to Kubeflow, and call the deployed endpoint for predictions, with Kubeflow Fairing
- Fairing_Tensorflow_Keras: this notebook demonstrates how to
  * Train a Keras model in a local notebook,
  * Train a Keras model remotely on a Kubeflow cluster (distributed), with Kubeflow Fairing
  * Train a Keras model remotely on AI Platform training, with Kubeflow Fairing
  * Deploy a trained model to Kubeflow, with Kubeflow Fairing
- Fairing_Py_File: this notebook introduces you to using Kubeflow Fairing to train models that are developed using TensorFlow or Keras and enclosed in Python files
  * Train a TensorFlow model remotely on a Kubeflow cluster (distributed), with Kubeflow Fairing
  * Train a TensorFlow model remotely on AI Platform training, with Kubeflow Fairing

**Note that Kubeflow Fairing doesn't require a Kubeflow cluster as a prerequisite. Kubeflow Fairing + AI Platform is a valid combination.**

## Setup

### Prerequisites

Before you follow the instructions below to deploy your own Kubeflow cluster, you need a Google Cloud project. If you don't have one, you can find detailed instructions [here](https://cloud.google.com/dataproc/docs/guides/setup-project).

- Make sure the following APIs & services are enabled:
  * Cloud Storage
  * Cloud Machine Learning Engine
  * Cloud Source Repositories API (for CI/CD integration)
  * Compute Engine API
  * GKE API
  * IAM API
  * Deployment Manager API

- Configure the project id and bucket id as environment variables:

```bash
$ export PROJECT_ID=[your-google-project-id]
$ export GCP_BUCKET=[your-google-cloud-storage-bucket-name]
$ export DEPLOYMENT_NAME=[your-deployment-name]
```

- Deploy a Kubeflow cluster on GCP. Running training and serving jobs on Kubeflow requires a Kubeflow deployment. Please refer to the link [here](https://www.kubeflow.org/docs/gke/deploy/) to set up your Kubeflow deployment in your environment.

### Setup Environment

Please refer to the link [here](https://www.kubeflow.org/docs/fairing/gcp-local-notebook/) to properly set up the environment.
The key steps are summarized as follows:

- Create a service account

```bash
export SA_NAME=[service-account-name]
gcloud iam service-accounts create ${SA_NAME}
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
    --member serviceAccount:${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com \
    --role 'roles/editor'
gcloud iam service-accounts keys create ~/key.json \
    --iam-account ${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com
```

- Authorize Docker to access the container registry

```bash
gcloud auth configure-docker
```

- Update the local kubeconfig (for submitting jobs to the Kubeflow cluster)

```bash
export CLUSTER_NAME=${DEPLOYMENT_NAME} # this is the deployment name or the kubernetes cluster name
export ZONE=us-central1-c
gcloud container clusters get-credentials ${CLUSTER_NAME} --region ${ZONE}
```

- Set the environment variable `GOOGLE_APPLICATION_CREDENTIALS`

```bash
export GOOGLE_APPLICATION_CREDENTIALS=~/key.json
```

- Install the latest version of Fairing

```bash
pip install git+https://github.com/kubeflow/fairing@master
```

### Running Notebooks

Please note that the above configuration is required for notebook services running outside the Kubeflow environment. The examples demonstrated here are fully tested on notebook services outside the Kubeflow cluster, which could be:

- A notebook running on your personal computer
- A notebook on AI Platform, Google Cloud Platform
- Essentially a notebook in any environment outside the Kubeflow cluster

For notebooks running inside the Kubeflow cluster (for example, JupyterHub deployed together with Kubeflow), the environment variables (e.g. service account, project, etc.) should have been pre-configured while setting up the cluster. The Fairing package will also be pre-installed together with the deployment. **The only thing to be aware of is that Docker is usually not installed, which requires `cluster` as the builder option, as explained in the following section.**

## Concepts of Kubeflow Fairing

There are three major concepts in Kubeflow Fairing: preprocessor, builder, and deployer.

### Preprocessor

The preprocessor defines how Kubeflow Fairing will map a set of inputs to a context when building the container image for your training job. The preprocessor can convert input files, exclude some files, and change the entrypoint for the training job.

* **python**: Copies the input files directly into the container image.
* **notebook**: Converts a notebook into a runnable python file. Strips out the non-python code.
* **full_notebook**: Runs a full notebook as-is, including bash scripts or non-Python code.
* **function**: FunctionPreProcessor preprocesses a single function. It sets as the command a function_shim that calls the function directly.

### Builder

The builder defines how Kubeflow Fairing will build the container image for your training job, and the location of the container registry to store the container image in. There are different strategies that will make sense for different environments and use cases.

* **append**: Creates a Dockerfile by appending your code as a new layer on an existing docker image. This builder requires less time to create a container image for your training job, because the base image is not pulled to create the image and only the differences are pushed to the container image registry.
* **cluster**: Builds the container image for your training job in the Kubernetes cluster. This option is useful for building jobs in environments where a Docker daemon is not present, for example a hosted notebook.
* **docker**: Uses a local docker daemon to build and push the container image for your training job to your container image registry.

### Deployer

The deployer defines where Kubeflow Fairing will deploy and run your training job. The deployer uses the image produced by the builder to deploy and run your training job on Kubeflow or Kubernetes.

* **Job**: Uses a Kubernetes Job resource to launch your training job.
* **TfJob**: Uses the TFJob component of Kubeflow to launch your TensorFlow training job.
* **GCPJob**: Handles submitting training jobs to GCP.
* **Serving**: Serves a prediction endpoint using Kubernetes deployments and services.
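As a rough illustration of how the preprocessor, builder, and deployer come together, the sketch below packages a training function and runs it as a Kubernetes Job. It is only a sketch: the registry and base image are placeholders, and argument names can differ slightly between Fairing releases.

```python
from kubeflow import fairing

DOCKER_REGISTRY = "gcr.io/your-project-id/fairing-job"  # placeholder registry


def train():
    # Your training code (e.g. fitting an XGBoost or Keras model) goes here.
    print("training...")


# preprocessor: package the function; builder: append a layer onto a base image;
# deployer: run the resulting image as a Kubernetes Job on the Kubeflow cluster.
fairing.config.set_preprocessor("function", function_obj=train)
fairing.config.set_builder(
    "append",
    registry=DOCKER_REGISTRY,
    base_image="gcr.io/your-project-id/fairing-base:latest",  # placeholder base image
)
fairing.config.set_deployer("job")

if __name__ == "__main__":
    fairing.config.run()
```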
<!--
Copyright 2022 Google LLC

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# Deploy the Custom Dataflow template by following these steps

## Overview

The purpose of this walkthrough is to create [Custom Dataflow templates](https://cloud.google.com/dataflow/docs/concepts/dataflow-templates). The value of Custom Dataflow templates is that they allow us to execute Dataflow jobs without installing any code. This is useful to enable Dataflow execution using an automated process or to enable others without technical expertise to run jobs via a user-friendly guided user interface.

## 1. Select or create project

**It is recommended to go through this walkthrough using a new temporary Google Cloud project, unrelated to any of your existing Google Cloud projects.**

Select or create a project to begin.

<walkthrough-project-setup></walkthrough-project-setup>

## 2. Set default project

```sh
gcloud config set project <walkthrough-project-id/>
```

## 3. Setup environment

Best practice recommends that a Dataflow job:

1) Utilize a worker service account to access the pipeline's files and resources
2) Have only the minimally necessary IAM permissions granted to the worker service account
3) Use only the minimally required Google Cloud services

Therefore, this step will:

- Create service accounts
- Provision IAM credentials
- Enable required Google Cloud services

Run the terraform workflow in the [infrastructure/01.setup](infrastructure/01.setup) directory. Terraform will ask your permission before provisioning resources. If you agree with terraform provisioning resources, type `yes` to proceed.

```sh
DIR=infrastructure/01.setup
terraform -chdir=$DIR init
terraform -chdir=$DIR apply -var='project=<walkthrough-project-id/>'
```

## 4. Provision network

Best practice recommends that a Dataflow job:

1) Utilize a custom network and subnetwork
2) Use only the minimally necessary network firewall rules
3) Building Python custom templates additionally requires the use of a [Cloud NAT](https://cloud.google.com/nat/docs/overview); per best practice we execute the Dataflow job using private IPs

Therefore, this step will:

- Provision a custom network and subnetwork
- Provision firewall rules
- Provision a Cloud NAT and its dependent Cloud Router

Run the terraform workflow in the [infrastructure/02.network](infrastructure/02.network) directory. Terraform will ask your permission before provisioning resources. If you agree with terraform provisioning resources, type `yes` to proceed.

```sh
DIR=infrastructure/02.network
terraform -chdir=$DIR init
terraform -chdir=$DIR apply -var='project=<walkthrough-project-id/>'
```

## 5. Provision data pipeline IO resources

The Apache Beam example that our Dataflow template executes is a derived word count for both [Java](https://github.com/apache/beam/blob/master/examples/java/src/main/java/org/apache/beam/examples/MinimalWordCount.java) and [Python](https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/wordcount_minimal.py). The word count example requires a source [Google Cloud Storage](https://cloud.google.com/storage) bucket.
To make the example interesting, we copy all the files from `gs://apache-beam-samples/shakespeare/*` to a custom bucket in our project.

Therefore, this step will:

- Provision a Google Cloud Storage bucket
- Create Google Cloud Storage objects to read from in the pipeline

Run the terraform workflow in the [infrastructure/03.io](infrastructure/03.io) directory. Terraform will ask your permission before provisioning resources. If you agree with terraform provisioning resources, type `yes` to proceed.

```sh
DIR=infrastructure/03.io
terraform -chdir=$DIR init
terraform -chdir=$DIR apply -var='project=<walkthrough-project-id/>'
```
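For reference, the pipeline packaged by the custom template is essentially a word count over those files. A heavily simplified Beam (Python) sketch of the same idea, with placeholder bucket paths, looks like this; the actual template code in this example repository is more complete:

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def run():
    # When launched from the template, Dataflow supplies the pipeline options.
    options = PipelineOptions()
    with beam.Pipeline(options=options) as p:
        (
            p
            | "Read" >> beam.io.ReadFromText("gs://your-bucket/shakespeare/*")      # placeholder
            | "Split" >> beam.FlatMap(lambda line: line.split())
            | "Count" >> beam.combiners.Count.PerElement()
            | "Format" >> beam.MapTuple(lambda word, count: f"{word}: {count}")
            | "Write" >> beam.io.WriteToText("gs://your-bucket/output/wordcounts")  # placeholder
        )


if __name__ == "__main__":
    run()
```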
## 6. Provision the Dataflow template builder

We will use [Cloud Build](https://cloud.google.com/build) to build the custom Dataflow template. There are advantages to using Cloud Build to build our custom Dataflow template, instead of performing the necessary commands on our local machine. Cloud Build connects to our version control, [GitHub](https://GitHub.com) in this example, so that any changes made to a specific branch will automatically trigger a new build of our Dataflow template.

Therefore, this step will:

- Provision a Cloud Build trigger that will:
  1. Run the language-specific build process, i.e. gradle shadowJar, go build, etc.
  2. Execute the `gcloud dataflow flex-template` command with relevant arguments.

### 6.1. Fork the repository into your own GitHub organization or personal account

In order to benefit from [Cloud Build](https://cloud.google.com/build), the service requires that we own this repository; it will not work with an arbitrary repository, even if it is public. Therefore, complete these steps before proceeding:

1) [Fork the repository](https://github.com/GoogleCloudPlatform/professional-services/fork)
2) [Connect forked repository to Cloud Build](https://console.cloud.google.com/cloud-build/triggers/connect)

### 6.2 Execute terraform module to provision Cloud Build trigger

First, set your GitHub organization or username:

```sh
GITHUB_REPO_OWNER=<change me>
```

Next, set expected defaults. (_Note: Normally it makes sense to default terraform variables instead of doing this._)

```sh
GITHUB_REPO_NAME=professional-services
WORKING_DIR_PREFIX=examples/dataflow-custom-templates
```

Run the terraform workflow in the [infrastructure/04.template](infrastructure/04.template) directory. Terraform will ask your permission before provisioning resources. If you agree with terraform provisioning resources, type `yes` to proceed.

```sh
DIR=infrastructure/04.template
terraform -chdir=$DIR init
terraform -chdir=$DIR apply -var="project=$(gcloud config get-value project)" -var="github_repository_owner=$GITHUB_REPO_OWNER" -var="github_repository_name=$GITHUB_REPO_NAME" -var="working_dir_prefix=$WORKING_DIR_PREFIX"
```

## 7. Run Cloud Build Trigger

Navigate to [cloud-build/triggers](https://console.cloud.google.com/cloud-build/triggers). You should see a Cloud Build trigger listed for each language of this example. Click the `RUN` button next to the created Cloud Build trigger to execute the custom template Cloud Build trigger for your language of choice manually. See [Create Manual Triggers](https://cloud.google.com/build/docs/automating-builds/create-manual-triggers?hl=en#running_manual_triggers) for more information.

This step will take several minutes to complete.

## 8. Execute the Dataflow Template

### 1. Start the Dataflow Job creation form

There are multiple ways to run a Dataflow Job from a custom template. We will use the Google Cloud Web UI. To start the process, navigate to [dataflow/createjob](https://console.cloud.google.com/dataflow/createjob).

### 2. Select Custom Template

Select `Custom Template` from the `Dataflow template` drop down menu. Then, click the `BROWSE` button and navigate to the bucket with the name that starts with `dataflow-templates-`. Within this bucket, select the JSON file object that represents the template details. You should see a JSON file for each of the Cloud Build triggers you ran to create the custom template.

### 3. Complete Dataflow Job template UI form

The Google Cloud console will further prompt for required fields such as the Job name and any required fields for the custom Dataflow template.

### 4. Run the template

When you are satisfied with the values provided to the custom Dataflow template, click the `RUN` button.

### 5. Monitor the Dataflow Job

Navigate to [dataflow/jobs](https://console.cloud.google.com/dataflow/jobs) to locate the job you just created. Clicking on the job will let you navigate to the job monitoring screen.
# Entities creation and update for Dialogflow

This module is an example of how to create and update entities for Dialogflow.

## Recommended Reading

[Entities Options](https://cloud.google.com/dialogflow/docs/entities-options)

## Technology Stack

1. Cloud Storage
1. Cloud Functions
1. Dialogflow

## Programming Language

Python 3

## Project Structure

```
.
└── dialogflow_webhook_bank_example
    ├── main.py           # Implementation of examples of how to load entities in Dialogflow
    ├── entities.json     # File with the entities to be loaded, in JSON format
    ├── requirements.txt  # Required libraries for this example
```

## Setup Instructions

### Project Setup

How to set up your project for this example can be found [here](https://cloud.google.com/dialogflow/docs/quick/setup).

### Dialogflow Agent Setup

Build an agent by following the instructions [here](https://cloud.google.com/dialogflow/docs/quick/build-agent).

### Cloud Storage Setup

Upload the entities.json file to a bucket by following the instructions [here](https://cloud.google.com/storage/docs/quickstart-console#create_a_bucket).

### Cloud Functions Setup

This implementation is deployed on GCP using Cloud Functions. More info [here](https://cloud.google.com/functions/docs/concepts/overview).

To run the Python scripts on GCP, the `gcloud` command-line tool from the Google Cloud SDK is needed. Refer to the [installation](https://cloud.google.com/sdk/install) page for the appropriate instructions depending on your platform. Note that this project has been tested on a Unix-based environment.

After installing, make sure to initialize your Cloud project:

```
$ gcloud init
```

## Usage

### Create entities one by one

Use `EntityTypesClient.create_entity_type` to create entities one by one.

#### More Info

[EntityType proto](https://github.com/googleapis/googleapis/blob/551cf1e6e3addcc63740427c4f9b40dedd3dac27/google/cloud/dialogflow/v2/entity_type.proto#L200)

[Client for Dialogflow API - EntityTypeClient.create_entity_type](https://dialogflow-python-client-v2.readthedocs.io/en/latest/_modules/dialogflow_v2/gapic/entity_types_client.html#EntityTypesClient.create_entity_type)

#### Example

Run the sample using the gcloud utility as follows:

```
$ gcloud functions call entities_builder --data '{
  "entities": [{
      "display_name": "saving-account-types",
      "kind": "KIND_MAP",
      "entities": [{
          "value": "saving-account-types",
          "synonyms": [ "saving", "saving account", "child saving", "IRA", "CD", "student saving"]
      }]
  }, {
      "display_name": "checking-account-types",
      "kind": "KIND_MAP",
      "entities": [{
          "value": "checking-account-types",
          "synonyms": [ "checking", "checking account", "student checking account", "student account", "business checking account", "business account" ]
      }]
  }, {
      "display_name": "account_types",
      "kind": "KIND_LIST",
      "entities": [
        { "value": "@saving-account-types:saving-account-types", "synonyms": [ "@saving-account-types:saving-account-types" ] },
        { "value": "@checking-account-types:checking-account-types", "synonyms": [ "@checking-account-types:checking-account-types" ] },
        { "value": "@sys.date-period:date-period @saving-account-types:saving-account-types", "synonyms": [ "@sys.date-period:date-period @saving-account-types:saving-account-types" ] },
        { "value": "@sys.date-period:date-period @checking-account-types:checking-account-types", "synonyms": [ "@sys.date-period:date-period @checking-account-types:checking-account-types" ] }
      ]
  }]
}'
```
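The deployed Cloud Function wraps the Dialogflow client calls linked above. For orientation, a minimal sketch of the same one-by-one call made directly from Python might look like the following; it assumes the `dialogflow_v2` client library referenced above and a placeholder project id (newer `google-cloud-dialogflow` releases use slightly different call signatures):

```python
import dialogflow_v2

PROJECT_ID = "your-project-id"  # placeholder

client = dialogflow_v2.EntityTypesClient()
parent = client.project_agent_path(PROJECT_ID)

# A map entity type with one entity entry and its synonyms.
entity_type = {
    "display_name": "saving-account-types",
    "kind": "KIND_MAP",
    "entities": [
        {"value": "saving-account-types",
         "synonyms": ["saving", "saving account", "IRA", "CD"]},
    ],
}

response = client.create_entity_type(parent, entity_type)
print(response.name)
```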
### Create entities in batch

Use `EntityTypesClient.batch_update_entity_types` to create or update entities in batch.

#### More Info

[Client for Dialogflow API - EntityTypeClient.batch_update_entity_types](https://dialogflow-python-client-v2.readthedocs.io/en/latest/_modules/dialogflow_v2/gapic/entity_types_client.html#EntityTypesClient.batch_update_entity_types)

[EntityTypeBatch proto](https://github.com/googleapis/googleapis/blob/551cf1e6e3addcc63740427c4f9b40dedd3dac27/google/cloud/dialogflow/v2/entity_type.proto#L533)

[BatchUpdateEntityTypesRequest proto](https://github.com/googleapis/googleapis/blob/master/google/cloud/dialogflow/v2/entity_type.proto#L397)

#### Examples

##### Using entity_type_batch_uri

The URI to a Google Cloud Storage file containing entity types to update or create. The URI must start with "gs://". The entities.json file is an example of a JSON-format file that can be uploaded to GCS and passed to the function.

```
$ gcloud functions call entities_builder --data '{ "bucket": "gs://<bucket_name>/entities.json"}'
```

##### Using entity_type_batch_inline

For each entity type in the batch:

- The `name` is the unique identifier of the entity type
- If `name` is specified, we update an existing entity type.
- If `name` is not specified, we create a new entity type.

```
$ gcloud functions call entities_builder --data '{
  "entities_batch": {
    "entity_types": [
      {
        "name": "5201cee0-ddfb-4f7c-ae94-fff87189d13c",
        "display_name": "saving-account-types",
        "kind": "KIND_MAP",
        "entities": [{
            "value": "saving-account-types",
            "synonyms": [ "saving", "saving account", "child saving", "IRA", "CD", "student saving", "senior saving"]
        }]
      },
      {
        "display_name": "checking-account-types",
        "kind": "KIND_MAP",
        "entities": [{
            "value": "checking-account-types",
            "synonyms": [ "checking", "checking account", "student checking account", "student account", "business checking account", "business account" ]
        }]
      },
      {
        "display_name": "account_types",
        "kind": "KIND_LIST",
        "entities": [
          { "value": "@saving-account-types:saving-account-types", "synonyms": [ "@saving-account-types:saving-account-types" ] },
          { "value": "@checking-account-types:checking-account-types", "synonyms": [ "@checking-account-types:checking-account-types" ] },
          { "value": "@sys.date-period:date-period @saving-account-types:saving-account-types", "synonyms": [ "@sys.date-period:date-period @saving-account-types:saving-account-types" ] },
          { "value": "@sys.date-period:date-period @checking-account-types:checking-account-types", "synonyms": [ "@sys.date-period:date-period @checking-account-types:checking-account-types" ] }
        ]
      }
    ]
  }
}'
```
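A sketch of the same batch call made directly from Python, using a file in Cloud Storage, could look like this (the bucket name is a placeholder, the `dialogflow_v2` library is assumed as above, and the call returns a long-running operation):

```python
import dialogflow_v2

PROJECT_ID = "your-project-id"                   # placeholder
ENTITIES_URI = "gs://your-bucket/entities.json"  # placeholder, must start with gs://

client = dialogflow_v2.EntityTypesClient()
parent = client.project_agent_path(PROJECT_ID)

# Returns a long-running operation; block until the batch update finishes.
operation = client.batch_update_entity_types(parent, entity_type_batch_uri=ENTITIES_URI)
result = operation.result()
print(result)
```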
# Entities Definition

```
└── main
    ├── creates a map entity
    ├── creates a composite entity
    ├── updates a map entity
```

Below is the definition of the entities.

## Map entities

#### entity name: saving-account-types

Define synonyms: true

```
{
    "value": "saving-account-types",
    "synonyms": [ "saving", "saving account", "child saving", "IRA", "CD" ]
}
```

#### entity name: checking-account-types

Define synonyms: true

```
{
    "value": "checking-account-types",
    "synonyms": [ "checking", "checking account", "student checking account", "student account", "business checking account", "business account" ]
}
```

## Composite entities

### entity name: account-types

```
{
    "value": "@saving-account-type:saving-account-type",
    "synonyms": [ "@saving-account-type:saving-account-type" ]
},
{
    "value": "@checking-account-type:checking-account-type",
    "synonyms": [ "@checking-account-type:checking-account-type" ]
},
{
    "value": "@sys.date-period:date-period @saving-account-type:saving-account-type",
    "synonyms": [ "@sys.date-period:date-period @saving-account-type:saving-account-type" ]
},
{
    "value": "@sys.date-period:date-period @checking-account-type:checking-account-type",
    "synonyms": [ "@sys.date-period:date-period @checking-account-type:checking-account-type" ]
}
```

# References

[Client for Dialogflow API](https://dialogflow-python-client-v2.readthedocs.io/en/latest/gapic/v2/api.html#dialogflow_v2.EntityTypesClient)

[EntityType proto](https://github.com/googleapis/googleapis/blob/master/google/cloud/dialogflow/v2/entity_type.proto)

[Protocol Buffers Tutorial](https://developers.google.com/protocol-buffers/docs/pythontutorial)
# Certificate Authority Service Demo

This repository contains sample Terraform resource definitions for deploying several related Certificate Authority Service (CAS) resources to Google Cloud. It demonstrates several Certificate Authority Service features and provides examples of Terraform configuration for the following CAS features:

* Root and Subordinate CA provisioning
* Automatic Subordinate CA activation using a Root CA in CAS
* Configuration example for manual Subordinate CA activation
* Multi-regional Subordinate CA deployment
* CA configuration with Cloud HSM signing keys, including an example for imported keys
* Application team domain ownership validation using CAS Certificate Templates and conditional IAM policies
* CAS CA Pool throughput scaling and a load test script for certificate request load generation
* CAS API activation

The following diagram shows the resources being deployed by this project and the resulting CA hierarchy structure:

![Demo Deployment](images/deployment.png?raw=true)

## Pre-requisites

The deployment presumes and relies upon an existing Google Cloud project with an attached active Billing account. To perform a successful deployment, your Google Cloud account needs to have the `Project Editor` role in the target Google Cloud project.

Update the Google Cloud project id in the [terraform.tfvars](./terraform.tfvars) file by setting the `project_id` variable to the id of the target Google Cloud project before proceeding with the execution.

## Demonstration

The Terraform project in this repository defines a number of input variables that can either be edited in the `variables.tf` file directly or passed on the Terraform command line.

The project deploys the Google Cloud resources by default into the regions defined by the `location1` and `location2` variables. You can change that by passing alternative values in the `terraform.tfvars.sample` file and copying it to the `terraform.tfvars` file.

Initialize Terraform and deploy the Google Cloud resources:

```
terraform init
terraform plan
terraform apply
```

### Provisioned resources

The created CAS resources become visible in the Certificate Authority Service [section](https://console.cloud.google.com/security/cas/caPools) of the Cloud Console.

### Domain ownership validation

1. The ACME and non-ACME service accounts get created in [cas-template.tf](./cas-template.tf)
2. [Load the Python cryptography library](https://cloud.google.com/kms/docs/crypto#macos)
```
export CLOUDSDK_PYTHON_SITEPACKAGES=1
```
3. Set environment variables to the desired values, for example:
```
export PROJECT_ID=my_project_id
export LOCATION=europe-west3
export CA_POOL=acme-sub-pool-europe
```
4. The non-ACME account should NOT be able to create a certificate in the acme.com domain:
```
gcloud privateca certificates create \
  --issuer-location ${LOCATION} \
  --issuer-pool ${CA_POOL} \
  --generate-key \
  --key-output-file .cert.key \
  --cert-output-file .cert.crt \
  --dns-san "team1.acme.com" \
  --template "projects/${PROJECT_ID}/locations/${LOCATION}/certificateTemplates/acme-sub-ca-europe-template" \
  --impersonate-service-account "non-acme-team-sa@${PROJECT_ID}.iam.gserviceaccount.com"
```
5. The non-ACME account should NOT be able to create a certificate in the other domain, example.com:
```
gcloud privateca certificates create \
  --issuer-location ${LOCATION} \
  --issuer-pool ${CA_POOL} \
  --generate-key \
  --key-output-file .cert.key \
  --cert-output-file .cert.crt \
  --dns-san "team1.example.com" \
  --template "projects/${PROJECT_ID}/locations/${LOCATION}/certificateTemplates/acme-sub-ca-europe-template" \
  --impersonate-service-account "non-acme-team-sa@${PROJECT_ID}.iam.gserviceaccount.com"
```
6. The ACME account should be able to create a certificate in the acme.com domain:
```
gcloud privateca certificates create \
  --issuer-location ${LOCATION} \
  --issuer-pool ${CA_POOL} \
  --generate-key \
  --key-output-file .cert.key \
  --cert-output-file .cert.crt \
  --dns-san "team1.acme.com" \
  --template "projects/${PROJECT_ID}/locations/${LOCATION}/certificateTemplates/acme-sub-ca-europe-template" \
  --impersonate-service-account "acme-team-sa@${PROJECT_ID}.iam.gserviceaccount.com"
```
7. The ACME account should NOT be able to create a certificate in the other domain, example.com:
```
gcloud privateca certificates create \
  --issuer-location ${LOCATION} \
  --issuer-pool ${CA_POOL} \
  --generate-key \
  --key-output-file .cert.key \
  --cert-output-file .cert.crt \
  --dns-san "team1.example.com" \
  --template "projects/${PROJECT_ID}/locations/${LOCATION}/certificateTemplates/db-sub-ca-europe-template" \
  --impersonate-service-account "acme-team-sa@${PROJECT_ID}.iam.gserviceaccount.com"
```

### Scaling the CA Pool

1. Set environment variables:
```
export PROJECT_ID=my_project_id
export LOCATION=europe-west3
export CA_POOL=acme-sub-pool-europe
export CONCURRENCY=2
export QPS=50
export TIME=15s
```
2. Run the load test:
```
./load-cas.sh $PROJECT_ID $LOCATION $CA_POOL $CONCURRENCY $QPS $TIME
```
The test uses [Fortio](https://github.com/fortio/fortio) to call the CAS API concurrently over HTTPS to generate dummy certificates, simulating load on the CA Pool. The test will run for the time duration defined by the `TIME` environment variable. Check the outcome:
```
142.251.39.106:443: 3
172.217.168.234:443: 3
142.251.36.42:443: 3
216.58.214.10:443: 3
172.217.23.202:443: 3
142.250.179.138:443: 3
216.58.208.106:443: 3
142.251.36.10:443: 3
142.250.179.202:443: 3
142.250.179.170:443: 3
Code 200 : 248 (89.2 %)
Code 429 : 30 (10.8 %)
Response Header Sizes : count 278 avg 347.91367 +/- 121 min 0 max 390 sum 96720
Response Body/Total Sizes : count 278 avg 6981.3165 +/- 2237 min 550 max 7763 sum 1940806
All done 278 calls (plus 4 warmup) 224.976 ms avg, 16.8 qps
```
Notice the portion of the requests returning a 429 error, which indicates that the load exceeds the current CA Pool throughput limit.

3. Add an additional Subordinate CA to the CA Pool. To do that, rename `cas-scaling.tf.sample` to `cas-scaling.tf` and run:
```
terraform apply --auto-approve
```
4. Run the load test again:
```
./load-cas.sh $PROJECT_ID $LOCATION $CA_POOL $CONCURRENCY $QPS $TIME
```
and check the outcome:
```
IP addresses distribution:
142.250.179.170:443: 1
172.217.23.202:443: 1
142.251.39.106:443: 1
172.217.168.202:443: 1
Code 200 : 258 (100.0 %)
Response Header Sizes : count 258 avg 390 +/- 0 min 390 max 390 sum 100620
Response Body/Total Sizes : count 258 avg 7759.624 +/- 1.497 min 7758 max 7763 sum 2001983
All done 258 calls (plus 4 warmup) 233.180 ms avg, 17.1 qps
```
Notice that there are no 429 errors in the responses anymore and the load can now be handled.
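If you want to drive requests against the pool programmatically instead of (or in addition to) the Fortio script, a rough sketch using the `google-cloud-private-ca` Python client is shown below. The pool values match the environment variables above; the CSR file is an assumption (generate one however you prefer, e.g. with openssl), and unlike the gcloud examples above this sketch does not use a certificate template or service account impersonation:

```python
from google.cloud import privateca_v1
from google.protobuf import duration_pb2

PROJECT_ID = "my_project_id"      # matches $PROJECT_ID above
LOCATION = "europe-west3"         # matches $LOCATION above
CA_POOL = "acme-sub-pool-europe"  # matches $CA_POOL above

client = privateca_v1.CertificateAuthorityServiceClient()
parent = client.ca_pool_path(PROJECT_ID, LOCATION, CA_POOL)

# Assumes a pre-generated CSR, e.g. created with: openssl req -new -nodes ...
with open("request.csr") as f:
    pem_csr = f.read()

certificate = privateca_v1.Certificate(
    pem_csr=pem_csr,
    lifetime=duration_pb2.Duration(seconds=3600),
)

response = client.create_certificate(parent=parent, certificate=certificate)
print(response.pem_certificate)
```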
### Manual Subordinate CA activation

The [sub-activation.tf.sample](./sub-activation.tf.sample) file contains an example Terraform configuration for manual Subordinate CA activation. Use the Certificate Signing Request from the Terraform run output to get it signed by an external Root Certificate Authority. Set the `pem_ca_certificate` and `subordinate_config.pem_issuer_chain` fields in [ca.tf](./modules/cas-ca/ca.tf) to the files obtained from the issuer.

## Clean up

To clean up and free the Google Cloud resources created with this project, you can either

* delete the Google Cloud project with all created resources, or
* run the following command:

```
terraform destroy
```

It is not possible to create new CAS resources with the same resource ids as were used before, even if they were deleted by the `terraform destroy` command. A new deployment attempt into the same Google Cloud project needs to use new resource names. Modify the values of the following variables in the [terraform.tfvars](./terraform.tfvars) file before running the demo deployment again (see the example after this list):

* `root_pool_name`
* `sub_pool1_name`
* `sub_pool2_name`
* `root_ca_name`
* `sub_ca1_name`
* `sub_ca2_name`
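For illustration, a minimal sketch of such an edit written as a shell heredoc; the names are hypothetical placeholders, and this assumes the remaining variables keep their defaults from `variables.tf`:

```
# Illustrative only: overwrite terraform.tfvars with new (hypothetical) CAS resource names
# for the next deployment run. Adjust the values and keep any other variables you rely on.
cat > terraform.tfvars <<'EOF'
project_id     = "my_project_id"
root_pool_name = "demo-root-pool-2"
sub_pool1_name = "acme-sub-pool-europe-2"
sub_pool2_name = "acme-sub-pool-us-2"
root_ca_name   = "demo-root-ca-2"
sub_ca1_name   = "acme-sub-ca-europe-2"
sub_ca2_name   = "acme-sub-ca-us-2"
EOF
```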
# Dataflow Streaming Benchmark

When developing Dataflow pipelines, it's common to want to benchmark them at a specific QPS using fake or generated data. This pipeline takes a QPS parameter and a path to a schema file, and publishes fake JSON messages matching the schema to a Pub/Sub topic at the specified QPS.

## Pipeline

[StreamingBenchmark](src/main/java/com/google/cloud/pso/pipeline/StreamingBenchmark.java) - A streaming pipeline which generates messages at a specified rate to a Pub/Sub topic. The messages are generated according to a schema template which instructs the pipeline how to populate the messages with fake data compliant to constraints.

> Note the number of workers executing the pipeline must be large enough to support the supplied
> QPS. Use a general rule of 2,500 QPS per core in the worker pool when configuring your pipeline.

![Pipeline DAG](img/pipeline-dag.png "Pipeline DAG")

## Getting Started

### Requirements

* Java 8
* Maven 3

### Building the Project

Build the entire project using the maven compile command.

```sh
mvn clean compile
```

### Creating the Schema File

The schema file used to generate JSON messages with fake data is based on the [json-data-generator](https://github.com/vincentrussell/json-data-generator) library. This library allows for the structuring of a sample JSON schema and the injection of common faker functions to instruct the data generator what type of fake data to create in each field. See the json-data-generator [docs](https://github.com/vincentrussell/json-data-generator) for more information on the faker functions.

#### Message Attributes

If the message schema contains fields matching (case-insensitive) the following names, then such fields will be added to the output Pub/Sub message attributes: `eventId`, `eventTimestamp`. Attribute fields can be helpful in various scenarios such as deduplicating messages, inspecting message timestamps, etc.

#### Example Schema File

Below is an example schema file which generates fake game event payloads with random data. The faker tags shown are illustrative json-data-generator functions; adjust them to your own schema.

```javascript
{
  "eventId": "{{uuid()}}",
  "eventTimestamp": {{timestamp()}},
  "ipv4": "{{ipv4()}}",
  "ipv6": "{{ipv6()}}",
  "country": "{{country()}}",
  "username": "{{username()}}",
  "quest": "{{random("A Break In the Ice")}}",
  "score": {{integer(100, 10000)}},
  "completed": {{bool()}}
}
```

#### Example Output Data

Based on the above schema, the below would be an example of a message which would be output to the Pub/Sub topic.

```javascript
{
  "eventId": "5dacca34-163b-42cb-872e-fe3bad7bffa9",
  "eventTimestamp": 1537729128894,
  "ipv4": "164.215.241.55",
  "ipv6": "e401:58fc:93c5:689b:4401:206f:4734:2740",
  "country": "Montserrat",
  "username": "asellers",
  "quest": "A Break In the Ice",
  "score": 2721,
  "completed": false
}
```

Since the schema includes the reserved field names of `eventId` and `eventTimestamp`, the output Pub/Sub message will also contain these fields in the message attributes in addition to the regular payload.
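Before executing the pipeline (next section), the schema file must be available in Cloud Storage and the target Pub/Sub topic must exist. A minimal sketch with hypothetical bucket, file and topic names:

```sh
# Upload the schema file to GCS and create the target Pub/Sub topic.
# The names below are placeholders; reuse them for SCHEMA_LOCATION and PUBSUB_TOPIC later.
gsutil cp game-event-schema.json gs://<bucket>/dataflow/schemas/game-event-schema.json
gcloud pubsub topics create <topic-id>
```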
### Executing the Pipeline

```bash
# Set the pipeline vars
PROJECT_ID=<project-id>
BUCKET=<bucket>
PIPELINE_FOLDER=gs://${BUCKET}/dataflow/pipelines/streaming-benchmark
SCHEMA_LOCATION=gs://<path-to-schema-location-in-gcs>
PUBSUB_TOPIC=projects/$PROJECT_ID/topics/<topic-id>

# Set the desired QPS
QPS=50000

# Set the runner
RUNNER=DataflowRunner

# Compute engine zone
ZONE=us-east1-d

# Build the template
mvn compile exec:java \
  -Dexec.mainClass=com.google.cloud.pso.pipeline.StreamingBenchmark \
  -Dexec.cleanupDaemonThreads=false \
  -Dexec.args=" \
    --project=${PROJECT_ID} \
    --stagingLocation=${PIPELINE_FOLDER}/staging \
    --tempLocation=${PIPELINE_FOLDER}/temp \
    --runner=${RUNNER} \
    --zone=${ZONE} \
    --autoscalingAlgorithm=THROUGHPUT_BASED \
    --maxNumWorkers=5 \
    --qps=${QPS} \
    --schemaLocation=${SCHEMA_LOCATION} \
    --topic=${PUBSUB_TOPIC}"
```
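To spot-check that messages are actually being published while the pipeline runs, you can attach a temporary subscription to the topic and pull a few messages; the subscription name below is a hypothetical placeholder:

```sh
# Create a temporary subscription on the benchmark topic and pull a few messages.
gcloud pubsub subscriptions create benchmark-peek --topic=<topic-id>
gcloud pubsub subscriptions pull benchmark-peek --limit=5 --auto-ack

# Remove the temporary subscription afterwards so it does not retain the generated load.
gcloud pubsub subscriptions delete benchmark-peek
```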
# Webhook example

This module is a webhook example for Dialogflow. An agent created in Dialogflow is connected to this webhook, which runs in Cloud Functions. The webhook also connects to Cloud Firestore to get the user information used in the example.

## Recommended Reading

[Dialogflow Fulfillment Overview](https://cloud.google.com/dialogflow/docs/fulfillment-overview).

## Technology Stack

1. Cloud Firestore
1. Cloud Functions
1. Dialogflow

## Libraries

1. Pandas
1. Google Cloud Firestore
1. Dialogflow

## Programming Language

Python 3

## Project Structure

```
.
└── dialogflow_webhook_bank_example
    ├── main.py              # Implementation of the webhook
    ├── intents_config.yaml  # Configuration of the intent from dialogflow
    ├── agent.zip            # Configuration of the agent for this example in dialogflow
    ├── requirements.txt     # Required libraries for this example
```

## Setup Instructions

### Project Setup

How to set up your project for this example can be found [here](https://cloud.google.com/dialogflow/docs/quick/setup).

### Dialogflow Agent Setup

1. Build an agent by following the instructions [here](https://cloud.google.com/dialogflow/docs/quick/build-agent).
1. Once the agent is built, go to settings ⚙ and, under the Export and Import tab, choose the option RESTORE FROM ZIP.
1. Follow the instructions to restore the agent from agent.zip.

### Cloud Functions Setup

This implementation is deployed on GCP using Cloud Functions. More info [here](https://cloud.google.com/functions/docs/concepts/overview).

To run the Python scripts on GCP, the `gcloud` command-line tool from the Google Cloud SDK is needed. Refer to the [installation](https://cloud.google.com/sdk/install) page for the appropriate instructions depending on your platform.

Note that this project has been tested on a Unix-based environment.

After installing, make sure to initialize your Cloud project:

`$ gcloud init`

### Cloud Firestore Setup

A quick start for Cloud Firestore can be found [here](https://cloud.google.com/firestore/docs/quickstart-servers).

#### How to add data

This example connects to a Cloud Firestore database with a collection following this specification:

    Root collection users => document_id NXJn5wTqWXwiTuc5tdun =>
    {
      'first_name': 'Pedro',
      'last_name': 'Perez',
      'accounts': {
        'saving': {
          'transactions': [
            {'type': 'deposit', 'amount': 20},
            {'type': 'deposit', 'amount': 90}
          ],
          'balance': 110},
        'checking': {
          'transactions': [
            {'type': 'deposit', 'amount': 50},
            {'type': 'withdraw', 'amount': '-10'}
          ],
          'balance': 150}
      },
      'user_id': 123456
    }

Examples of how to add data to a collection can be found [here](https://cloud.google.com/firestore/docs/quickstart-servers#add_data).

    from google.cloud import firestore

    user_dict = {
        u'user_id': u'123456',
        u'first_name': u'Pedro',
        u'last_name': u'Perez',
        u'accounts': {
            u'checking': {
                u'transactions': [
                    {u'amount': 50, u'type': u'deposit'},
                    {u'type': u'withdraw', u'amount': u'-10'}
                ],
                u'balance': 150
            },
            u'saving': {
                u'transactions': [
                    {u'amount': 20, u'type': u'deposit'},
                    {u'type': u'deposit', u'amount': 90}
                ],
                u'balance': 110
            }
        }
    }

    db = firestore.Client()
    db.collection(u'users').document(user_dict['user_id']).set(user_dict)

## Deployment

    $ gcloud functions deploy dialogflow_webhook_bank --runtime python37 --trigger-http --allow-unauthenticated
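After deployment, you can fetch the function's trigger URL and send it a test request directly. A sketch assuming a 1st-gen Cloud Function and a hypothetical `request.json` file containing a Dialogflow webhook payload (the payload shape is shown in the gcloud example further below):

```
# Look up the HTTPS trigger URL of the deployed (1st gen) function...
URL=$(gcloud functions describe dialogflow_webhook_bank --format='value(httpsTrigger.url)')

# ...and POST a Dialogflow-style webhook request to it.
curl -X POST "$URL" -H "Content-Type: application/json" -d @request.json
```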
## Usage

### Dialogflow Agent Example

    [User] Hi, Hello, I need assistance
    [Agent] Welcome to our bank! Can I have your user id?
    [User] <Give an invalid user_id number> user_id 12345
    ↳ [Agent] Sorry I could not find your user_id. Can you try again?
    [User] <Give a valid user_id number> user id 123456
    ↳ [Agent] What can I do for you?
      ↳ [User] Check my balance, Verify my balance, balance
        ↳ [Agent] Here are your account balances. <List of all account balances from firebase>
          [Agent] What else can I do for you? - Follow up
      ↳ [User] All my transactions, transactions
        ↳ [Agent] Here are all the transactions that I found. <List of all the transactions from firebase>
          [Agent] What else can I do for you? - Follow up
      ↳ [User] Deposit transactions, credits, deposits
        ↳ [Agent] Here are all the deposit transactions that I found. <List of deposit transactions in firebase>
          [Agent] What else can I do for you? - Follow up
      ↳ [User] I am done, thanks, bye
        ↳ [Agent] Have a nice day!

### Running the sample from Dialogflow console

In [Dialogflow's console](https://console.dialogflow.com), in the simulator on the right, query your Dialogflow agent with `I need assistance` and respond to the questions your Dialogflow agent asks.

### Running the sample using the gcloud util

Example:

    $ gcloud functions call dialogflow_webhook_bank --data '{
      "responseId": "ec0be141-e09a-4dca-b445-4e811ad4999b-ab1309b0",
      "queryResult": {
        "queryText": "123456 user id",
        "action": "welcome.welcome-custom",
        "parameters": {
          "user_id": 123456
        },
        "allRequiredParamsPresent": true,
        "fulfillmentText": "What can I do for you?",
        "fulfillmentMessages": [
          {
            "text": {
              "text": [
                "What can I do for you?"
              ]
            }
          }
        ],
        "outputContexts": [
          {
            "name": "projects/<project-id>/agent/sessions/e7f62474-fd2c-3ca0-dfa6-73d3ed2ab17f/contexts/user_id_action-followup",
            "lifespanCount": 5,
            "parameters": {
              "user_id": 123456,
              "user_id.original": "123456"
            }
          },
          {
            "name": "projects/<project-id>/agent/sessions/e7f62474-fd2c-3ca0-dfa6-73d3ed2ab17f/contexts/welcome-followup",
            "lifespanCount": 1,
            "parameters": {
              "user_id": 123456,
              "user_id.original": "123456"
            }
          },
          {
            "name": "projects/<project-id>/agent/sessions/e7f62474-fd2c-3ca0-dfa6-73d3ed2ab17f/contexts/__system_counters__",
            "parameters": {
              "no-input": 0,
              "no-match": 0,
              "user_id": 1234567891,
              "user_id.original": "123456"
            }
          }
        ],
        "intent": {
          "name": "projects/<project-id>/agent/intents/e3cabac7-cfb8-4da1-96bb-f14687913bf6",
          "displayName": "user_id_action"
        },
        "intentDetectionConfidence": 0.78590345,
        "languageCode": "en"
      },
      "originalDetectIntentRequest": {
        "payload": {}
      },
      "session": "projects/<project-id>/agent/sessions/e7f62474-fd2c-3ca0-dfa6-73d3ed2ab17f"
    }'

## References

[google-cloud-firestore Documents API](https://googleapis.dev/python/firestore/latest/document.html#google.cloud.firestore_v1.document)

[google-cloud-firestore Queries API](https://googleapis.dev/python/firestore/latest/query.html#google.cloud.firestore_v1.query)

[Example querying and filtering data from Cloud Firestore](https://cloud.google.com/firestore/docs/query-data/queries)

[Pandas DataFrame documentation](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html)
# Multi-regional Application Availability

This demo project contains Google Cloud infrastructure components that illustrate use cases for enhancing the availability of Cloud Run or Google Compute Engine Managed Instance Group based applications. The application instances get redundantly deployed to two distinct regions. The load balancers in front of the Managed Instance Group or Cloud Run service are configured for DNS load balancing based failover.

The project covers several use cases that can be broken into the following categories, with respective entry points in the Terraform files containing the Google Cloud resource definitions for load balancing and Cloud DNS service configuration:

| Load Balancer              | Type     | OSI Layer | Cloud Run Backend                | GCE MIG Backend                    |
|----------------------------|----------|-----------|----------------------------------|------------------------------------|
| Regional Pass-through      | Internal | L4        | -                                |[l4-rilb-mig.tf](./l4-rilb-mig.tf)  |
| Regional Application       | Internal | L7        |[l7-rilb-cr.tf](./l7-rilb-cr.tf)  |[l7-rilb-mig.tf](./l7-rilb-mig.tf)  |
| Cross-Regional Application | Internal | L7        |[l7-crilb-cr.tf](./l7-crilb-cr.tf)|[l7-crilb-mig.tf](./l7-crilb-mig.tf)|
| Global Application         | External | L7        |[l7-gxlb-cr.tf](./l7-gxlb-cr.tf)  | -                                  |

Terraform files with the `dns-` prefix contain the Cloud DNS resource definitions for the respective use case.

When all resources from the project are provisioned, the respective demo application endpoints can be used to verify the deployment and test failover. The following table contains the URLs to be tested from a GCE VM attached to the same internal VPC network where the load balancers are deployed.

| Load Balancer              | Type     | OSI Layer | Cloud Run Backend              | GCE MIG Backend                    |
|----------------------------|----------|-----------|--------------------------------|------------------------------------|
| Regional Pass-through      | Internal | L4        | -                              |`http://l4-rilb-mig.hello.zone:8080`|
| Regional Application       | Internal | L7        |`https://l7-rilb-cr.hello.zone` |`https://l7-rilb-mig.hello.zone`    |
| Cross-Regional Application | Internal | L7        |`https://l7-crilb-cr.hello.zone`|`https://l7-crilb-mig.hello.zone`   |
| Global Application         | External | L7        |`https://l7-gxlb.hello.zone`    | -                                  |

The following diagrams illustrate the Google Cloud resources created for the respective load balancer type:

1) L4 Regional Pass-through Internal Load Balancer: DNS load balancing to GCE Managed Instance Groups [\[1\]](https://cloud.google.com/load-balancing/docs/internal/setting-up-internal) ([l4-rilb-mig.tf](./l4-rilb-mig.tf))

   ![Deployment Diagram](./images/l4-rilb-mig.png)

2) L7 Regional Internal Application Load Balancer: DNS load balancing to Cloud Run service instances [\[2\]](https://cloud.google.com/load-balancing/docs/l7-internal/setting-up-l7-internal-serverless) ([l7-rilb-cr.tf](./l7-rilb-cr.tf))

   ![Deployment Diagram](./images/l7-rilb-cr.png)

3) L7 Cross-Regional Internal Application Load Balancer: DNS load balancing to Cloud Run service instances [\[3\]](https://cloud.google.com/load-balancing/docs/l7-internal/setting-up-l7-cross-reg-serverless) ([l7-crilb-cr.tf](./l7-crilb-cr.tf))

   ![Deployment Diagram](https://cloud.google.com/static/load-balancing/images/cross-reg-int-cloudrun.svg)

4) L7 Regional Internal Application Load Balancer: DNS load balancing to GCE Managed Instance Groups [\[4\]](https://cloud.google.com/load-balancing/docs/l7-internal/setting-up-l7-internal) ([l7-rilb-mig.tf](./l7-rilb-mig.tf))

   ![Deployment Diagram](./images/l7-rilb-mig.png)

5) L7 Cross-Regional Internal Application Load Balancer:
DNS load balancing to GCE Managed Instance Groups [\[5\]](https://cloud.google.com/load-balancing/docs/l7-internal/setting-up-l7-cross-reg-internal) ([l7-crilb-mig.tf](./l7-crilb-mig.tf))

   ![Deployment Diagram](https://cloud.google.com/static/load-balancing/images/cross-reg-int-vm.svg)

6) L7 Global External Application Load Balancer: load balancing to Cloud Run service instances [\[6\]](https://cloud.google.com/load-balancing/docs/https/setting-up-https-serverless) ([l7-gxlb-cr.tf](./l7-gxlb-cr.tf))

   ![Deployment Diagram](./images/l7-gxlb-cr.png)

## Pre-requisites

The deployment presumes and relies upon an existing Google Cloud project with an attached, active Billing account. To perform a successful deployment, your Google Cloud account needs the `Project Editor` role in the target Google Cloud project.

Copy the [terraform.tfvars.sample](./terraform.tfvars.sample) file to a `terraform.tfvars` file and update it with the Google Cloud project id in the `project_id` variable and the other variables according to your environment. You can also choose the generation of the Cloud Run service instances by setting the `cloud_run_generation` input variable to `v1` or `v2` (default) respectively.

[Enable](https://cloud.google.com/artifact-registry/docs/enable-service) the Google Artifact Registry API in the demo project.

The VPC network and the load balancer subnet are not created by the project and are referenced in the input variables. The additional proxy subnetwork required for the load balancer setup is defined in [network.tf](./network.tf) together with the references to the network resources.

A jumpbox GCE VM attached to the project’s VPC network is required for accessing internal resources and running load tests.

## GCE Managed Instance Groups

1. Check out the demo HTTP responder service container:

   ```
   git clone https://github.com/GoogleCloudPlatform/golang-samples.git
   cd golang-samples/run/hello-broken
   ```

2. Build the container, tag it and push it to the Artifact Registry:

   ```
   docker build . -t eu.gcr.io/${PROJECT_ID}/hello-broken:latest
   docker push eu.gcr.io/${PROJECT_ID}/hello-broken:latest
   ```

3. Edit the `terraform.tfvars` file, setting the `project_id` variable to the id of the Google Cloud project where the resources will be deployed. To reach the external IP of the Global External (L7) load balancer created by the resources in the `l7-gxlb-cr.tf` file, you also need to set the `domain` variable to the subdomain of a DNS domain that you control.

4. Provision the demo infrastructure in Google Cloud:

   ```
   terraform init
   terraform apply --auto-approve
   ```

   To reach the external IP of the Global External (L7) load balancer created by the resources in the `l7-gxlb-cr.tf` file, you can now modify your DNS record for the subdomain defined in the `domain` variable and point it to the IP address of the created Global External Load Balancer:

   ```
   gcloud compute forwarding-rules list | grep gxlb-cr
   ```

5. Open the [Cloud Console](https://console.cloud.google.com/net-services/loadbalancing/list/loadBalancers) and check the L4 Regional Internal Network Load Balancer, the Managed Instance Group, the Cloud Run services and the `hello.zone` zone in Cloud DNS. In the same way you can also check the load balancer resources created for the other use cases.
6. Log in to the jumpbox VM attached to the internal VPC network and switch to sudo mode for simpler docker container execution:

   ```
   gcloud compute ssh jumpbox
   sudo -i
   ```

   Check whether all load balancers and components have come up properly:

   ```
   curl -s http://l4-rilb-mig.hello.zone:8080 && echo OK || echo NOK
   curl -sk https://l7-crilb-cr.hello.zone && echo OK || echo NOK
   curl -sk https://l7-crilb-mig.hello.zone && echo OK || echo NOK
   curl -sk https://l7-gxlb.hello.zone && echo OK || echo NOK
   curl -sk https://l7-rilb-cr.hello.zone && echo OK || echo NOK
   curl -sk https://l7-rilb-mig.hello.zone && echo OK || echo NOK
   ```

   All of the commands must return successfully and print `OK`.

7. Run the load test.

   For the load test you can use the open source [Fortio tool](https://github.com/fortio/fortio), which is often used for testing Kubernetes and service mesh workloads.

   ```
   curl http://l4-rilb-mig.hello.zone:8080
   docker run fortio/fortio load --https-insecure -t 1m -qps 1 http://l4-rilb-mig.hello.zone:8080
   ```

   The result after 1 minute of execution should be similar to:

   ```
   IP addresses distribution:
   10.156.0.11:8080: 1
   Code 200 : 258 (100.0 %)
   Response Header Sizes : count 258 avg 390 +/- 0 min 390 max 390 sum 100620
   Response Body/Total Sizes : count 258 avg 7759.624 +/- 1.497 min 7758 max 7763 sum 2001983
   All done 258 calls (plus 4 warmup) 233.180 ms avg, 17.1 qps
   ```

   Please note the IP address of the internal pass-through load balancer in the nearest region getting all the calls.

8. Test failover.

   In a second console window, SSH into the VM in the GCE MIG in the nearest region:

   ```
   export MIG_VM=$(gcloud compute instances list --format="value[](name)" --filter="name~l4-europe-west3")
   export MIG_VM_ZONE=$(gcloud compute instances list --format="value[](zone)" --filter="name=${MIG_VM}")
   gcloud compute ssh --zone $MIG_VM_ZONE $MIG_VM --tunnel-through-iap --project $PROJECT_ID
   sudo -i
   docker ps
   ```

   Run the load test in the first console window again. While the test is running, switch to the second console window and execute:

   ```
   docker stop ${CONTAINER}
   ```

   Switch to the first console window and notice the failover happening. The output at the end of the execution should look like the following:

   ```
   IP addresses distribution:
   10.156.0.11:8080: 16
   10.199.0.48:8080: 4
   Code -1 : 12 (10.0 %)
   Code 200 : 108 (90.0 %)
   Response Header Sizes : count 258 avg 390 +/- 0 min 390 max 390 sum 100620
   Response Body/Total Sizes : count 258 avg 7759.624 +/- 1.497 min 7758 max 7763 sum 2001983
   All done 120 calls (plus 4 warmup) 83.180 ms avg, 2.0 qps
   ```

   Cloud DNS starts returning the second IP address from the healthy region with an available backend service, and that backend starts processing the incoming requests. Please note that the service VM in the Managed Instance Group has been automatically restarted by the GCE Managed Instance Group [autohealing](https://cloud.google.com/compute/docs/instance-groups/autohealing-instances-in-migs).

## HA for Cloud Run

This demo project also contains scenarios for improving the cross-regional availability of application services deployed and running in Cloud Run. There are several aspects of the Cloud Run deployment which need to be taken into account; they are discussed in the following sections.

### Authentication

When the application Cloud Run service needs to be protected by authentication and must not allow unauthenticated invocations, the credentials need to be passed in the `Authorization: Bearer <ID token>` HTTP request header.
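For a quick manual check from a machine where you are logged in with gcloud (outside the metadata-server flow described next), the general pattern looks like this; the service URL is a placeholder:

```
# Generic pattern: mint an ID token with gcloud and pass it as a Bearer token.
curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" https://<cloud-run-service-url>
```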
When the client application is running on Google Cloud, e.g. in a GCE VM, the following commands obtain the correct ID tokens for authentication with each respective regional Cloud Run service instance. Presuming the Cloud Run instances are deployed in two regions and exposed under the `cr-service-beh76gkxvq-ey.a.run.app` and `cr-service-us-beh76gkxvq-uc.a.run.app` hostnames respectively, the commands to obtain authentication tokens for each of them are:

```
curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=https://cr-service-beh76gkxvq-ey.a.run.app" -H "Metadata-Flavor: Google" > ./id-token.txt

curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=https://cr-service-us-beh76gkxvq-uc.a.run.app" -H "Metadata-Flavor: Google" > ./id-token-us.txt
```

Otherwise, the ID token can be obtained using the gcloud command; please read on.

As you can see, the regional Cloud Run service endpoint FQDN is used as the ID token audience. That makes the tokens not interchangeable. That is, a token obtained for the Cloud Run service in Region A will fail authentication with the Cloud Run service in Region B.

Here is how to utilize the authentication token when invoking the regional Cloud Run service instance directly:

Region A (e.g. in the EU):

```
curl -H "Authorization: Bearer $(cat ./id-token.txt)" https://cr-service-beh76gkxvq-ey.a.run.app
```

Region B (e.g. in the US):

```
curl -H "Authorization: Bearer $(cat ./id-token-us.txt)" https://cr-service-us-beh76gkxvq-uc.a.run.app
```

To overcome the limitation of distinct ID token audiences and to let the Cloud Run client seamlessly fail over to the Cloud Run service in another region using the same ID token for authentication, [custom audiences](https://cloud.google.com/run/docs/configuring/custom-audiences) can be used. (Please note that the custom audience `cr-service` is already set on the `google_cloud_run_v2_service` Terraform resource in this demo project.)

```
gcloud run services update cr-service --region=europe-west3 --add-custom-audiences=cr-service
gcloud run services update cr-service --region=us-central1 --add-custom-audiences=cr-service

export TOKEN=$(gcloud auth print-identity-token --impersonate-service-account SERVICE_ACCOUNT_EMAIL --audiences='cr-service')
```

or

```
curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=cr-service" -H "Metadata-Flavor: Google" > ./id-token.txt
```

Now we can make an authenticated call using a single ID token to the global FQDN hostname representing the Cloud Run instances running in both regions. That will work and authentication will succeed even in case of a Cloud Run or entire Google Cloud region outage in one of the regions.
```
curl -k -H "Authorization: Bearer ${TOKEN}" https://l7-crilb-cr.hello.zone
curl -k -H "Authorization: Bearer $(cat ./id-token.txt)" https://l7-crilb-cr.hello.zone

# If the internal or external application load balancer with serverless network endpoint groups (NEGs)
# is configured with a TLS certificate for the Cloud DNS name resolving to the load balancer IP address,
# then we can also omit the `-k` curl parameter and the client will verify the server TLS certificate properly:
curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=cr-service" -H "Metadata-Flavor: Google" > ./id-token.txt
cat ./id-token.txt
curl -H "Authorization: Bearer $(cat ./id-token.txt)" https://l7-crilb-cr.hello.zone
```

### Failover

We can follow the [instructions](https://cloud.google.com/load-balancing/docs/l7-internal/setting-up-l7-cross-reg-serverless#test-failover) in the public documentation to simulate a regional Cloud Run backend outage.

First, let's ensure that our demo application service is accessible via the internal cross-regional load balancer backed by two Cloud Run instances running in distinct regions (`europe-west3` and `us-central1` by default). From the client GCE VM attached to the demo project's private VPC network:

```
docker run fortio/fortio load --https-insecure -t 1m -qps 1 https://l7-crilb-cr.hello.zone
```

We should see 100% successful invocations:

```
IP addresses distribution:
10.156.0.51:443: 4
Code 200 : 60 (100.0 %)
```

Now let's simulate a regional outage by removing all of the serverless NEG backends from one of the regions, e.g.:

```
# Check the serverless NEG backends before the backend deletion:
gcloud compute backend-services list --filter="name:l7-crilb-cr"

gcloud compute backend-services remove-backend l7-crilb-cr \
  --network-endpoint-group=cloudrun-sneg \
  --network-endpoint-group-region=europe-west3 \
  --global

# Check the serverless NEG backends after the backend deletion:
gcloud compute backend-services list --filter="name:l7-crilb-cr"
```

If you executed the previous command while running Fortio in parallel (you can increase the time interval the tool runs for by modifying the `-t` command line parameter), you should see an output similar to:

```
IP addresses distribution:
10.156.0.51:443: 4
Code 200 : 300 (100.0 %)
```

With our test setup of 1 call per second, all calls have reached their destination. If we now delete the serverless NEG backend in the second region, client calls will start failing. To restore the load balancer infrastructure, just re-apply the Terraform configuration by running `terraform apply`.

What we have seen so far was failover on the backend side of the Internal Cross-Regional Application Load Balancer. That is, the client application (Fortio) was still accessing the load balancer IP address in the nearest `europe-west3` region. You can check that by running `host l7-crilb-cr.hello.zone`, which will return the IP address from the `europe-west3` region.

What would happen in case of a full `europe-west3` region outage? The [l4-rilb-mig.tf](./l4-rilb-mig.tf) use case discussed above illustrates that case. Unfortunately, the [Cloud DNS health checks](https://cloud.google.com/dns/docs/zones/manage-routing-policies#health-checks) for L7 load balancers cannot detect an outage of the application backend service yet; they only check the availability of the internal Google Cloud infrastructure (Envoy proxies) supporting the L7 load balancers. A missing load balancer backend is also not considered an outage, so the IP address switch does not occur.
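You can also inspect the DNS records and routing policies that this failover behaviour depends on; the managed zone resource name below is a placeholder:

```
# List the managed zones and the record sets (including routing policies) of the zone serving hello.zone.
gcloud dns managed-zones list
gcloud dns record-sets list --zone=<managed-zone-name>
```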
It is difficult to simulate an actual Google Cloud region outage that would trigger the Cloud DNS IP address failover. It is expected that, in the future, the Cloud DNS health checks will also be able to detect the availability of the application service, providing behaviour similar to what the health checks for the L4 Network Passthrough Load Balancer currently provide.

## Cleanup

To clear the custom audiences:

```
gcloud run services update cr-service --region=europe-west3 --clear-custom-audiences
gcloud run services update cr-service --region=us-central1 --clear-custom-audiences
```

To remove the resources created by this project deployment, either delete the target Google Cloud project or run:

```
terraform destroy --auto-approve
```

## Useful Links

* [Multi-region failover using Cloud DNS Routing Policies and Health Checks for Internal TCP/UDP Load Balancer](https://codelabs.developers.google.com/clouddns-failover-policy-codelab#0)
* [AWS DNS load balancing example](https://docs.aws.amazon.com/whitepapers/latest/real-time-communication-on-aws/cross-region-dns-based-load-balancing-and-failover.html)
GCP
Multi regional Application Availability This demo project contains Google Cloud infrastrcuture components that illustrate use cases for enhancing availability of a Cloud Run or Google Cloud Compute Managed Instance Groups based applications The applciation instances get redundantly deployed to two distinct regions The load balancers in front of the Managed Instance Group or Cloud Run service are configured for DNS load balancing based failover The project covers several use cases that can be broken into the following categories with respective entry points in Terraform files for Google Cloud resources definitions for load balancing and Cloud DNS service configuration Load Balancer Type OCI Cloud Run Backend GCE MIG Backend Regional Pass through Internal L4 l4 rilb mig tf l4 rilb mig tf Regional Application Internal L7 l7 rilb cr tf l7 rilb cr tf l7 rilb mig tf l7 rilb mig tf Cross Regional Application Internal L7 l7 crilb cr tf l7 crilb cr tf l7 crilb mig tf l7 crilb mig tf Global Application External L7 l7 gxlb cr tf l7 gxlb cr tf Terraform files with dns prefix contain Cloud DNS resource definitions for the respective use case When all resources from the project are provisioned the respective demo aplication endpoints can be used to verify the deployment and test failover The following table contains the URLs to be tested from a GCE VM attached to the same internal VPC network where the load balancers are deployed Load Balancer Type OCI Cloud Run Backend GCE MIG Backend Regional Pass through Internal L4 http l4 rilb mig hello zone 8080 Regional Application Internal L7 https l7 rilb cr hello zone https l7 rilb mig hello zone Cross Regional Application Internal L7 https l7 crilb cr hello zone https l7 crilb mig hello zone Global Application External L7 https l7 gxlb hello zone The following diagrams illustrate the Google Cloud resources created for the respective load balancer type 1 L4 Regional Pass through Internal Load Balancer DNS load balancing to GCE Managed Instance Groups 1 https cloud google com load balancing docs internal setting up internal l4 rilb mig tf l4 rilb mig tf Deployment Diagram images l4 rilb mig png 2 L7 Regional Internal Application Load Balancer DNS load balancing to Cloud Run service instances 2 https cloud google com load balancing docs l7 internal setting up l7 internal serverless l7 rilb cr tf l7 rilb cr tf Deployment Diagram images l7 rilb cr png 3 L7 Cross Regional Internal Application Load Balancer DNS load balancing to Cloud Run service instances 3 https cloud google com load balancing docs l7 internal setting up l7 cross reg serverless l7 crilb cr tf l7 crilb cr tf Deployment Diagram https cloud google com static load balancing images cross reg int cloudrun svg 4 L7 Regional Internal Application Load Balancer DNS load balancing to GCE Managed Instance Groups 4 https cloud google com load balancing docs l7 internal setting up l7 internal l7 rilb mig tf l7 rilb mig tf Deployment Diagram images l7 rilb mig png 5 L7 Cross Regional Internal Application Load Balancer DNS load balancing to GCE Managed Instance Groups 5 https cloud google com load balancing docs l7 internal setting up l7 cross reg internal l7 crilb mig tf l7 crilb mig tf Deployment Diagram https cloud google com static load balancing images cross reg int vm svg 6 L7 External Application Load Balancer based load balancing 6 https cloud google com load balancing docs https setting up https serverless l7 gxlb cr tf l7 gxlb cr tf Deployment Diagram images l7 gxlb cr png Pre requisites The deployment 
presumes and relies upon an existing Google Cloud Project with an attached, active Billing account. To perform a successful deployment, your Google Cloud account needs to have the Project Editor role in the target Google Cloud project.

Copy the [terraform.tfvars.sample](terraform.tfvars.sample) file into a `terraform.tfvars` file and update it with the Google Cloud project id in the `project_id` variable and other variables according to your environment. You can also choose the generation of the Cloud Run service instances by setting the `cloud_run_generation` input variable to `v1` or `v2` (default), respectively.

[Enable](https://cloud.google.com/artifact-registry/docs/enable-service) the Google Artifact Registry API in the demo project.

The VPC network and load balancer subnet are not created in the project and are referenced in the input variables. The additional proxy subnetwork required for the load balancers setup is defined in [network.tf](network.tf), together with the references to the network resources. A jumpbox GCE VM is attached to the project's VPC network for accessing internal resources and running load tests.

## GCE Managed Instance Groups

1. Check out the demo HTTP responder service container:

   ```
   git clone https://github.com/GoogleCloudPlatform/golang-samples.git
   cd golang-samples/run/hello-broken
   ```

2. Build the container, tag it and push it to the Artifact Registry:

   ```
   docker build -t eu.gcr.io/$PROJECT_ID/hello-broken:latest .
   docker push eu.gcr.io/$PROJECT_ID/hello-broken:latest
   ```

3. Edit the `terraform.tfvars` file, setting the `project_id` variable to the id of the Google Cloud project where the resources will be deployed to. To reach the external IP of the Global External L7 load balancer created by the resources in the [l7-gxlb-cr.tf](l7-gxlb-cr.tf) file, you also need to modify the `domain` variable to the subdomain value of the DNS domain that you control.

4. Provision the demo infrastructure in Google Cloud:

   ```
   terraform init
   terraform apply -auto-approve
   ```

   To reach the external IP of the Global External L7 load balancer created by the resources in the [l7-gxlb-cr.tf](l7-gxlb-cr.tf) file, you can now modify your DNS record for the subdomain defined in the `domain` variable and point it to the IP address of the created Global External Load Balancer:

   ```
   gcloud compute forwarding-rules list | grep gxlb-cr
   ```

5. Open the [Cloud Console](https://console.cloud.google.com/net-services/loadbalancing/list/loadBalancers) and check the L4 Regional Internal Network Load Balancer, the Managed Instance Group, the Cloud Run services and the `hello.zone` zone in Cloud DNS. In the same way you can also check the load balancer resources created for the other use cases.

6. Log in into the jumpbox VM attached to the internal VPC network and switch to sudo mode for simpler docker container execution:

   ```
   gcloud compute ssh jumpbox
   sudo -i
   ```

   Check whether all load balancers and components have come up properly:

   ```
   curl -s http://l4-rilb-mig.hello.zone:8080 && echo OK || echo NOK
   curl -sk https://l7-crilb-cr.hello.zone && echo OK || echo NOK
   curl -sk https://l7-crilb-mig.hello.zone && echo OK || echo NOK
   curl -sk https://l7-gxlb.hello.zone && echo OK || echo NOK
   curl -sk https://l7-rilb-cr.hello.zone && echo OK || echo NOK
   curl -sk https://l7-rilb-mig.hello.zone && echo OK || echo NOK
   ```

   All of the commands must return successfully and print `OK` (a small helper loop for these checks is sketched at the end of this document).

7. Run the load test. For the load test you can use the open source [Fortio](https://github.com/fortio/fortio) tool, which is often used for testing Kubernetes and service mesh workloads:

   ```
   curl http://l4-rilb-mig.hello.zone:8080
   docker run fortio/fortio load -https-insecure -t 1m -qps 1 http://l4-rilb-mig.hello.zone:8080
   ```

   The result after 1 minute of execution should be similar to:

   ```
   IP addresses distribution:
   10.156.0.11:8080: 1
   Code 200 : 258 (100.0 %)
   Response Header Sizes : count 258 avg 390 +/- 0 min 390 max 390 sum 100620
   Response Body/Total Sizes : count 258 avg 7759.624 +/- 1.497 min 7758 max 7763 sum 2001983
   All done 258 calls (plus 4 warmup) 233.180 ms avg, 17.1 qps
   ```

   Please note the IP address of the internal passthrough load balancer in the nearest region getting all calls.

8. Test failover. In a second console window, SSH into the VM in the GCE MIG group in the nearest region:

   ```
   export MIG_VM=$(gcloud compute instances list --format='value(name)' --filter='name~l4.*europe-west3')
   export MIG_VM_ZONE=$(gcloud compute instances list --format='value(zone)' --filter="name=$MIG_VM")
   gcloud compute ssh --zone "$MIG_VM_ZONE" "$MIG_VM" --tunnel-through-iap --project $PROJECT_ID
   sudo -i
   docker ps
   ```

   Run the load test in the first console window again. While the test is running, switch to the second console window and execute:

   ```
   docker stop <CONTAINER>
   ```

   Switch to the first console window and notice the failover happening. The output at the end of the execution should look like the following:

   ```
   IP addresses distribution:
   10.156.0.11:8080: 16
   10.199.0.48:8080: 4
   Code -1 : 12 (10.0 %)
   Code 200 : 108 (90.0 %)
   Response Header Sizes : count 258 avg 390 +/- 0 min 390 max 390 sum 100620
   Response Body/Total Sizes : count 258 avg 7759.624 +/- 1.497 min 7758 max 7763 sum 2001983
   All done 120 calls (plus 4 warmup) 83.180 ms avg, 2.0 qps
   ```

   Cloud DNS starts returning the second IP address from the healthy region with an available backend service, and that region starts processing the incoming requests. Please note that the service VM in the Managed Instance Group has been automatically restarted by the GCE Managed Instance Group [autohealing](https://cloud.google.com/compute/docs/instance-groups/autohealing-instances-in-migs).

## HA for Cloud Run

This demo project also contains scenarios for improving cross-regional availability of application services deployed and running in Cloud Run. There are several aspects related to the Cloud Run deployment which need to be taken into account; they are discussed in the following sections.

### Authentication

In case the application Cloud Run service needs to be protected by authentication and must not allow unauthenticated invocations, the credentials need to be passed in the `Authorization: Bearer <ID token>` HTTP request header.

When the client application is running on Google Cloud, e.g. in a GCE VM, the following commands obtain correct ID tokens for authentication with each respective regional Cloud Run service instance. Presuming the Cloud Run instances are deployed in two regions and exposed under the `cr-service-beh76gkxvq-ey.a.run.app` and `cr-service-us-beh76gkxvq-uc.a.run.app` hostnames respectively, the commands to obtain authentication tokens for each of them are:

```
curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=https://cr-service-beh76gkxvq-ey.a.run.app" -H "Metadata-Flavor: Google" > id_token.txt
curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=https://cr-service-us-beh76gkxvq-uc.a.run.app" -H "Metadata-Flavor: Google" > id_token_us.txt
```

Otherwise, the ID token can be obtained using the `gcloud` command (please read on).

As you can see, the regional Cloud Run service endpoint FQDN is used as the ID token authentication scope. That makes the tokens not interchangeable, that is, a token obtained for the Cloud Run service in Region A will fail authentication with the Cloud Run service in Region B.

Here is how to utilize the authentication token when invoking the regional Cloud Run service instance directly:

Region A (e.g. in EU):

```
curl -H "Authorization: Bearer $(cat id_token.txt)" https://cr-service-beh76gkxvq-ey.a.run.app
```

Region B (e.g. in US):

```
curl -H "Authorization: Bearer $(cat id_token_us.txt)" https://cr-service-us-beh76gkxvq-uc.a.run.app
```

To overcome the limitation of distinct ID token scopes, and to be able to make the Cloud Run client seamlessly fail over to the Cloud Run service in another region using the same ID token for authentication, [custom audiences](https://cloud.google.com/run/docs/configuring/custom-audiences) can be used. Please note that the custom audience `cr-service` is already being set in the `google_cloud_run_v2_service` Terraform resource in this demo project.

```
gcloud run services update cr-service --region=europe-west3 --add-custom-audiences=cr-service
gcloud run services update cr-service --region=us-central1 --add-custom-audiences=cr-service

export TOKEN=$(gcloud auth print-identity-token --impersonate-service-account=$SERVICE_ACCOUNT_EMAIL --audiences=cr-service)
```

or

```
curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=cr-service" -H "Metadata-Flavor: Google" > id_token.txt
```

Now we can make an authenticated call using the single ID token to the global FQDN hostname representing the Cloud Run instances running in both regions. That will work, and authentication will succeed, even in case of a Cloud Run or entire Google Cloud region outage in one of the regions:

```
curl -k -H "Authorization: Bearer $TOKEN" https://l7-crilb-cr.hello.zone
curl -k -H "Authorization: Bearer $(cat id_token.txt)" https://l7-crilb-cr.hello.zone
```

If the internal or external application load balancer with serverless network endpoint groups (NEGs) is configured with a TLS certificate for the Cloud DNS name resolving to the load balancer IP address, then we can also omit the `-k` curl parameter and the client will verify the server TLS certificate properly:

```
curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=cr-service" -H "Metadata-Flavor: Google" > id_token.txt
cat id_token.txt
curl -H "Authorization: Bearer $(cat Creds/id_token.txt)" https://l7-crilb-cr.hello.zone
```

### Failover

We can follow the [instructions](https://cloud.google.com/load-balancing/docs/l7-internal/setting-up-l7-cross-reg-serverless#test_failover) in the public documentation to simulate a regional Cloud Run backend outage.

First, let's ensure that our demo application service is accessible via the internal cross-regional load balancer backed by two Cloud Run instances running in distinct regions (`europe-west3` and `us-central1` by default). From the client GCE VM attached to the demo project private VPC network:

```
docker run fortio/fortio load -https-insecure -t 1m -qps 1 https://l7-crilb-cr.hello.zone
```

We should see 100% successful invocations:

```
IP addresses distribution:
10.156.0.51:443: 4
Code 200 : 60 (100.0 %)
```

Now let's simulate a regional outage by removing all of the serverless NEG backends from one of the regions, e.g.:

```
# Check the serverless NEG backends before the backend deletion
gcloud compute backend-services list --filter="name~l7-crilb-cr"

gcloud compute backend-services remove-backend l7-crilb-cr --network-endpoint-group=cloudrun-sneg --network-endpoint-group-region=europe-west3 --global

# Check the serverless NEG backends after the backend deletion
gcloud compute backend-services list --filter="name~l7-crilb-cr"
```

If you executed the previous command while running Fortio in parallel (you can increase the time interval the tool runs for by modifying the `-t` command line parameter), you should see an output similar to:

```
IP addresses distribution:
10.156.0.51:443: 4
Code 200 : 300 (100.0 %)
```

With our test setup of 1 call per second, all calls have reached their destination. If we now delete the serverless NEG backend in the second region, client calls will start failing. To restore the load balancer infrastructure, just re-apply the Terraform configuration by running `terraform apply`.

What we have seen so far was the failover at the Internal Cross-Regional Application Load Balancer backend side. That is, the client application (Fortio) was still accessing the load balancer IP address in the nearest `europe-west3` region. You can check that by running `host l7-crilb-cr.hello.zone`, which will return the IP address from the `europe-west3` region. What would happen in case of a full `europe-west3` region outage? The [l4-rilb-mig.tf](l4-rilb-mig.tf) use case discussed above illustrates that case.

Unfortunately, the Cloud DNS [health checks](https://cloud.google.com/dns/docs/zones/manage-routing-policies#health_checks) for L7 load balancers cannot detect an outage of the application backend service yet. A missing load balancer backend is also not considered an outage, so the IP address switch does not occur. They only check the availability of the internal Google Cloud infrastructure (Envoy proxies) supporting the L7 load balancers. It is difficult to simulate an actual Google Cloud region outage that would trigger the Cloud DNS IP address failover. It is expected that in the future the Cloud DNS health checks will also be able to detect the availability of the application service, providing behaviour similar to what the health checks for the L4 Network Passthrough load balancer currently provide.

## Cleanup

To clear the custom audiences:

```
gcloud run services update cr-service --region=europe-west3 --clear-custom-audiences
gcloud run services update cr-service --region=us-central1 --clear-custom-audiences
```

To remove the resources created by this project deployment, either delete the target Google Cloud project or run:

```
terraform destroy -auto-approve
```

## Useful Links

- [Multi-region failover using Cloud DNS Routing Policies and Health Checks for Internal TCP/UDP Load Balancer](https://codelabs.developers.google.com/clouddns-failover-policy-codelab#0)
- [AWS DNS-based load balancing example](https://docs.aws.amazon.com/whitepapers/latest/real-time-communication-on-aws/cross-region-dns-based-load-balancing-and-failover.html)
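As a convenience when repeating the endpoint checks from step 6 above, the individual curl calls can be wrapped in a small loop. This is only a sketch: it assumes the demo's default `hello.zone` hostnames, so adjust the list if you changed the DNS zone or load balancer names.

```bash
#!/usr/bin/env bash
# Smoke-test the demo endpoints from the jumpbox (assumes the default hello.zone names).
endpoints=(
  "http://l4-rilb-mig.hello.zone:8080"
  "https://l7-crilb-cr.hello.zone"
  "https://l7-crilb-mig.hello.zone"
  "https://l7-gxlb.hello.zone"
  "https://l7-rilb-cr.hello.zone"
  "https://l7-rilb-mig.hello.zone"
)

for url in "${endpoints[@]}"; do
  # -k skips TLS verification, matching the self-signed setup used in the demo.
  if curl -sk --max-time 10 -o /dev/null "$url"; then
    echo "OK  $url"
  else
    echo "NOK $url"
  fi
done
```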
# dataflow-bigquery-to-alloydb

We are going to be moving data from a public dataset stored in BigQuery into a table that will be created in AlloyDB.

This is the BigQuery query that will generate the source data:

```sql
SELECT
  from_address,
  to_address,
  CASE
    WHEN SAFE_CAST(value AS NUMERIC) IS NULL THEN 0
    ELSE SAFE_CAST(value AS NUMERIC)
  END AS value,
  block_timestamp
FROM bigquery-public-data.crypto_ethereum.token_transfers
WHERE DATE(block_timestamp) = DATE_ADD(CURRENT_DATE(), INTERVAL -1 DAY)
```

## Create the AlloyDB table in which we will store the BigQuery data

Create a database for the table in AlloyDB:

```SQL
CREATE DATABASE ethereum;
```

Create the table in which we will write the BigQuery data:

```sql
CREATE TABLE token_transfers (
  from_address VARCHAR,
  to_address VARCHAR,
  value NUMERIC,
  block_timestamp TIMESTAMP
);
```

## Create the local environment

```
python3 -m venv env
source env/bin/activate
pip3 install -r requirements.txt
```

## Running the Dataflow pipeline

If the Python environment is not activated, activate it first:

```
source env/bin/activate
```

For running the Dataflow pipeline, a bucket is needed for staging the BigQuery data. If you don't have a bucket, please create one in the same region in which Dataflow will run, for example in `southamerica-east1`:

```
gcloud storage buckets create gs://<BUCKET_NAME> --location=southamerica-east1
```

Configure the environment variables:

```
TMP_BUCKET=<name of the bucket used for staging>
PROJECT=<name of your GCP project>
REGION=<name of the GCP region in which Dataflow will run>
SUBNETWORK=<ID of the subnetwork in which Dataflow will run, for example: https://www.googleapis.com/compute/v1/projects/<NAME_OF_THE_VPC_PROJECT>/regions/<REGION>/subnetworks/<NAME_OF_THE_SUBNET>>
ALLOYDB_IP=<IP address of AlloyDB>
ALLOYDB_USERNAME=<USERNAME used for connecting to AlloyDB>
ALLOYDB_PASSWORD=<PASSWORD used for connecting to AlloyDB>
ALLOYDB_DATABASE=ethereum
ALLOYDB_TABLE=token_transfers
BQ_QUERY="
SELECT
  from_address,
  to_address,
  CASE
    WHEN SAFE_CAST(value AS NUMERIC) IS NULL THEN 0
    ELSE SAFE_CAST(value AS NUMERIC)
  END AS value,
  block_timestamp
FROM bigquery-public-data.crypto_ethereum.token_transfers
WHERE DATE(block_timestamp) = DATE_ADD(CURRENT_DATE(), INTERVAL -1 DAY)
"
```

Execute the pipeline:

```
python3 main.py \
  --runner DataflowRunner \
  --region ${REGION} \
  --project ${PROJECT} \
  --temp_location gs://${TMP_BUCKET}/tmp/ \
  --alloydb_username ${ALLOYDB_USERNAME} \
  --alloydb_password ${ALLOYDB_PASSWORD} \
  --alloydb_ip ${ALLOYDB_IP} \
  --alloydb_database ${ALLOYDB_DATABASE} \
  --alloydb_table ${ALLOYDB_TABLE} \
  --bq_query "${BQ_QUERY}" \
  --no_use_public_ips \
  --subnetwork=${SUBNETWORK}
```
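Once the Dataflow job finishes, a quick way to sanity-check the load is to compare row counts on both sides. The commands below are only a sketch: they reuse the environment variables defined above and assume the `psql` client is installed on a machine that can reach the AlloyDB private IP.

```bash
# Row count of yesterday's source partition in BigQuery.
bq query --use_legacy_sql=false \
  "SELECT COUNT(*) AS src_rows
   FROM \`bigquery-public-data.crypto_ethereum.token_transfers\`
   WHERE DATE(block_timestamp) = DATE_ADD(CURRENT_DATE(), INTERVAL -1 DAY)"

# Row count of the destination table in AlloyDB (prompts for the password,
# or export PGPASSWORD=${ALLOYDB_PASSWORD} beforehand).
psql -h "${ALLOYDB_IP}" -U "${ALLOYDB_USERNAME}" -d "${ALLOYDB_DATABASE}" \
  -c "SELECT COUNT(*) AS dst_rows FROM ${ALLOYDB_TABLE};"
```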
# gcs-hive-external-table-file-optimization

Example solution to showcase the impact of file count, file size, and file type on Hive external tables and query speeds

----

## Table Of Contents

1. [About](#about)
2. [Use Case](#use-case)
3. [Architecture](#architecture)
4. [Guide](#guide)
5. [Sample Queries](#sample-queries)
6. [Sample Results](#sample-results)

----

## about

One way to perform data analytics is through Hive on [Cloud Dataproc](https://cloud.google.com/dataproc). You can create external tables in Hive, where the schema resides in Dataproc but the data resides in [Google Cloud Storage](https://cloud.google.com/storage). This allows you to separate compute and storage, enabling you to scale your data independently of compute power.

In older HDFS / Hive on-prem setups, the compute and storage were closely tied together, either on the same machine or on a nearby machine. But when storage is separated on the cloud, you save on storage costs at the expense of latency. It takes time for Cloud Dataproc to retrieve files on Google Cloud Storage. When there are many small files, this can negatively affect query performance. File type and compression can also affect query performance.

**It is important to be deliberate in choosing your Google Cloud Storage file strategy when performing data analytics on Google Cloud.**

**In this example you'll see a 99.996% improvement in query run time.**

----

## use-case

This repository sets up a real-world example of comparing query performance between different file sizes on Google Cloud Storage. It provides code to perform a one-time **file compaction** using [Google BigQuery](https://cloud.google.com/bigquery) and the [bq cli](https://cloud.google.com/bigquery/docs/bq-command-line-tool) (a minimal compaction sketch is shown at the end of this section), and in doing so, optimizes your query performance when using Cloud Dataproc + External Tables in Hive + data on Google Cloud Storage.

The setup script will create external tables with source data in the form of:

- small raw json files
- compacted json files
- compacted compressed json files
- compacted parquet files
- compacted compressed parquet files
- compacted avro files
- compacted compressed avro files

Finally, it will show you how to query all of the tables and demonstrate query run times for each source data / file format.
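For illustration, the compaction performed by the setup script can be approximated with the bq CLI alone: load the many small newline-delimited JSON files into a temporary BigQuery table, then export them back to Cloud Storage as a single (optionally compressed) file. The bucket, dataset, and table names below are placeholders, not the ones used by `setup.sh`.

```bash
# Hypothetical names -- substitute your own bucket/dataset/table.
RAW_PREFIX=gs://my-bucket/raw-comments            # many small *.json files
COMPACTED_PREFIX=gs://my-bucket/compacted-comments
DATASET=compaction_demo
TABLE=comments_tmp

# 1. Load the small newline-delimited JSON files into a temporary BigQuery table.
bq load --source_format=NEWLINE_DELIMITED_JSON --autodetect \
  ${DATASET}.${TABLE} "${RAW_PREFIX}/*.json"

# 2. Export the table back to GCS as a single compressed Parquet file.
bq extract --destination_format=PARQUET --compression=SNAPPY \
  ${DATASET}.${TABLE} "${COMPACTED_PREFIX}/comments.parquet"

# 3. Point the Hive external table at ${COMPACTED_PREFIX} instead of ${RAW_PREFIX}.
```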
----

## guide

Do the following sample guide to generate many small files in Google Cloud Storage:

https://github.com/CYarros10/gcp-dataproc-workflow-template-custom-image-sample

Then:

```bash
cd gcs-hive-external-table-file-optimization
./scripts/setup.sh <project_id> <project_number> <region> <dataset> <table>
```

----

## sample-queries

**Hive**

```sql
msck repair table comments;
msck repair table comments_json;
msck repair table comments_json_gz;
msck repair table comments_avro;
msck repair table comments_avro_snappy;
msck repair table comments_avro_deflate;
msck repair table comments_parquet;
msck repair table comments_parquet_snappy;
msck repair table comments_parquet_gzip;

add jar /lib/hive/lib/hive-hcatalog-core-2.3.7.jar;
add jar /lib/hive/lib/json-1.8.jar;
add jar /lib/hive/lib/json-path-2.1.0.jar;
add jar /lib/hive/lib/json4s-ast_2.12-3.5.3.jar;
add jar /lib/hive/lib/json4s-core_2.12-3.5.3.jar;
add jar /lib/hive/lib/json4s-jackson_2.12-3.5.3.jar;
add jar /lib/hive/lib/json4s-scalap_2.12-3.5.3.jar;

select count(*) from comments;
select count(*) from comments_json;
select count(*) from comments_json_gz;
select count(*) from comments_avro;
select count(*) from comments_avro_snappy;
select count(*) from comments_avro_deflate;
select count(*) from comments_parquet;
select count(*) from comments_parquet_snappy;
select count(*) from comments_parquet_gzip;
```

----

## sample-results

sorted by query runtime:

| file type | compression | file count | file size (mb) | query runtime (seconds) |
|---|---|---|---|---|
| parquet | GZIP | 1 | 13.1 | 1.64 |
| parquet | SNAPPY | 1 | 20.1 | 2.11 |
| json | none | 1 | 95.6 | 2.35 |
| parquet | none | 1 | 32.2 | 2.66 |
| json | GZIP | 1 | 17.1 | 4.20 |
| avro | SNAPPY | 1 | 25.7 | 8.79 |
| avro | DEFLATE | 1 | 18.4 | 9.20 |
| avro | none | 1 | 44.7 | 15.59 |
| json | none | 6851 | 0.01 | 476.52 |

comments = 6851 x 10kb file(s)

![Stack-Resources](images/comments.png)

comments_json = 1 x 95.6mb file(s)

![Stack-Resources](images/comments_json.png)

comments_json_gz = 1 x 17.1mb file(s)

![Stack-Resources](images/comments_json_gz.png)

comments_avro = 1 x 44.7mb file(s)

![Stack-Resources](images/comments_avro.png)

comments_avro_snappy = 1 x 25.7mb file(s)

![Stack-Resources](images/comments_avro_snappy.png)

comments_avro_deflate = 1 x 18.4mb file(s)

![Stack-Resources](images/comments_avro_deflate.png)

comments_parquet = 1 x 32.2mb file(s)

![Stack-Resources](images/comments_parquet.png)

comments_parquet_snappy = 1 x 20.1mb file(s)

![Stack-Resources](images/comments_parquet_snappy.png)

comments_parquet_gzip = 1 x 13.1mb file(s)

![Stack-Resources](images/comments_parquet_gzip.png)
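If you prefer not to open an interactive Hive shell on the cluster, the same counts can be submitted as Dataproc Hive jobs. The snippet below is a sketch; the cluster name and region are placeholders for whatever your setup from the guide above created.

```bash
# Placeholder cluster/region -- adjust to the Dataproc cluster created in the guide above.
CLUSTER=hive-cluster
REGION=us-central1

# Run the count query against a few of the external tables and compare the job durations.
for t in comments comments_json comments_parquet_gzip; do
  gcloud dataproc jobs submit hive \
    --cluster="${CLUSTER}" \
    --region="${REGION}" \
    --execute="SELECT COUNT(*) FROM ${t};"
done
```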
# IoT Nirvana

This solution was built with the purpose of demonstrating an end-to-end Internet of Things architecture running on Google Cloud Platform. The purpose of the solution is to simulate the collection of temperature measures from sensors distributed all over the world and to follow the temperature evolution by city in real time.

This document will guide you through the necessary steps to set up the entire solution on Google Cloud Platform (GCP).

## Architecture

The image below contains a high level diagram of the solution.

![](img/architecture.png)

The following components are represented on the diagram:

1. Temperature sensors are simulated by running IoT Java clients on Google Compute Engine
2. The sensors send temperature data to an IoT Core registry running on GCP
3. The IoT Core registry publishes it into a PubSub topic
4. A streaming Dataflow pipeline captures the temperature data in real time by subscribing to and reading from the PubSub topic
5. Temperature data is pushed into BigQuery for analytics purposes
6. Temperature data is also saved to Datastore for real time querying
7. Temperature is displayed in real time in a Web AppEngine application
8. All components are logging data to Stackdriver

## Bootstrapping

As a pre-requisite you will need a GCP project to which you have owner rights in order to facilitate the setup of the solution. In the remainder of this guide this project's identifier will be referred to as **[PROJECT_ID]**.

Enable the following APIs in your project:

* [Cloud Pub/Sub API](https://console.cloud.google.com/apis/api/pubsub.googleapis.com)
* [Dataflow API](https://console.cloud.google.com/apis/api/dataflow.googleapis.com)
* [Google Cloud IoT API](https://console.cloud.google.com/apis/library/cloudiot.googleapis.com)

In order to run the simulation in ideal conditions, with 10 virtual machines, please request an increase of your CPU quota to 80 vCPUs. This is however optional.

Create the environment variables that will be used throughout this tutorial. You can edit the default values provided below, however please note that not all products may be available in all regions:

```
export PROJECT_ID=<PROJECT_ID>
export BUCKET_NAME=<BUCKET_NAME>
export REGION=us-central1
export ZONE=us-central1-a
export PUBSUB_TOPIC=telemetry
export PUBSUB_SUBSCRIPTION=telemetry-sub
export IOT_REGISTRY=devices
export BIGQUERY_DATASET=warehouse
export BIGQUERY_TABLE=device_data
```

Run the **setup_gcp_environment.sh** script to create the corresponding resources in your GCP project. The following arguments must be provided, in this order:

1) Project Id
2) Region where the Cloud IoT Core registry will be created
3) Zone where a temporary VM to generate the Java image will be created
4) Cloud IoT Core registry name
5) PubSub telemetry topic name
6) PubSub subscription name
7) BigQuery dataset name

In addition, the script also creates a Debian image with Java pre-installed, called **debian9-java8-img**, that will be used to run the Java programs simulating temperature sensors.

Example:

`setup_gcp_environment.sh $PROJECT_ID $REGION $ZONE $IOT_REGISTRY $PUBSUB_TOPIC $PUBSUB_SUBSCRIPTION $BIGQUERY_DATASET`

## Build the solution

The first action is to compile and package all the modules of the solution: the client simulating the temperature sensor, the Dataflow pipeline and the frontend AppEngine application displaying temperatures in real time.
To do this, run the following command at the root of the project:

`mvn clean install`

## Dataflow pipeline

In order to run the Dataflow pipeline, execute the `run_oncloud.sh` script at the root of the project with the following parameters:

* **[PROJECT_ID]** - your project's identifier
* **[BUCKET_NAME]** - the name of the bucket created by the bootstrapping script, identical to your project's identifier, **[PROJECT_ID]**, where the Dataflow pipeline's binary package will be stored
* **[PUBSUB_TOPIC]** - the name of the PubSub topic created by the bootstrapping script, from which the Dataflow pipeline will read the temperature data; please note that this isn't the topic's canonical name, but instead the name relative to your project
* **[BIGQUERY_TABLE]** - a name for the BigQuery table where Dataflow will save the temperature data; the format of this parameter must follow the rule **[BIGQUERY_DATASET].[TABLE_NAME]**

Example:

`run_oncloud.sh $PROJECT_ID $BUCKET_NAME $PUBSUB_TOPIC $BIGQUERY_DATASET.$BIGQUERY_TABLE`

## Temperature sensor

Copy the JAR package containing the client binaries to Google Cloud Storage in the bucket previously created. Run the following command in the `/client` folder:

`gsutil cp target/google-cloud-demo-iot-nirvana-client-jar-with-dependencies.jar gs://$BUCKET_NAME/client/`

Check that the JAR file has been correctly copied in the Google Cloud Storage bucket with the following command:

`gsutil ls gs://$BUCKET_NAME/client/google-cloud-demo-iot-nirvana-client-jar-with-dependencies.jar`

## AppEngine Web frontend

The following steps will allow you to set up and run on AppEngine the Web frontend that allows you to visualize in real time the temperature data captured from the temperature sensors:

1. Modify the `src/main/webapp/startup.sh` file in the `/app-engine` folder by updating the variables below. This is the startup script of the Virtual Machines that will be created from the image **debian9-java8-img** and it creates 10 instances of the Java client simulating a temperature sensor.
   * PROJECT_ID - your GCP project's identifier, **[PROJECT_ID]**
   * BUCKET_NAME - name of the Google Cloud Storage bucket created by the bootstrapping script
   * REGISTRY_NAME - name of the IoT Core registry created by the bootstrapping script
   * REGION - region in which the IoT Core registry was created by the bootstrapping script
2. Copy the `startup.sh` file in the Google Cloud Storage bucket by running the following command in the `/app-engine` folder:
   `gsutil cp src/main/webapp/startup.sh gs://$BUCKET_NAME/`
3. Modify the `/pom.xml` file in the `/app-engine` folder:
   * Update the `<app.id/>` node with the **[PROJECT_ID]** of your GCP project
   * Update the `<app.version/>` node with the desired version of the application
4. Modify the `src/main/webapp/config/client.properties` file in the `/app-engine` folder by updating the values of the following parameters:
   * GCS_BUCKET - name of the Google Cloud Storage bucket created by the bootstrapping script
   * GCE_METADATA_STARTUP_VALUE - path on Google Cloud Storage to the startup script edited at the previous step, gs://[BUCKET_NAME]/startup.sh
   * GCP_CLOUD_IOT_CORE_REGISTRY_NAME - name of the IoT Core registry created by the bootstrapping script
   * GCP_CLOUD_IOT_CORE_REGION - region in which the IoT Core registry was created by the bootstrapping script
5. Enable the [Maps Javascript API](https://console.cloud.google.com/apis/library/maps-backend.googleapis.com)
6. In the *Credentials* section of the Maps Javascript API generate the API key that will be used by the Web frontend to call Google Maps. This key will be referred to as **[MAPS_API_KEY]** further in the document. Make sure to:
   * Select HTTP in the "Application restrictions" list
   * Enter the URLs of the application, `https://[YOUR_PROJECT_ID].appspot.com/*` and `http://[YOUR_PROJECT_ID].appspot.com/*`, in the "Accept requests for these HTTP referrers (web sites)" input zone
7. Update the `src/main/webapp/index.html` file in the `/app-engine` folder by replacing the **[MAPS_API_KEY]** text with the actual value of the Google Maps API key generated at step 6.
8. Run the `gcloud app create` command to create the Google AppEngine application
9. Deploy the frontend Web application on AppEngine by running the following command at the root of the project:
   `mvn -pl app-engine appengine:update`

## Testing

In order to test the end to end solution, it is necessary first to start the temperature sensors simulation. Follow the steps below to achieve this:

* Go to the following address in your web browser, which will display the map of the Earth with 3 buttons at the bottom: **Start**, **Update**, **Stop**
  `https://[YOUR_PROJECT_ID].appspot.com/index.html`
* Click on the **Start** button at the bottom left of the page (this also enables the buttons **Update** and **Stop**)
* The VM instances being launched are visible in the Google Cloud Console under [Compute Engine](https://console.cloud.google.com/compute/instances)

In order to visualize temperature data in real time on Google Maps do the following:

* Click on the **Update** button at the bottom center of the page `https://[YOUR_PROJECT_ID].appspot.com/index.html`. This will display on the map the test cities used for simulating the temperature sensors.
* Run the following SQL query in BigQuery to retrieve the most recent cities for which data is available:
  `` SELECT City, Time FROM `[BIGQUERY_DATASET].[TABLE_NAME]` ORDER BY 2 DESC LIMIT 10 ``
* Locate on the map one of the cities returned by the query and click on the city icon to visualise the temperatures graph.

To stop the simulation click on the **Stop** button at the bottom right of the page `https://[YOUR_PROJECT_ID].appspot.com/index.html`.
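If the map stays empty, it can help to confirm that telemetry is actually flowing before debugging the frontend. The commands below are a minimal sketch that reuses the environment variables from the Bootstrapping section and assumes the simulation has been running for a few minutes.

```bash
# Peek at one telemetry message (the message is acked, so use this only for spot checks,
# since the Dataflow pipeline reads from the same subscription).
gcloud pubsub subscriptions pull ${PUBSUB_SUBSCRIPTION} --limit=1 --auto-ack

# Count the rows Dataflow has written to BigQuery so far.
bq query --use_legacy_sql=false \
  "SELECT COUNT(*) AS rows_written FROM \`${BIGQUERY_DATASET}.${BIGQUERY_TABLE}\`"
```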
- [Reusable Plugins](#reusable-plugins-for-cloud-data-fusion-cdf--cdap)
  - [Overview](#overview)
  - [CheckPointReadAction, CheckPointUpdateAction](#checkpointreadaction-checkpointupdateaction)
    - [Dependencies](#dependencies)
      - [Setting up Firestore](#setting-up-firestore)
      - [Set Runtime Arguments](#set-runtime-arguments)
  - [CopyTableAction](#copytableaction)
  - [DropTableAction](#droptableaction)
  - [TruncateTableAction](#truncatetableaction)
- [Putting it all together into a Pipeline](#putting-it-all-together-into-a-pipeline)
  - [CheckPointReadAction](#checkpointreadaction)
  - [TruncateTableAction](#truncatetableaction)
  - [Database source](#database-source)
  - [BigQuery sink](#bigquery-sink)
  - [MergeLastUpdateTSAction](#mergelastupdatetsaction)
  - [CheckPointUpdateAction](#checkpointupdateaction)
- [Building the CDF/CDAP Plugin (JAR file / JSON file) and deploying into CDF/CDAP](#building-the-cdfcdap-plugin-jar-file--json-file-and-deploying-into-cdfcdap)

# Reusable Plugins for Cloud Data Fusion (CDF) / CDAP

## Overview

The CDF/CDAP plugins detailed below can be reused in the context of data pipelines. Let's say you run your incremental pipeline once every 5 minutes. When running an incremental pipeline, you have to filter the records by a specific field (e.g., `lastUpdateDateTime` of records > latest watermark value - buffer time) so it will sync the records that were updated since your last incremental sync. Subsequently, a merge and dedupe step is done to make sure only new/updated records are synced into the destination table.

## `CheckPointReadAction`, `CheckPointUpdateAction`

**Plugin Description**

Creates, reads, and updates checkpoints in incremental pull pipelines.

`CheckPointReadAction` - reads checkpoints in Firestore DB and provides the data during runtime as an environment variable

`CheckPointUpdateAction` - updates checkpoints in Firestore DB (i.e., creates a new document and stores the maximum update date / time from BQ so the next run can use this checkpoint value to filter records that were added since then)

For now these plugins only support timestamp values - in the future, integer values can potentially be added.

### Dependencies

#### Setting up Firestore

1. Set up the Firestore DB
   1. Firestore is used to store / read checkpoints, which are used in your incremental pipelines
1. Create a collection with a document from the parent path /
   1. Collection ID: `PIPELINE_CHECKPOINTS`
   1. Document ID: `INCREMENTAL_DEMO`

   ![image](img/1-create_pipeline_checkpoint_collection.png)
1. Create a collection under Parent path `/PIPELINE_CHECKPOINTS/INCREMENTAL_DEMO`
   1. Collection ID: `CHECKPOINT`
   1. Document ID: just accept what was provided initially
   1. Field #1
      1. Note:
         1. Set to the maximum timestamp from the destination (BQ table)
         1. Set to the minimum timestamp from the source (e.g., SQL Server table) if running for the first time
      1. Field name: `CHECKPOINT_VALUE`
      1. Field type: `string`
      1. Date and time: `2020-05-08 17:21:01`
   1. Field #2
      1. Note: enter the current time in timestamp format
      1. Field name: CREATED_TIMESTAMP
      1. Field type: timestamp
      1. Date and time: 25/08/2020, 15:49

   ![image](img/2-create_check_point_collection.png)

#### Set Runtime Arguments

Before running the pipeline, add the `latestWatermarkValue` variable as a runtime argument (on the Pipeline Studio view, click on the drop-down arrow of the Run button) and set the value = 0:

![image](img/3-runtime_arguments.png)

CheckpointReadAction will populate latestWatermarkValue with the CHECKPOINT_VALUE from Firestore.
The `latestWatermarkValue` runtime argument will be used as a parameter of the import query of the Database Source in a subsequent step:

```sql
SELECT * FROM test WHERE last_update_datetime > '${latestWatermarkValue}'
```

BigQuery - the actual destination table name (this is where the max checkpoint, i.e., the max timestamp, is taken from)

**Use Case**

This plugin can be used at the beginning of an incremental CDAP data pipeline to read the checkpoint value from the last sync. Let's say you run your pipeline once every 5 minutes. When running an incremental pipeline, you have to filter the records by a specific field (e.g., timestamp > current date - 3); a merge and dedupe step is still performed, even though some of the same records may be processed again, to make sure duplicate records do not end up in the destination table.

`CheckPointReadAction` - reads checkpoints in Firestore DB and provides the data during runtime as an environment variable

`CheckPointUpdateAction` - updates checkpoints in Firestore DB (i.e., creates a new document and stores the maximum update date / time from BQ so the next run can use this checkpoint value to filter records that were added since then)

For now these plugins only support timestamp values - in the future, integer values can potentially be added.

**The `CheckpointReadAction` plugin requires the following config properties:**

- Label: plugin label name.
- Specify the collection name in firestore DB: Name of the Collection.
- Specify the document name to read the checkpoint details: Provide the document name specified in the Collection.
- Buffer time to add to checkpoint value. (Note: in Minutes): Number of minutes that need to be subtracted from the Firestore collection value.
- Project: project ID.
- Key path: Service account key file path to communicate with the Firestore DB.

**Please see the following screenshot for example.**

![image](img/4-checkpoint_read_action_plugin_ui.png)

**The `CheckpointUpdateAction` plugin requires the following configuration:**

- Label: plugin label name.
- Specify the collection name in firestore DB: Name of the Collection.
- Specify the document name to read the checkpoint details: Provide the document name specified in the Collection.
- Dataset name where incremental pull table exists: Big Query Dataset name.
- Table name that needs incremental pull: Big Query table name.
- Specify the checkpoint column from incremental pull table: The column whose maximum value is stored as the checkpoint.
- Project: project ID.
- Key path: Service account key file path to communicate with the Firestore DB.

**Please see the below screenshot for example:**

![image](img/5-checkpoint_update_action_plugin_ui.png)

## `CopyTableAction`

**Plugin description**

Copies the BigQuery table from staging to destination at the end of the pipeline run. A new table is created if it doesn't exist. Otherwise, if the table exists, the plugin replaces the existing BigQuery destination table with data from staging.

**Use case**

This is applicable in CDAP data pipelines which do a full import/scan of the data from the source system to BigQuery.

**Dependencies**

- Destination dataset: `bq_dataset`
- Destination table: `bq_table`
- Source dataset: `bq_dataset_batch_staging`
- Source table: `bq_table`

**The `CopyTableAction` plugin requires the following configuration:**

- Label: plugin label name.
- Key path: Service account key file path to call the Big Query API.
- Project ID: GCP project ID.
- Dataset: Big Query dataset name.
- Table Name: Big Query table name.
**Please see the following screenshot for example:**

![image](img/6-copy_table_action_ui.png)

## `DropTableAction`

**Plugin Description**

Drops a BigQuery table at the beginning of the pipeline run.

**Use Case**

Useful to drop staging tables.

**Dependencies**

Requires the BigQuery table that is to be dropped to exist.

**The drop table action plugin requires the following configuration:**

- Label: plugin label name.
- Key path: Service account key file path to call the Big Query API.
- Project ID: GCP project ID.
- Dataset: Big Query dataset name.
- Table Name: Big Query table name.

Please see the following screenshot for an example configuration:

![image](img/7-drop_table_action_ui.png)

## `TruncateTableAction`

**Plugin Description**

Truncates a BigQuery table when pipelines are set up to restore the data from the source.

**Use Case**

Applicable when restoring data pipelines from the source.

**The TruncateTable action plugin requires the following configuration:**

- Label: plugin label name.
- Key path: Service account key file path to call the Big Query API.
- Project ID: GCP project ID.
- Dataset: Big Query dataset name.
- Table Name: Big Query table name.

**Please see the following screenshot for an example configuration:**

![image](img/8-truncate_table_action_ui.png)

# Putting it all together into a Pipeline

`CheckPointReadAction` → `TruncateTableAction` → Database → BigQuery → `MergeLastUpdateTSAction` → `CheckPointUpdateAction`

What does the pipeline do?

1. `CheckPointReadAction` - reads the latest checkpoint from Firestore
1. `TruncateTableAction` - truncates the records in the log table
1. Database Source - imports data from the source
1. BigQuery Sink - exports data into BigQuery from the previous step (database source)
1. `MergeLastUpdateTSAction` - merges based on the timestamp and the update column list (columns to keep in the merge).
   - Note: Alternatively, you can use the [`BigQueryExecute`](https://github.com/data-integrations/google-cloud/blob/develop/src/main/java/io/cdap/plugin/gcp/bigquery/action/BigQueryExecute.java) action to do a Merge (an illustrative SQL sketch of this merge appears at the end of this document).
1. `CheckPointUpdateAction` - updates the checkpoint in Firestore from the max record lastUpdateTimestamp in BigQuery

## Successful run of Incremental Pipeline

![image](img/9-successful_run_incremental_v2_pipeline.png)

### Runtime arguments (set latestWatermarkValue to 0)

![image](img/9-set_runtime_arugments_latestWatermarkValue.png)

### `CheckPointReadAction`

- **Label:** `CheckPointReadAction`
- **Specify the document name to read the checkpoint details\*:** INCREMENTAL_DEMO
- **Buffer time to add to checkpoint value. (Note: in Minutes):** 1
- **project:** `pso-cdf-plugins-287518`
- **serviceFilePath:** auto-detect

**Screenshot:**

![image](img/10-checkpoint_read_action_ui_pipeline_parameters.png)

### `TruncateTableAction`

- **Label:** `TruncateTableAction`
- **Key path\*:** auto-detect
- **ProjectId\*:** `pso-cdf-plugins-287518`
- **Dataset\*:** `bq_dataset`
- **Table name\*:** `bq_table_LOG`

![image](img/11-truncate_table_action_ui_pipeline_parameters.png)

### Database source

- **Label\*:** Database
- **Reference Name\*:** test
- **Plugin Name\*:** sqlserver42
- **Plugin Type:** jdbc
- **Connection String:** jdbc:sqlserver://<fill in IP address of database server>:<db port>;databaseName=main;user=<fill in user>;password=<fill in password>;
- **Import Query:**

```sql
SELECT * FROM test WHERE last_update_datetime > '${latestWatermarkValue}'
```

![image](img/12-database_source_ui_pipeline_parameters.png)

### BigQuery sink

- **Label\*:** BigQuery
- **Reference Name\*:** bq_table_sink
- **Project ID:** `pso-cdf-plugins-287518`
- **Dataset\*:** `bq_dataset`
- **Table\*:** `bq_table_LOG` (write to a temporary table, e.g., bq_table_LOG)
- **Service Account File Path:** auto-detect
- **Schema:**

![image](img/13-bigquery_sink_ui_pipeline_parameters.png)

### `MergeLastUpdateTSAction`

- **Label\*:** `MergeLastUpdateTSAction`
- **Key path\*:** auto-detect
- **Project ID\*:** `pso-cdf-plugins-287518`
- **Dataset name:** `bq_dataset`
- **Table name\*:** `bq_table`
- **Primary key list\*:** id
- **Update columns list\*:** id,name,last_update_datetime

![image](img/14-merge_last_update_ts_action.png)

### `CheckPointUpdateAction`

- **Label\*:** `CheckPointUpdateAction`
- **Specify the collection name in firestore DB\*:** PIPELINE_CHECKPOINTS
- **Specify the document name to read the checkpoint details\*:** INCREMENTAL_DEMO
- **Dataset name where incremental pull table exists\*:** `bq_dataset`
- **Table name that needs incremental pull\*:** `bq_table`
- **Specify the checkpoint column from incremental pull table\*:** last_update_datetime
- **serviceFilePath:** auto-detect
- **project:** `pso-cdf-plugins-287518`

![image](img/15-checkpoint_update_action_ui_pipeline_parameters.png)

## Building the CDF/CDAP Plugin (JAR file / JSON file) and deploying into CDF/CDAP

This plugin requires Java JDK 1.8 and Maven.

1. To build the CDAP / CDF plugin jar, execute the following command at the root of the repository:

   ```bash
   mvn clean compile package
   ```

2. You will find the generated JAR file and JSON file under the target folder:
   1. `GoogleFunctions-1.6.jar`
   1. `GoogleFunctions-1.6.json`
3. Deploy `GoogleFunctions-1.6.jar` and `GoogleFunctions-1.6.json` into CDF/CDAP (note that if you have the same version already deployed then you'll get an error that it already exists):
   1. Go to Control Center
   1. Delete the `GoogleFunctions` artifact if the same version already exists.
   1. Upload the plugin by clicking on the circled green + button
   1. Pick the JAR file / JSON file created under the target folder
   1. You'll see a confirmation of the successful plugin upload
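For readers who want to see what the merge/dedupe step amounts to in SQL, the statement below is an illustrative sketch of a BigQuery MERGE roughly equivalent to what `MergeLastUpdateTSAction` (or a `BigQueryExecute` action) performs for the demo's `bq_dataset.bq_table` / `bq_table_LOG` tables, keyed on `id` with `last_update_datetime` as the checkpoint column. It is not the plugin's actual generated SQL.

```bash
bq query --use_legacy_sql=false '
MERGE `bq_dataset.bq_table` AS target
USING (
  -- Deduplicate the staging (log) table, keeping the latest row per primary key.
  SELECT * EXCEPT(rn) FROM (
    SELECT *, ROW_NUMBER() OVER (PARTITION BY id ORDER BY last_update_datetime DESC) AS rn
    FROM `bq_dataset.bq_table_LOG`
  ) WHERE rn = 1
) AS source
ON target.id = source.id
WHEN MATCHED THEN
  UPDATE SET name = source.name, last_update_datetime = source.last_update_datetime
WHEN NOT MATCHED THEN
  INSERT (id, name, last_update_datetime) VALUES (source.id, source.name, source.last_update_datetime)
'
```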
Reusable Plugins reusable plugins for cloud data fusion cdf cdap Overview overview CheckPointReadAction CheckPointUpdateAction checkpointreadaction checkpointupdateaction Dependencies dependencies Setting up Firestore setting up firestore Set Runtime Arguments set runtime arguments CopyTableAction copytableaction DropTableAction droptableaction TruncateTableAction truncatetableaction Putting it all together into a Pipeline putting it all together into a pipeline CheckPointReadAction checkpointreadaction TruncateTableAction truncatetableaction Database source database source BigQuery sink bigquery sink MergeLastUpdateTSAction mergelastupdatetsaction CheckPointUpdateAction checkpointupdateaction Building the CDF CDAP Plugin JAR file JSON file and deploying into CDF CDAP building the cdfcdap plugin jar file json file and deploying into cdfcdap Reusable Plugins for Cloud Data Fusion CDF CDAP Overview The CDF CDAP plugins detailed below can be reused in the context of data pipelines Let s say you run your incremental pipeline once every 5 minutes When running an incremental pipeline you have to filter the records by a specific field e g lastUpdateDateTime of records latest watermark value buffer time so it will sync the records that were updated since your last incremental sync Subsequently a merge and dedupe step is done to make sure only new updated are synced into the destination table CheckPointReadAction CheckPointUpdateAction Plugin Description Creates reads and updates checkpoints in incremental pull pipelines CheckPointReadAction reads checkpoints in Firestore DB and provides the data during runtime as environment variable CheckPointUpdateAction updates checkpoints in Firestore DB i e creates a new document and stores maximum update date time from BQ so the next run it can use this checkpoint value to filter records that were added since then For now these plugins only support timestamp values in the future integer values can potentially be added Dependencies Setting up Firestore 1 Setup Firestore DB 1 Firestore is used to store read checkpoints which is used in your incremental pipelines 1 Create a collection with a document from the parent path 1 Collection ID PIPELINE CHECKPOINTS 1 Document ID INCREMENTAL DEMO image img 1 create pipeline checkpoint collection png 1 Create a collection under Parent path PIPELINE CHECKPOINTS INCREMENTAL DEMO 1 Collection ID CHECKPOINT 1 Document ID just accept what was provided initially 1 Field 1 1 Note 1 Set to maximum timestamp from destination BQ table 1 Set to minimum timestamp if running for the first time from source e g SQL server table 1 Field name CREATED TIMESTAMP 1 Field type string 1 Date and time 2020 05 08 17 21 01 1 Field 2 1 Note enter the current time in timestamp format 1 Field name CREATED TIMESTAMP 1 Field type timestamp 1 Date and time 25 08 2020 15 49 image img 2 create check point collection png Set Runtime Arguments Before running the pipeline add the lastWatermarkValue variable as runtime argument on Pipeline Studio view click on drop down arrow for Run button and set the value 0 image img 3 runtime arguments png CheckpointReadAction will populate lastWatermarkValue with the CHECKPOINT VALUE from Firestore lastWatermarkValue runtime argument will be used as parameter of the import query of the Database Source in a subsequent step sql SELECT FROM test WHERE last update datetime latestWatermarkValue BigQuery actual destination table name this is where max checkpoint is taken from i e max timestamp Use Case This plugin can be 
used at the beginning of an incremental CDAP data pipeline to read the checkpoint value from the last sync Let s say you run your pipeline once every 5 minutes When running an incremental pipeline you have to filter the records by a specific field timestamp current date current date 3 it is doing merge and dedupe even though we are processing the same records to make sure duplicate records are not in the destination table CheckPointReadAction reads checkpoints in Firestore DB and provides the data during runtime as environment variable CheckPointUpdateAction updates checkpoints in Firestore DB i e creates a new document and stores maximum update date time from BQ so the next run it can use this checkpoint value to filter records that were added since then For now these plugins only support timestamp values in the future integer values can potentially be added CheckpointReadAction plugin requires the following config properties Label plugin label name Specify the collection name in firestore DB Name of the Collection Specify the document name to read the checkpoint details Provide the document name specified in the Collection Buffer time to add to checkpoint value Note in Minutes Number of minutes that need to be subtracted from the Firestore collection value Project project ID Key path Service account key file path to communicate with the Firestore DB Please see the following screenshot for example image img 4 checkpoint read action plugin ui png CheckpointUpdateAction plugin requires the following configuration Label plugin label name Specify the collection name in firestore DB Name of the Collection Specify the document name to read the checkpoint details Provide the document name specified in the Collection Dataset name where incremental pull table exists Big Query Dataset name Table name that needs incremental pull Big Query table name Specify the checkpoint column from incremental pull table Project project ID Key path Service account key file path to communicate with the Firestore DB Please see the below screenshot for example image img 5 checkpoint update action plugin ui png CopyTableAction Plugin description Copies the BigQuery table from staging to destination at the end of the pipeline run A new table is created if it doesn t exist Otherwise if the table exists the plugin replaces the existing BigQuery destination table with data from staging Use case This is applicable in the CDAP data pipelines which do the full import scan the data from source system to BigQuery Dependencies Destination dataset bq dataset Destination table bq table Source dataset bq dataset batch staging Source table bq table CopyTableAction plugin requires the following configuration Label plugin label name Key path Service account key file path to call the Big Query API Project ID GCP project ID Dataset Big Query dataset name Table Name Big Query table name Please see the following screenshot for example image img 6 copy table action ui png DropTableAction Plugin Description Drops a BigQuery table in the beginning of the pipeline runs Use Case Useful to drop staging tables Dependencies Requires BQ table to drop to exist Drop table action plugin requires the following configuration Label plugin label name Key path Service account key file path to call the Big Query API Project ID GCP project ID Dataset Big Query dataset name Table Name Big Query table name Please see the following screenshot for example configuration image img 7 drop table action ui png TruncateTableAction Plugin Description Truncates a 
## DropTableAction

**Plugin Description:** Drops a BigQuery table at the beginning of the pipeline runs.

**Use Case:** Useful to drop staging tables.

**Dependencies:** Requires the BQ table to drop to exist.

The drop table action plugin requires the following configuration:

- Label: plugin label name
- Key path: service account key file path to call the BigQuery API
- Project ID: GCP project ID
- Dataset: BigQuery dataset name
- Table Name: BigQuery table name

Please see the following screenshot for an example configuration:

![image](img/7_drop_table_action_ui.png)

## TruncateTableAction

**Plugin Description:** Truncates a BigQuery table when we set pipelines to restore the data from the source.

**Use Case:** Applicable in pipelines that restore data from the source.

The TruncateTable action plugin requires the following configuration:

- Label: plugin label name
- Key path: service account key file path to call the BigQuery API
- Project ID: GCP project ID
- Dataset: BigQuery dataset name
- Table Name: BigQuery table name

Please see the following screenshot for an example configuration:

![image](img/8_truncate_table_action_ui.png)

## Putting it all together into a Pipeline

CheckPointReadAction -> TruncateTableAction -> Database -> BigQuery -> MergeLastUpdateTSAction -> CheckPointUpdateAction

What does the pipeline do?

1. CheckPointReadAction reads the latest checkpoint from Firestore.
1. TruncateTableAction truncates the records in the log table.
1. The Database Source imports data from the source.
1. The BigQuery Sink exports data into BigQuery from the previous step (database source).
1. MergeLastUpdateTSAction merges based on the timestamp and the update column list (columns to keep in the merge). Note: alternatively, you can use the [BigQueryExecute](https://github.com/data-integrations/google-cloud/blob/develop/src/main/java/io/cdap/plugin/gcp/bigquery/action/BigQueryExecute.java) action to do a Merge (a sketch of an equivalent merge statement is shown after this section).
1. CheckPointUpdateAction updates the checkpoint in Firestore from the max record lastUpdateTimestamp in BigQuery.

Successful run of the incremental pipeline:

![image](img/9_successful_run_incremental_v2_pipeline.png)

Runtime arguments: set latestWatermarkValue to 0.

![image](img/9_set_runtime_arugments_latestWatermarkValue.png)

### CheckPointReadAction

- Label: CheckPointReadAction
- Specify the document name to read the checkpoint details: INCREMENTAL_DEMO
- Buffer time to add to checkpoint value (Note: in minutes): 1
- project: pso-cdf-plugins-287518
- serviceFilePath: auto-detect

Screenshot:

![image](img/10_checkpoint_read_action_ui_pipeline_parameters.png)

### TruncateTableAction

- Label: TruncateTableAction
- Key path: auto-detect
- ProjectId: pso-cdf-plugins-287518
- Dataset: bq_dataset
- Table name: bq_table_LOG

![image](img/11_truncate_table_action_ui_pipeline_parameters.png)

### Database source

- Label: Database
- Reference Name: test
- Plugin Name: sqlserver42
- Plugin Type: jdbc
- Connection String: jdbc:sqlserver://[fill in IP address of database server]:[db port];databaseName=main
- user: [fill in user]
- password: [fill in password]
- Import Query:

```sql
SELECT * FROM test WHERE last_update_datetime > '${latestWatermarkValue}'
```

![image](img/12_database_source_ui_pipeline_parameters.png)

### BigQuery sink

- Label: BigQuery
- Reference Name: bq_table_sink
- Project ID: pso-cdf-plugins-287518
- Dataset: bq_dataset
- Table (write to a temporary table, e.g. bq_table_LOG): bq_table_LOG
- Service Account File Path: auto-detect
- Schema

![image](img/13_bigquery_sink_ui_pipeline_parameters.png)

### MergeLastUpdateTSAction

- Label: MergeLastUpdateTSAction
- Key path: auto-detect
- Project ID: pso-cdf-plugins-287518
- Dataset name: bq_dataset
- Table name: bq_table
- Primary key list: id
- Update columns list: id,name,last_update_datetime

![image](img/14_merge_last_update_ts_action.png)

### CheckPointUpdateAction

- Label: CheckPointUpdateAction
- Specify the collection name in firestore DB: PIPELINE_CHECKPOINTS
- Specify the document name to read the checkpoint details: INCREMENTAL_DEMO
- Dataset name where incremental pull table exists: bq_dataset
- Table name that needs incremental pull: bq_table
- Specify the checkpoint column from incremental pull table: last_update_datetime
- serviceFilePath: auto-detect
- project: pso-cdf-plugins-287518

![image](img/15_checkpoint_update_action_ui_pipeline_parameters.png)
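If you take the BigQueryExecute route mentioned in the pipeline steps above, the merge/dedupe performed by MergeLastUpdateTSAction can be approximated with a statement like the one below. The table, key, and column names follow the example configuration above but are still assumptions; adapt them to your schema.

```bash
bq query --use_legacy_sql=false '
MERGE `bq_dataset.bq_table` T
USING (
  -- Keep only the newest version of each row from the staging/log table.
  SELECT * EXCEPT(row_num) FROM (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY id ORDER BY last_update_datetime DESC) AS row_num
    FROM `bq_dataset.bq_table_LOG`
  )
  WHERE row_num = 1
) S
ON T.id = S.id
WHEN MATCHED THEN
  UPDATE SET name = S.name, last_update_datetime = S.last_update_datetime
WHEN NOT MATCHED THEN
  INSERT (id, name, last_update_datetime) VALUES (S.id, S.name, S.last_update_datetime)
'
```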
## Building the CDF/CDAP Plugin JAR file, JSON file and deploying into CDF/CDAP

This plugin requires Java JDK 1.8 and Maven.

1. To build the CDAP/CDF plugin jar, execute the following command in the root directory:

   ```bash
   mvn clean compile package
   ```
2. You will find the generated JAR file and JSON file under the `target` folder:
   - GoogleFunctions-1.6.jar
   - GoogleFunctions-1.6.json
3. Deploy GoogleFunctions-1.6.jar and GoogleFunctions-1.6.json into CDF/CDAP (note that if you have the same version already deployed, you'll get an error that it already exists):
   1. Go to the Control Center.
   1. Delete the GoogleFunctions artifact if the same version already exists.
   1. Upload the plugin by clicking on the circled green button.
   1. Pick the JAR file / JSON file created under the `target` folder.
   1. You'll see a confirmation of the successful plugin upload.
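As an alternative to the Control Center upload, CDAP also exposes an Artifact HTTP RESTful API. The sketch below is only an outline: the endpoint path and the `Artifact-Version`/`Artifact-Extends` headers come from the CDAP microservices documentation, but the instance URL, version string, and parent artifact range are assumptions you should verify against the generated GoogleFunctions-1.6.json.

```bash
CDAP_ENDPOINT="https://your-cdf-instance-api-endpoint"   # assumption: your CDF/CDAP API endpoint
AUTH_TOKEN="$(gcloud auth print-access-token)"

# Upload the plugin JAR as a new artifact version (headers are assumptions; adjust as needed).
curl -X POST "${CDAP_ENDPOINT}/v3/namespaces/default/artifacts/GoogleFunctions" \
  -H "Authorization: Bearer ${AUTH_TOKEN}" \
  -H "Artifact-Version: 1.6.0" \
  -H "Artifact-Extends: system:cdap-data-pipeline[6.0.0,7.0.0)" \
  --data-binary @target/GoogleFunctions-1.6.jar
```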
# Terraform for Deploying a KAS agent in a GKE cluster

This repository provides Terraform code for deploying a KAS agent in a GKE cluster, to connect it with a Gitlab repository to automatically deploy, manage, and monitor your cloud-native solutions using GitOps practices. This creates resources in your cluster to deploy an agent that communicates with Gitlab to synchronize deployments.

Here is a [link](https://about.gitlab.com/blog/2021/09/10/setting-up-the-k-agent/) to a guide that explains the manual steps for making this configuration and an overview of the solution. More resource links are provided in [this](#references-and-public-docs) section.

## Brief explanation

The GitLab Kubernetes Agent (KAS) is a tool that helps you deploy your code to a Google Kubernetes Engine (GKE) cluster using GitOps practices. It does this by using a YAML configuration file that you create in your repository. It handles automated deployment of services after the images have been built by a CI/CD pipeline from source code.

The flow diagram below illustrates the relationship between the two processes. Note that in this example the `Manifest Repository` and the `Agent configuration repository` are the same; in production they can be hosted in different GitLab repos. This example deploys a dummy `nginx` image to decouple the Application CI/CD step.

![Gitlab KAS flow diagram](docs/gitlab-kas-flow-diag.png)

The Gitlab agent created in the Gitlab project is connected to the agentk service running in the cluster. When there is a change in the manifest file describing the deployment, the agent in Gitlab pulls the changes and invokes the connected service running in the GKE cluster to apply the declarative changes to the Kubernetes resources. The Kubernetes Service Account federating the deployment is governed by RBAC policies with limited permissions to the cluster configmap and the product namespaces.

![KAS Agent connection](docs/kas-agent-connection.png)

## How to setup

- Create a new project in Gitlab
- [Create a personal token](https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html) in Gitlab, and set the environment variable `GITLAB_TOKEN` to be used by Terraform

  ```
  export GITLAB_TOKEN=<token value>
  ```

  Note that in production you'd want to use a [*Runner authentication token* or a *CI/CD job token*](https://docs.gitlab.com/ee/security/token_overview.html#runner-authentication-tokens-also-called-runner-tokens) depending on your IaC pipeline strategy.
- Set up a [GKE cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-zonal-cluster) that can egress to the public internet to communicate with a Gitlab agent.
- Use a personal or service account that has the `roles/container.admin` permission.
- Create the directory `manifests/` in your Gitlab project
- Create a tfvars file from the [sample](./terraform.tfvars.sample) file
- Run terraform init and apply to deploy resources

  ```
  terraform init
  terraform apply
  ```
- Add a YAML file in the `manifests/` directory (a sample is provided below) to add a deployment, and commit the change.
- Ensure your namespace for the product has the correct deployment
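For reference, the GitOps configuration that Terraform renders from `templates/agent-config.yaml.tpl` and commits to the Gitlab project typically looks like the minimal sketch below; the agent name, project path, and glob are assumptions, so compare with the rendered file in your project.

```
# Sketch of the agent's GitOps config committed to the repo
# (.gitlab/agents/<agent name>/config.yaml); values are placeholders.
mkdir -p .gitlab/agents/<agent name>
cat > .gitlab/agents/<agent name>/config.yaml <<'EOF'
gitops:
  manifest_projects:
  - id: <user/org>/test-gitlab-kas-gke
    paths:
    - glob: 'manifests/**/*.yaml'
EOF
```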
## Useful commands

```
# Get namespaces. Check that the 2 namespaces for the product and gitlab-agent exist
kubectl get ns

# Check pods in the gitlab kas namespace
kubectl get pods -n <gitlab kas namespace name>

# Check pods in the product namespace
kubectl get pods -n <product namespace name>

# Check logs in the kas agent pod for synchronization
kubectl logs <kas agent pod name> -n <gitlab kas namespace name>
```

## Resources created

See [terraform-docs.md](./terraform-docs.md) for details. Here is a summary:

**Gitlab Resources**
- Agent instance in the Gitlab project to poll configuration changes in deployment manifests
- Agent config file in the Gitlab project, based on the template in `templates/agent-config.yaml.tpl`

**Product K8s Resources**
- Sample product namespace

**Agentk K8s Resources**
- Namespace for the agentk image in the cluster
- KAS agent client deployed through a Helm chart
- Kubernetes Service Account used by the agent to manage deployments in the product namespace
- RBAC roles for the KSA to read/write the configMap of the cluster
- RBAC roles for the KSA to read/write any k8s resources in the product namespace

## Considerations

- Set up a remote backend in the provider
- Configure a service account with appropriate permissions in providers.tf
- This example assumes the SaaS offering of the Gitlab KAS server is used ("wss://kas.gitlab.com"). For a self-managed server, the endpoint will be different.
- The product namespace is created here as an example. Remove it for your implementation.
- Networking and proxy settings between the cluster and the Gitlab agent can be configured in the Helm chart values. A documentation reference is provided in `main.tf`.
- This solution assumes that the product namespaces are created separately. Before adding a manifest YAML file for a new namespace, ensure that the namespace is created.
- Host the agentk image in an Artifact Registry and set the reference to that in the variables, rather than pulling it from Gitlab's registry.

## Variables to configure (Example)

```
project_id = "gitlab-kas-gke"
cluster_name = "gitlab-kas-agent-cluster"
cluster_location = "us-central1-c"
gitlab_repo_name = "<user/org>/test-gitlab-kas-gke"
product_name = "test-kas"
agentk_image_tag = "v15.9.0-rc1"
```

## Sample deployment.yaml

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-test
  namespace: test-kas # Make sure this matches the product's namespace
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
```

## [References and public docs](#references-and-public-docs)

- [Using GitOps with a Kubernetes cluster](https://docs.gitlab.com/ee/user/clusters/agent/gitops.html)
- [How to deploy the GitLab Agent for Kubernetes with limited permissions](https://about.gitlab.com/blog/2021/09/10/setting-up-the-k-agent/)
- [Troubleshooting](https://docs.gitlab.com/ee/user/clusters/agent/troubleshooting.html)
- [Installing the agent for Kubernetes](https://docs.gitlab.com/ee/user/clusters/agent/install)
- [Working with the agent for Kubernetes](https://docs.gitlab.com/ee/user/clusters/agent/work_with_agent.html)
# Redacting Sensitive Data Using the DLP API

This example illustrates how to use the DLP API in a Cloud Function to redact sensitive data from log exports. The scrubbed logs will then be posted to a Pub/Sub topic to be ingested elsewhere.

## Getting Started

These instructions will walk you through setting up your environment to do the following:

* Export logs to a Pub/Sub log export
* Deploy a Cloud Function that subscribes to the log export
* Write scrubbed logs to a Pub/Sub topic

### Prerequisites

Ensure that you have the [Google Cloud SDK](https://cloud.google.com/sdk/install) installed and authenticated to the project you want to deploy the example to.

### Enable Required APIs

The Cloud Functions, Pub/Sub, and DLP APIs will all need to be enabled for this example to work properly.

```
gcloud services enable cloudfunctions.googleapis.com pubsub.googleapis.com dlp.googleapis.com
```

### Pub/Sub

Pub/Sub will be used to facilitate the transfer of logs from Stackdriver to the Cloud Function for processing. Once the logs are scrubbed, they will be sent to another Pub/Sub topic for final consumption.

Define the Pub/Sub topic and subscription names.

```
export LOG_EXPORT_TOPIC_NAME=log-export
export DLP_SCRUBBED_TOPIC_NAME=scrubbed-log-export
export LOG_EXPORT_NAME=log-export-destination
export DLP_SCRUBBED_SUBSCRIPTION_NAME=scrubbed-log-export-subscription
```

Create the Pub/Sub topics and subscription.

```
gcloud pubsub topics create $LOG_EXPORT_TOPIC_NAME

gcloud pubsub topics create $DLP_SCRUBBED_TOPIC_NAME

gcloud pubsub subscriptions create $DLP_SCRUBBED_SUBSCRIPTION_NAME \
    --topic $DLP_SCRUBBED_TOPIC_NAME
```

### Stackdriver Log Export

A log export will be created in Stackdriver that sends all "global" logs to the first Pub/Sub topic we created.

Create the export.

```
export PROJECT_ID=$(gcloud config get-value project)

gcloud logging sinks create $LOG_EXPORT_NAME \
    pubsub.googleapis.com/projects/$PROJECT_ID/topics/$LOG_EXPORT_TOPIC_NAME \
    --log-filter resource.type="global"
```

Give Pub/Sub topic writer permissions to the service account being used for the log export.

```
export SERVICE_ACCOUNT=$(gcloud logging sinks describe $LOG_EXPORT_NAME --format="value(writerIdentity)")

gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member $SERVICE_ACCOUNT \
    --role roles/pubsub.publisher
```

### Cloud Function

Cloud Functions will be used to call the DLP API to scrub the log content, then post the output to a new Pub/Sub topic.

Clone the professional services repo.

```
git clone https://github.com/GoogleCloudPlatform/professional-services.git
```

Deploy the Cloud Function.

```
gcloud functions deploy dlp-log-scrubber \
    --runtime python37 \
    --trigger-topic $LOG_EXPORT_TOPIC_NAME \
    --entry-point process_log_entry \
    --set-env-vars OUTPUT_TOPIC_NAME=$DLP_SCRUBBED_TOPIC_NAME \
    --source professional-services/examples/dlp/cloud_function_example/cloud_function/.
```

Wait a few minutes for the Cloud Function to deploy, then you can proceed to test.
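Under the hood, the scrubbing the function performs is conceptually similar to calling the DLP `content:deidentify` method directly. A minimal sketch, assuming the same three infotypes used in this example (the function in the repo may build its request differently):

```
export PROJECT_ID=$(gcloud config get-value project)

curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://dlp.googleapis.com/v2/projects/$PROJECT_ID/content:deidentify" \
  -d '{
    "item": {"value": "user: test.email@test.com, visa: 1111-2222-3333-4444, DOB: 01/22/2019"},
    "inspectConfig": {"infoTypes": [
      {"name": "EMAIL_ADDRESS"}, {"name": "CREDIT_CARD_NUMBER"}, {"name": "DATE_OF_BIRTH"}
    ]},
    "deidentifyConfig": {"infoTypeTransformations": {"transformations": [
      {"primitiveTransformation": {"replaceWithInfoTypeConfig": {}}}
    ]}}
  }'
```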
### Testing the DLP API

Now that we have set up our Stackdriver log export, Pub/Sub, and Cloud Function, we can proceed to test the DLP API.

First, write a log entry to Stackdriver that will be picked up by the log export we created.

```
gcloud logging write my-test-log \
    '{ "message": "user: test.email@test.com, visa: 1111-2222-3333-4444, DOB: 01/22/2019"}' \
    --payload-type=json
```

You will see the log entry under Logging > Logs if you view the "Global" log type.

![Stackdriver Log Screenshot](img/stackdriver_log_img.png)

Now that we have written the log to Stackdriver, the log export to Pub/Sub should have triggered our Cloud Function. To verify, we can change the log viewer to filter for our Cloud Function logs.

![Stackdriver Cloud Function Screenshot](img/stackdriver_cloud_function_log.png)

Finally, let's grab the Cloud Function output from the Pub/Sub subscription we created.

```
gcloud pubsub subscriptions pull --auto-ack $DLP_SCRUBBED_SUBSCRIPTION_NAME
```

The output will show the same log message with the email address, visa card number, and date of birth removed.

``"jsonPayload": {"message": "user: [EMAIL_ADDRESS], visa: [CREDIT_CARD_NUMBER], DOB: [DATE_OF_BIRTH]"}``

This example demonstrates the identification and replacement of a small number of data types (EMAIL_ADDRESS, CREDIT_CARD_NUMBER, and DATE_OF_BIRTH); however, the DLP API supports many more, which can be found [here](https://cloud.google.com/dlp/docs/infotypes-reference).
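If you want to list every built-in infotype from the command line instead of the documentation page, the DLP API exposes a read-only endpoint for that:

```
curl -s -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://dlp.googleapis.com/v2/infoTypes"
```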
# Contents

- [Multi-Cluster ASM on Private Clusters](./infrastructure): Anthos Service Mesh (ASM) for multiple GKE clusters, using Terraform
- [Twistlock PoC](./twistlock): Pod traffic security scanning, using ASM, Docker and Google Artifact Registry (GAR)
- [Cloud SQL for PostgreSQL PoC](./postgres): Connecting GKE clusters and ASM to an external database

# Multi-Cluster ASM on Private Clusters

## Documentation

Here are several reference documents if you encounter an issue when following the instructions below:

- [Installing ASM using Anthos CLI](https://cloud.google.com/service-mesh/docs/gke-anthos-cli-existing-cluster)
- [Installing ASM using IstioCtl](https://cloud.google.com/service-mesh/docs/gke-install-existing-cluster)
- [Adding clusters to an Anthos Service Mesh](https://cloud.google.com/service-mesh/docs/gke-install-multi-cluster)

## Description

[Adding clusters to an Anthos Service Mesh](https://cloud.google.com/service-mesh/docs/gke-install-multi-cluster) shows how to federate the service meshes of two Anthos **public** clusters. However, it misses a key instruction: opening the firewall for the service port to the remote cluster. So, your final test of HelloWorld might not work.

This sample builds on the topic of Google's official Anthos Service Mesh installation documents, and adds instructions on how to federate two private clusters, which is more likely in real-world environments.

As illustrated in the diagram below, we will create a VPC with three subnets. Two subnets are for the private clusters, and one is for GCE servers. So, we illustrate using a bastion server to access the private clusters as in a real environment.

![NetworkImage](./asm-private-multiclusters-intranet.png)

The clusters are not accessible from an external network. Users can only log into the bastion server via an IAP tunnel to gain access to this VPC. A firewall rule is built to allow IAP tunneling into the GCE subnet (Subnet C) only. For the bastion server in Subnet C to access the Kubernetes APIs of both private clusters, Subnet C's CIDR range is added to the "_GKE Control Plane Authorized Network_" of both clusters. This is illustrated as blue lines and yellow underscore lines in the diagram above.

Also, in order for both clusters to access the service mesh (Istiod) and the services deployed on the other cluster, we need to do the following (see the example commands after this section):

- The pod CIDR range of one cluster must be added to the "_GKE Control Plane Authorized Network_" of the other cluster. This enables one cluster to ping _istiod_ on the other cluster.
- The firewall needs to be open for one cluster's pod CIDR to access the service port on the other cluster. In this sample, it is port 5000, used by the HelloWorld testing application. Because the invocation of the service is bidirectional in the HelloWorld testing application, we will add firewall rules for each direction.

The infrastructure used in this sample is coded in Terraform scripts. The ASM installation steps are coded in a shell script.
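Concretely, the two items above translate into commands like the following. In this sample the equivalent rules are created by the Terraform scripts; the cluster names, CIDR ranges, and VPC name below are placeholders:

```
# Add the bastion subnet and the remote cluster's pod CIDR to the
# control plane authorized networks (the list replaces any existing entries).
gcloud container clusters update cluster-1 \
  --zone us-central1-a \
  --enable-master-authorized-networks \
  --master-authorized-networks 10.0.3.0/24,10.124.0.0/17

# Open the HelloWorld service port (5000) to the remote cluster's pod CIDR.
gcloud compute firewall-rules create allow-remote-pods-helloworld \
  --network example-vpc \
  --direction INGRESS \
  --allow tcp:5000 \
  --source-ranges 10.124.0.0/17
```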
## Prerequisites

As mentioned in [Add GKE clusters to Anthos Service Mesh](https://cloud.google.com/service-mesh/docs/gke-install-multi-cluster), there are several prerequisites. This guide assumes that you have:

- [A Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects).
- [A Cloud Billing account](https://cloud.google.com/billing/docs/how-to/manage-billing-account).

Also, the multi-cluster configuration has these requirements for the clusters in it:

- All clusters must be on the same network. **NOTE:** ASM 1.7 does [not support multiple networks](https://cloud.google.com/service-mesh/docs/supported-features#platform_environment), even peered ones.
- If you join clusters that are not in the same project, they must be installed using the `asm-gcp-multiproject` profile, and the clusters must be in a shared VPC configuration together on the same network. In addition, we recommend that you have one project to host the shared VPC, and two service projects for creating clusters. For more information, see [Setting up clusters with Shared VPC](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-shared-vpc).

In this sample, we create two private clusters in different subnets of the same VPC in the same project, and enable the clusters to communicate with each other's API server.
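A quick sanity check for the same-network requirement above is to compare the network each cluster reports; both commands should print the same VPC name (cluster names and zones are placeholders):

```
gcloud container clusters describe cluster-1 --zone us-central1-a --format="value(network)"
gcloud container clusters describe cluster-2 --zone us-central1-b --format="value(network)"
```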
## How to set up and run this sample

### Build Infrastructure

1. Create a GCP project.
2. Create a VPC in the GCP project.
3. Create a subnet in the VPC.
4. Create a VM in the subnet. This will be the bastion server used to simulate intranet access to the GKE clusters.
   - This step is now done by Terraform, in file [infrastructure/bastion.tf](./infrastructure/bastion.tf)
   - The bastion host is used for interaction with the GKE clusters
   - For this demo, we ran Terraform from a local machine, not from the bastion host
   - **Note:** you will have to manually [create a Google Cloud firewall rule](https://cloud.google.com/vpc/docs/using-firewalls) to allow connection to the bastion server via SSH (port 22). We did not automate this for security reasons.
5. Set up Git on your local machine, then clone this GitHub sample. Also clone this GitHub sample onto the bastion server.
6. Set up [Terraform](https://learn.hashicorp.com/terraform/getting-started/install.html) on your local machine, so you will be able to build the infrastructure.
7. On your local machine, update the corresponding parameters for your project.
   - In `vars.sh`, check to see whether you need to update `CLUSTER1_LOCATION`, `CLUSTER1_CLUSTER_NAME`, `CLUSTER1_CLUSTER_CTX`, `CLUSTER2_LOCATION`, `CLUSTER2_CLUSTER_NAME`, `CLUSTER2_CLUSTER_CTX`.
   - In [infrastructure/terraform.example.tfvars](./infrastructure/terraform.example.tfvars), rename this file to terraform.tfvars and update "project_id" and "billing_account".
   - In [infrastructure/shared.tf](./infrastructure/shared.tf), check whether you need to update "project_prefix" and "region".
   - **[OPTIONAL]** In the locals section of _infrastructure/shared.tf_, update the CIDR ranges for bastion_cidr and existing_vpc if you need to.
   - Source _vars.sh_ to set up basic environment variables.
     ```
     source vars.sh
     ```
8. If you want to run Terraform in your own workspace, create a `backend.tf` file from _infrastructure/backend.tf_tmpl_, and update your Terraform workspace information in this file.
9. Under the "_infrastructure_" directory, run
   - terraform init
     ```
     terraform init
     ```
   - terraform plan
     ```
     terraform plan -out output.tftxt
     ```
     **NOTE:** You may get an error that the Compute Engine API has not been used before in the project. In this case, please [manually enable the Compute Engine API](https://console.cloud.google.com/apis/library/compute.googleapis.com)
     ```
     Error: Error when reading or editing GCE default service account: googleapi: Error 403: Compute Engine API has not been used in project XXXXXXXXXXX before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/compute.googleapis.com/overview then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry., accessNotConfigured
     ```
   - terraform apply
     ```
     terraform apply output.tftxt
     ```
   - If Terraform completes without error, you should have a VPC, a NAT, a bastion server, two private clusters and firewall rules. Please check all artifacts in the GCP Console.
10. SSH onto the bastion server.
11. Make sure you have the following tools installed:
    - The Cloud SDK (the gcloud command-line tool)
    - The standard command-line tools: awk, curl, grep, sed, sha256sum, and tr
    - git
    - kpt
    - kubectl
    - jq

### Install ASM

1. On the bastion server, go to this source code directory, then source _vars.sh_.

   **NOTE:** Make sure you manually [create a Google Cloud firewall rule](https://cloud.google.com/vpc/docs/using-firewalls) to allow SSH connections to your bastion server over port 22.

   ```
   cd asm-private-multiclusters-intranet
   source vars.sh
   ```
2. Source _scripts/main.sh_
   ```
   source scripts/main.sh
   ```
3. Run install_asm_mesh
   ```
   install_asm_mesh
   ```
   or, you can run the commands in install_asm_mesh step by step manually:
   ```
   # Navigate to your working directory. Binaries will be downloaded to this directory.
   cd ${WORK_DIR}

   # Set up K8s config and context
   set_up_credential ${CLUSTER1_CLUSTER_NAME} ${CLUSTER1_LOCATION} ${CLUSTER1_CLUSTER_CTX} ${TF_VAR_project_id}
   set_up_credential ${CLUSTER2_CLUSTER_NAME} ${CLUSTER2_LOCATION} ${CLUSTER2_CLUSTER_CTX} ${TF_VAR_project_id}

   # Download ASM Installer
   download_asm_installer ${ASM_MAJOR_VER} ${ASM_MINOR_VER}

   # Install ASM
   install_asm ${CLUSTER1_CLUSTER_NAME} ${CLUSTER1_LOCATION} ${TF_VAR_project_id}
   install_asm ${CLUSTER2_CLUSTER_NAME} ${CLUSTER2_LOCATION} ${TF_VAR_project_id}

   # Register clusters
   grant_role_to_connect_agent ${TF_VAR_project_id}
   register_cluster ${CLUSTER1_CLUSTER_CTX} ${CLUSTER1_LOCATION}
   register_cluster ${CLUSTER2_CLUSTER_CTX} ${CLUSTER2_LOCATION}

   # Add clusters to mesh
   cross_cluster_service_secret ${CLUSTER1_CLUSTER_NAME} ${CLUSTER1_CLUSTER_CTX} ${CLUSTER2_CLUSTER_CTX}
   cross_cluster_service_secret ${CLUSTER2_CLUSTER_NAME} ${CLUSTER2_CLUSTER_CTX} ${CLUSTER1_CLUSTER_CTX}
   ```

### Deploy test helloworld application

Run install_test_app
```
install_test_app
```

### Prepare for verification

```
export CTX1=$CLUSTER1_CLUSTER_CTX
export CTX2=$CLUSTER2_CLUSTER_CTX
```

Follow the instructions in the "**Verify cross-cluster load balancing**" section of [Add clusters to an Anthos Service Mesh](https://cloud.google.com/service-mesh/docs/gke-install-multi-cluster) to verify.

**Please Note:** You don't need to install the `Helloworld` application; it has already been installed for you.

## Internal Load Balancer

Anthos ASM deploys the ingress gateway using an external load balancer by default. If we need to change the ingress gateway to use an internal load balancer, we can use the `--option` or `--custom-overlay` parameter along with our load balancer YAML (./istio-profiles/internal-load-balancer.yaml). Please note that we need to specify our "targetPort" for the https and http2 ports for the current ASM version.

# Twistlock PoC

- Pod traffic security scanning, using ASM, Docker and Google Artifact Registry (GAR)
- Please see the [twistlock folder readme](./twistlock)

# Cloud SQL for PostgreSQL PoC

- Connecting GKE clusters and ASM to a database that is external to the Kubernetes clusters
- Please see the [postgres folder readme](./postgres)
# Auto Encrypt PostgreSQL SSL Connection Using Istio Proxy Sidecar

### Summary

PostgreSQL uses application-level protocol negotiation for SSL connections. Istio Proxy currently uses TCP-level protocol negotiation, so the Istio Proxy sidecar errors out during the SSL handshake when it tries to auto-encrypt the connection with PostgreSQL. In this article, we document how to reproduce this issue.

### Prerequisites

* Enforce SSL connection on the Cloud SQL PostgreSQL instance.
* Create a client certificate and download the client certificate, client key and server certificate. We will use them in the client container for testing without sidecar auto-encryption, and mount them into the Istio Proxy sidecar for sidecar auto-encryption.
* Add the K8s node IPs to the Authorized Networks of the PostgreSQL instance. Or, we can add "0.0.0.0/0" to allow client connections from any IP address for testing purposes.

### Build Container

Use the Dockerfile to build a testing PostgreSQL client container image. We package the certificates into the Docker image just for initial connection testing. **They are not needed in the client container for sidecar auto-encryption**.

### Test Direct SSL Connection

Deploy a PostgreSQL client without any sidecar. Please note that we turn off sidecar injection in the YAML file, even though we have labeled our namespace for Istio sidecar auto-injection.

```
kubectl apply -f postgres-plain.yaml -n <YOUR_NAMESPACE>
```

Run the following command to make sure the SSL connection works.

```
# Enter into postgres-plain Pod
kubectl exec -it deploy/postgres-plain -n <YOUR_NAMESPACE> -- /bin/bash

# Once in the Pod, run this psql command with SSL mode
psql "sslmode=verify-ca sslrootcert=server-ca.pem \
      sslcert=client-cert.pem sslkey=client-key.pem \
      hostaddr=YOUR_POSTGRESQL_IP \
      port=5432 \
      user=YOUR_USERNAME dbname=YOUR_DB_NAME"

# Enter your password when it is prompted
```

You should see something like this:

```
psql (12.5, server 12.4)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.
```

Now, run the `psql` command in Non-SSL mode

```
psql "hostaddr=YOUR_POSTGRESQL_IP port=5432 user=YOUR_USERNAME dbname=YOUR_DB_NAME"
```

You should see the error message below. This proves that a Non-SSL connection doesn't work.

```
psql: error: FATAL: connection requires a valid client certificate
FATAL: pg_hba.conf rejects connection for host "35.235.65.143", user "postgres", database "postgres", SSL off
```

### Deploy Istio Proxy Sidecar

#### Create K8s secret for the certificates

We will mount our PostgreSQL certificates into the Istio Proxy sidecar. In order to achieve this, we need to upload the certificates as a K8s secret.

```
kubectl create secret generic postgres-cert --from-file=certs/client-cert.pem --from-file=certs/client-key.pem --from-file=certs/server-ca.pem
```

#### Mount the certificates into Istio Proxy sidecar

We mount our PostgreSQL certificates into the Istio Proxy sidecar via the following annotations.

```
sidecar.istio.io/userVolume: '[{"name":"postgres-cert", "secret":{"secretName":"postgres-cert"}}]'
sidecar.istio.io/userVolumeMount: '[{"name":"postgres-cert", "mountPath":"/etc/certs/postgres-cert", "readonly":true}]'
```

#### Configure Sidecar certificates

We use a [Service Entry](https://istio.io/latest/docs/reference/config/networking/service-entry/) and a [Destination Rule](https://istio.io/latest/docs/reference/config/networking/destination-rule/) to instruct the Istio Proxy sidecar to auto-encrypt network traffic to the specified database host and port with **SIMPLE** TLS. The detailed comments can be found in the `destination-rule.yaml` and `service-entry.yaml` source code.

Also, here is how we instruct the Istio Proxy sidecar to use our certificates for encryption.

```
clientCertificate: /etc/certs/postgres-cert/client-cert.pem
privateKey: /etc/certs/postgres-cert/client-key.pem
caCertificates: /etc/certs/postgres-cert/server-ca.pem
```
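For orientation, the two resources look roughly like the sketch below. The host name, database IP, and TLS mode are assumptions (MUTUAL is shown because client certificates are presented); the `service-entry.yaml` and `destination-rule.yaml` files in this folder are the source of truth.

```
cat <<'EOF' | kubectl apply -n <YOUR_NAMESPACE> -f -
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-postgres
spec:
  hosts:
  - postgres.external.example.com   # logical name for the external database
  addresses:
  - <YOUR_POSTGRESQL_IP>/32
  ports:
  - number: 5432
    name: tcp-postgres
    protocol: TCP
  location: MESH_EXTERNAL
  resolution: NONE
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: external-postgres
spec:
  host: postgres.external.example.com
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/certs/postgres-cert/client-cert.pem
      privateKey: /etc/certs/postgres-cert/client-key.pem
      caCertificates: /etc/certs/postgres-cert/server-ca.pem
EOF
```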
Deploy both YAML files to your namespace.

```
kubectl apply -f destination-rule.yaml -n <YOUR_NAMESPACE>
kubectl apply -f service-entry.yaml -n <YOUR_NAMESPACE>
```

#### Deploy PostgreSQL client with sidecar injection

Run this command to deploy the PostgreSQL client with Istio Proxy sidecar injection.

```
kubectl apply -f postgres-istio.yaml -n <YOUR_NAMESPACE>
```

#### Run `psql` command in Non-SSL mode

```
psql "hostaddr=YOUR_POSTGRESQL_IP port=5432 user=YOUR_USERNAME dbname=YOUR_DB_NAME"
```

You should be prompted for a password. You will see errors.

#### Look into the Istio Proxy log

You can read the logs in Cloud Logging. However, you may want to view the sidecar log messages for detailed network traffic information and errors with the following command.

```
kubectl logs deploy/postgres-istio -c istio-proxy -n <YOUR_NAMESPACE>
```
# PostgreSQL Auto SSL Connection Using Cloud SQL Proxy

## Summary

PostgreSQL uses application-level protocol negotiation for SSL connections. Istio Proxy currently uses TCP-level protocol negotiation, so the Istio Proxy sidecar errors out during the SSL handshake when it tries to auto-encrypt the connection with PostgreSQL. Please follow the steps in [PostgreSQL Auto SSL Connection Problem Using Istio Sidecar](./Istio-Sidecar.md) to see the details of this issue.

Because the ASM Istio Proxy sidecar doesn't work with PostgreSQL SSL auto-encryption, this article demonstrates how to use the Cloud SQL Proxy to auto-encrypt the SSL connection to a Cloud SQL PostgreSQL database.

## Prerequisites

* Enforce SSL connection on the Cloud SQL PostgreSQL instance.
* **We don't need certificates for the Cloud SQL Proxy connection.** However, we will create a client certificate and download the client certificate, client key, and server certificate for the initial SSL connection test without sidecar auto-encryption. Instructions for downloading Cloud SQL for PostgreSQL certificates are on this page: [Configuring SSL/TLS certificates](https://cloud.google.com/sql/docs/postgres/configure-ssl-instance)
* Add the K8s node IPs to the Authorized Networks of the PostgreSQL instance. Or, we can add "0.0.0.0/0" to allow client connections from any IP address for testing purposes.

## Build Container

1. Download the `client-cert.pem`, `client-key.pem` and `server-ca.pem` certificates, using the instructions on [Configuring SSL/TLS certificates](https://cloud.google.com/sql/docs/postgres/configure-ssl-instance)

   **NOTE:** These certificates are not needed for connecting via the Cloud SQL Proxy.

2. Use the [Dockerfile](./Dockerfile) to build a test PostgreSQL client container image. The certificates are packaged into the Docker image just for initial connection testing.

## Test Direct SSL Connection

1. Deploy a PostgreSQL client without any sidecar.

   ```
   kubectl apply -f postgres-plain.yaml -n sample
   ```

2. Run the following command to make sure the SSL connection works.

   ```
   # Enter into postgres-plain Pod
   kubectl exec -it deploy/postgres-plain -n sample -- /bin/bash

   # Once in the Pod, run this psql command with SSL mode
   psql "sslmode=verify-ca sslrootcert=server-ca.pem \
         sslcert=client-cert.pem sslkey=client-key.pem \
         hostaddr=YOUR_POSTGRESQL_IP \
         port=5432 \
         user=YOUR_USERNAME dbname=YOUR_DB_NAME"

   # Enter your password when it is prompted
   ```

   You should see something like this:

   ```
   psql (12.5, server 12.4)
   SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
   Type "help" for help.
   ```

3. Now, run the `psql` command in Non-SSL mode

   ```
   psql "hostaddr=YOUR_POSTGRESQL_IP port=5432 user=YOUR_USERNAME dbname=YOUR_DB_NAME"
   ```

   You should see the error message below. This proves that a Non-SSL connection doesn't work.

   ```
   psql: error: FATAL: connection requires a valid client certificate
   FATAL: pg_hba.conf rejects connection for host "35.235.65.143", user "postgres", database "postgres", SSL off
   ```

## Deploy the Cloud SQL Proxy Sidecar

1. Create a Kubernetes Service Account

   - We have already deployed our GKE cluster with [Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity) enabled.
   - We will use a Kubernetes Service Account (KSA) bound to a Google Cloud Service Account (GSA) to simplify our Cloud SQL Proxy sidecar deployment.
   - Create a KSA named "ksa-sqlproxy"

     ```
     kubectl apply -f service-account.yaml -n sample
     ```

2. Set up a Google Cloud Service Account

   - Set up a Google Cloud Service Account (or use an existing GSA).
   - Make sure that the [Cloud SQL Client predefined role](https://cloud.google.com/sql/docs/mysql/project-access-control#roles) (roles/cloudsql.client) is granted to this GSA.
   - The following steps assume a GSA named `sql-client@${PROJECT_ID}.iam.gserviceaccount.com`.

3. Bind the KSA to the GSA

   ```
   export PROJECT_ID="$(gcloud config get-value project || echo "${GOOGLE_CLOUD_PROJECT}")"

   gcloud iam service-accounts add-iam-policy-binding \
     --role roles/iam.workloadIdentityUser \
     --member "serviceAccount:${PROJECT_ID}.svc.id.goog[sample/ksa-sqlproxy]" \
     sql-client@${PROJECT_ID}.iam.gserviceaccount.com
   ```

4. Add the Workload Identity annotation to the service account

   ```
   kubectl annotate serviceaccount \
     ksa-sqlproxy \
     iam.gke.io/gcp-service-account="sql-client@${PROJECT_ID}.iam.gserviceaccount.com" \
     -n sample
   ```

5. Deploy the PostgreSQL client with the Cloud SQL Proxy sidecar

   Take a look at the deployment YAML file, [postgres-cloudproxy.yaml](./postgres-cloudproxy.yaml). Please note the following two items:

   i. The "serviceAccountName: ksa-sqlproxy" entry for the pod.
      - This pod will use this KSA to authenticate itself through Google Cloud IAM.
      - Remember that we don't need the certificate files.

   ii. The container entry for Cloud SQL Proxy (a minimal sketch of what this entry typically looks like is included at the end of this guide).

   ```
   kubectl apply -f postgres-cloudproxy.yaml -n sample
   ```

6. Test out the Cloud SQL Proxy sidecar

   - Run the following command to get into the Postgres client container

     ```
     kubectl exec -it deploy/postgres-check -c postgres-check -n sample -- /bin/bash
     ```

   - Run the `psql` command in Non-SSL mode, connecting through the proxy on localhost

     ```
     psql "hostaddr=127.0.0.1 port=5432 user=YOUR_USERNAME dbname=YOUR_DB_NAME"
     ```

   You should be prompted for a password, and then you should be connected to your PostgreSQL database.
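For reference, the Cloud SQL Proxy container entry referred to in step 5 typically looks something like the sketch below. This is a hedged, minimal example rather than the authoritative manifest — the image tag and the `<PROJECT>:<REGION>:<INSTANCE>` connection name are placeholders, and [postgres-cloudproxy.yaml](./postgres-cloudproxy.yaml) remains the source of truth:

```
# Sketch of a Cloud SQL Proxy sidecar container entry (placeholders marked).
- name: cloud-sql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.28.0   # placeholder tag; use a current release
  command:
    - "/cloud_sql_proxy"
    # <PROJECT>:<REGION>:<INSTANCE> is a placeholder for your instance connection name.
    - "-instances=<PROJECT>:<REGION>:<INSTANCE>=tcp:5432"
  securityContext:
    runAsNonRoot: true
```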
# Cost Optimization Dashboard

This repo contains SQL scripts for analyzing GCP Billing and Recommendations data, and a guide to set up the Cost Optimization dashboard. For a sample dashboard, [see here](https://datastudio.google.com/c/u/0/reporting/6cf564a4-9c94-4cfd-becd-b9c770ee7aa2/page/r34iB).

## Introduction

The Cost Optimization dashboard builds on top of the existing [GCP billing dashboard](https://cloud.google.com/billing/docs/how-to/visualize-data) and adds the following additional insights to the dashboard.

* Compute Engine Insights
* Cloud Storage Insights
* BigQuery Insights
* Cost Optimization Recommendations
* Etc.

A few key things to keep in mind before starting:

* [Recommendations data export](https://cloud.google.com/recommender/docs/bq-export/export-recommendations-to-bq) to BigQuery is still in Preview (as of June 2021).
* Currently, no automation is available for this setup.

> <span style="color:red">*NOTE: Implementing this dashboard will incur additional BigQuery charges.*</span>

## Prerequisites

The user running the steps in this guide should have the ability to

* Create a GCP project and BQ datasets.
* Schedule BQ queries and data transfer jobs.
* Provision BQ access to the Datastudio dashboard.
* Identify fully qualified names of BQ tables. Example format: ```<project-id>.<dataset-name>.<table-name>```.

Also, make sure the following resources are accessible.

* [billing_dashboard_view](https://datastudio.google.com/datasources/c7f4b535-3dd0-4aaa-bd2b-78f9718df9e2)
* [co_dashboard_view](https://datastudio.google.com/datasources/78dc2597-d8e7-40db-8fbb-f3b2c8271b6d)
* [co_recommendations_view](https://datastudio.google.com/datasources/c972e0f6-51e1-483c-8947-214b300d26a6)
* [GCP Cost Optimization Dashboard](https://datastudio.google.com/reporting/6cf564a4-9c94-4cfd-becd-b9c770ee7aa2)

> <span style="color:red">*NOTE: In case of permission issues with the above links, please reach out to gcp-co-dashboard@google.com*</span>

## Setup

The overall process of the setup is as follows; each step is outlined in detail below.

* Create a project and the required BigQuery datasets.
* Create data transfer jobs and scheduled queries.
* Initiate data transfer for billing data export into the ```billing``` dataset.
* Initiate data transfer for recommendations data export into the ```recommender``` dataset.
* Set up dashboard-related functions, views and scheduled queries in the ```dashboard``` dataset.
* Use pre-existing templates as a starting point to set up the DataStudio dashboard.

### Project and datasets

* Create a project to hold the BigQuery datasets for the dashboard.
* Create the following datasets in the project. Please make sure that all datasets are created in the same region (eg: US). [See instructions](https://cloud.google.com/bigquery/docs/datasets#create-dataset) on how to create a dataset in BigQuery.
  * ```billing```
  * ```recommender```
  * ```dashboard```

### Billing data exports to Bigquery

* In the project created above, enable export for both 'Daily cost detail' and 'Pricing' data tables to the ```billing``` dataset, by following the instructions [here](https://cloud.google.com/billing/docs/how-to/export-data-bigquery-setup).
* Data availability

  > Your [BigQuery dataset](https://cloud.google.com/bigquery/docs/datasets-intro) only reflects Google Cloud usage and cost data incurred from the date you set up Cloud Billing export, and after. That is, Google Cloud billing data is not added retroactively, so you won't see Cloud Billing data from before you enable export.
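Once the billing export has been running for a while, it is worth sanity-checking that data is actually landing in the ```billing``` dataset before building anything on top of it. The following is a minimal sketch using the `bq` CLI — the export table name suffix is a placeholder for your billing account ID:

```
# List the tables created by the billing export
bq ls billing

# Spot-check the most expensive services (replace the suffix with your billing account ID)
bq query --use_legacy_sql=false \
  'SELECT service.description, ROUND(SUM(cost), 2) AS total_cost
   FROM `billing.gcp_billing_export_v1_XXXXXX_XXXXXX_XXXXXX`
   GROUP BY 1
   ORDER BY total_cost DESC
   LIMIT 10'
```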
### Recommendations data exports to Bigquery

* Export recommendations data to the ```recommender``` dataset, by following the instructions [here](https://cloud.google.com/recommender/docs/bq-export/export-recommendations-to-bq).
* Data availability

  > There is a one day delay after a transfer is first created before your requested organization is opted in to exports, and recommendations for your organization are available for export. In the interim, you will see the message "Transfer deferred due to source data not being available". This message may also be shown in case the data is not ready for export on future days - the transfers will automatically be rescheduled to check for source data at a later time.

* At this point the datasets should look something like below.

![](docs/image1.png)

### Cost Optimization data analysis scripts

This step involves setting up the following data analysis components.

* Required SQL functions
* Daily scheduled scripts for data analysis and aggregation

#### Common Functions

* Compose a new query and copy the SQL at [common_functions.sql](scripts/common_functions.sql).
* Execute the query to create the required functions in the ```dashboard``` dataset.
* This is how the dataset will look after the above step.

![](docs/image2.png)

#### CO Billing Data Table

* Compose a new query and copy the SQL at [co_billing_data.sql](scripts/co_billing_data.sql).
* Replace ```<BILLING_EXPORT_TABLE>``` with the correct table name created in the "Billing data exports to Bigquery" step.
* Run the query and ensure it completes without errors.
* Click 'Schedule query -> Create new scheduled query'.

![](docs/image3.png)

* Fill in the details as seen in the screenshot below.

![](docs/image4.png)

* Click 'Schedule' to create a scheduled query.

#### CO Pricing Data Table

* Compose a new query and copy the SQL at [co_pricing_data.sql](scripts/co_pricing_data.sql).
* Replace ```<PRICING_EXPORT_TABLE>``` with the correct table name created in the "Billing data exports to Bigquery" step.
* Click 'Schedule query -> Create new scheduled query'.

![](docs/image3.png)

* Fill in the details as seen in the screenshot below.

![](docs/image5.png)

* Click 'Schedule' to create a scheduled query.

#### CO Recommendations Data Table

* Compose a new query and copy the SQL at [co_recommendations_data.sql](scripts/co_recommendations_data.sql).
* Click 'Schedule query -> Create new scheduled query'.

![](docs/image3.png)

* Fill in the details as seen in the screenshot below.

![](docs/image6.png)

* Click 'Schedule' to create a scheduled query.

#### Verify

* This is how the BQ scheduled queries screen will look for the CO queries after the above steps.

![](docs/image7.png)

* This is how the dataset will look after the above steps.

![](docs/image8.png)

## Dashboard

This step involves copying the template data sources and the template dashboard report, and making the necessary changes.

### Data Sources

Below are the template data sources that are of interest.

* [billing_dashboard_view](https://datastudio.google.com/datasources/c7f4b535-3dd0-4aaa-bd2b-78f9718df9e2)
* [co_dashboard_view](https://datastudio.google.com/datasources/78dc2597-d8e7-40db-8fbb-f3b2c8271b6d)
* [co_recommendations_view](https://datastudio.google.com/datasources/c972e0f6-51e1-483c-8947-214b300d26a6)

For each of the above data sources:

* Copy the data source.

![](docs/image9.png)

* Change the name at the top left corner.
  * For example, from "Copy of [EXTERNAL] xyz_view" to "xyz_view".
* Click the 'Edit Connection' button if the Custom Query editor window does not show up after copying the data source in the above step.
* In the Billing Project Selector panel (middle panel), select the project created at the beginning of this guide.
* In the Query panel to the right:
  * Wherever applicable, replace all occurrences of the project name (anilgcp-co-dev) with the project created at the beginning of this guide.
    * Example: `anilgcp-co-dev.dashboard.co_billing_data` to `REPLACE_WITH_PROJECT_ID.dashboard.co_billing_data`
  * Wherever applicable, replace all occurrences of the billing export table name with the correct table name.
    * Example: `anilgcp-co-dev.billing.gcp_billing_export_v1_xxxxxxxxxxxx` to `REPLACE_WITH_PROJECT_ID.billing.billing_export_table_name`
  * Make sure all occurrences of fully qualified table names are enclosed within backticks (`).
* Make sure 'Enable date parameters' is selected for both data sources from the above steps.

![](docs/image10.png)

* Click "Reconnect".
* Review the confirmation, which should say "Allow parameter sharing?", and click "Allow".

![](docs/image11.png)

* Review the confirmation, which should say "No fields changed due to configuration change.", and click "Apply".

### Report

* Copy the [dashboard template](https://datastudio.google.com/reporting/6cf564a4-9c94-4cfd-becd-b9c770ee7aa2) by clicking the "Make a copy" button at the top right-hand side, as shown below.

![](docs/image12.png)

* Point the "New Data Source" entries to the newly created data sources from the above steps, and click "Copy Report".

![](docs/image13.png)

* Change the name of the dashboard from "Copy of [EXTERNAL] GCP Cost Optimization Dashboard" to "GCP Cost Optimization Dashboard" or something similar.

## References and support

* For feedback and support, reach out to your TAM.
<!--
Copyright 2021 Google LLC. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at:

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================
-->

# Vertex AI Pipeline

This repository demonstrates an end-to-end [MLOps process](https://services.google.com/fh/files/misc/practitioners_guide_to_mlops_whitepaper.pdf) using the [Vertex AI](https://cloud.google.com/vertex-ai) platform and [Smart Analytics](https://cloud.google.com/solutions/smart-analytics) technology capabilities. In particular, two general [Vertex AI Pipeline](https://cloud.google.com/vertex-ai/docs/pipelines) templates have been provided:

- Training pipeline, including:
  - Data processing
  - Custom model training
  - Model evaluation
  - Endpoint creation
  - Model deployment
  - Deployment testing
  - Model monitoring
- Batch-prediction pipeline, including:
  - Data processing
  - Batch prediction using the deployed model

Note that, aside from data processing, which is done using BigQuery, all other steps are built on top of [Vertex AI](https://cloud.google.com/vertex-ai) platform capabilities.

<p align="center">
  <img src="./training_pipeline.png" alt="Sample Training Pipeline" width="600"/>
</p>

### Dataset

The dataset used throughout the demonstration is the [Banknote Authentication Data Set](https://archive.ics.uci.edu/ml/datasets/banknote+authentication). Data were extracted from images that were taken from genuine and forged banknote-like specimens. For digitization, an industrial camera usually used for print inspection was used. The final images have 400x400 pixels. Due to the object lens and distance to the investigated object, gray-scale pictures with a resolution of about 660 dpi were gained. A Wavelet Transform tool was used to extract features from the images.

Attribute Information:

1. variance of Wavelet Transformed image (continuous)
2. skewness of Wavelet Transformed image (continuous)
3. curtosis of Wavelet Transformed image (continuous)
4. entropy of image (continuous)
5. class (integer)

### Machine Learning Problem

Given the Banknote Authentication Data Set, a binary classification problem is adopted where the attribute `class` is chosen as the label and the remaining attributes are used as features. [LightGBM](https://github.com/microsoft/LightGBM), a gradient boosting framework that uses tree-based learning algorithms, is used to train the model for the purpose of demonstrating the [custom training](https://cloud.google.com/vertex-ai/docs/training/custom-training) and [custom serving](https://cloud.google.com/vertex-ai/docs/predictions/use-custom-container) capabilities of the Vertex AI platform, which provides more native support for e.g. TensorFlow, PyTorch and Scikit-Learn.

## Repository Structure

The repository contains the following:

```
.
├── components   : custom vertex pipeline components
├── images       : custom container images for training and serving
├── pipelines    : vertex ai pipeline definitions and runners
├── configs      : configurations for defining vertex ai pipeline
├── scripts      : scripts for running local testing
└── notebooks    : notebooks used for development and testing of vertex ai pipeline
```

In addition:

- `build_components_cb.sh`: build all components defined in the `components` folder using Cloud Build
- `build_images_cb.sh`: build custom images (training and serving) defined in the `images` folder using Cloud Build
- `build_pipeline_cb.sh`: build the training and batch-prediction pipelines defined in the `pipelines` folder using Cloud Build

## Get Started

The end-to-end process of creating and running the training pipeline contains the following steps:

1. Set up the [MLOps environment](https://github.com/GoogleCloudPlatform/mlops-with-vertex-ai/tree/main/provision) on Google Cloud.
2. Create an [Artifact Registry](https://cloud.google.com/artifact-registry) for your organization to manage container images.
3. Develop the training and serving logic.
4. Create the components required to build and run the pipeline.
5. Prepare and consolidate the configurations of the various steps of the pipeline.
6. Build the pipeline.
7. Run and orchestrate the pipeline.

### Create Artifact Registry

[Artifact Registry](https://cloud.google.com/artifact-registry) is a single place for your organization to manage container images and language packages (such as Maven and npm). It is fully integrated with Google Cloud's tooling and runtimes and comes with support for native artifact protocols. More importantly, it supports regional and multi-regional repositories.

We have provided a helper script: `scripts/create_artifact_registry.sh`

### Develop Training and Serving Logic

Develop your machine learning program and then containerize it as demonstrated in `images`. The requirements for writing training code can be found [here](https://cloud.google.com/vertex-ai/docs/training/code-requirements) as well.

Note that a custom serving image is not necessary if your chosen framework is supported by a [pre-built container](https://cloud.google.com/vertex-ai/docs/predictions/pre-built-containers). Pre-built containers, which are organized by machine learning (ML) framework and framework version, provide HTTP prediction servers that you can use to serve predictions with minimal configuration.

We have also provided helper scripts:

- `scripts/run_training_local.sh`: test the training program locally
- `scripts/run_serving_local.sh`: test the serving program locally
- `build_images_cb.sh`: build the images using the Cloud Build service

#### Environment variables for special Cloud Storage directories

Vertex AI sets the following environment variables when it runs your training code:

- `AIP_MODEL_DIR`: a Cloud Storage URI of a directory intended for saving model artifacts.
- `AIP_CHECKPOINT_DIR`: a Cloud Storage URI of a directory intended for saving checkpoints.
- `AIP_TENSORBOARD_LOG_DIR`: a Cloud Storage URI of a directory intended for saving TensorBoard logs. See Using Vertex TensorBoard with custom training.
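As a rough illustration of how a training entrypoint can consume these variables (this is a generic sketch, not code from this repository; `task.py` and `model.txt` are placeholder names):

```shell
# Vertex AI injects these values into the training container before your code runs.
echo "Model dir:       ${AIP_MODEL_DIR}"
echo "Checkpoint dir:  ${AIP_CHECKPOINT_DIR}"
echo "TensorBoard dir: ${AIP_TENSORBOARD_LOG_DIR}"

# A common pattern: train and save the model locally, then copy the artifact
# into the Cloud Storage directory that Vertex AI expects to find it in.
python task.py --output-dir=/tmp/model          # placeholder training invocation
gsutil cp /tmp/model/model.txt "${AIP_MODEL_DIR%/}/model.txt"
```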
### Build Components

The following template custom components are provided:

- `components/data_process`: read a BQ table, perform transformations in BQ and export to GCS
- `components/train_model`: launch a custom (distributed) training job on the Vertex AI platform
- `components/check_model_metrics`: check the metrics of a training job and verify whether it produces a better model
- `components/create_endpoint`: create an endpoint on the Vertex AI platform
- `components/deploy_model`: deploy a model artifact to a created endpoint on the Vertex AI platform
- `components/test_endpoint`: call the endpoint of the deployed model for verification
- `components/monitor_model`: track deployed model performance using Vertex Model Monitoring
- `components/batch_prediction`: launch a batch prediction job on the Vertex AI platform

We have also provided a helper script: `build_components_cb.sh`

### Build and Run Pipeline

The sample pipeline definitions are

- `pipelines/training_pipeline.py`
- `pipelines/batch_prediction_pipeline.py`

After compiling the training or batch-prediction pipeline, you may trigger a pipeline run using the provided runners

- `pipelines/training_pipeline_runner.py`
- `pipelines/batch_prediction_pipeline_runner.py`

An example of running the training pipeline using the runner:

```shell
python training_pipeline_runner.py \
  --project_id "$PROJECT_ID" \
  --pipeline_region $PIPELINE_REGION \
  --pipeline_root $PIPELINE_ROOT \
  --pipeline_job_spec_path $PIPELINE_SPEC_PATH \
  --data_pipeline_root $DATA_PIPELINE_ROOT \
  --input_dataset_uri "$DATA_URI" \
  --training_data_schema ${DATA_SCHEMA} \
  --data_region $DATA_REGION \
  --gcs_data_output_folder $GCS_OUTPUT_PATH \
  --training_container_image_uri "$TRAIN_IMAGE_URI" \
  --train_additional_args $TRAIN_ARGS \
  --serving_container_image_uri "$SERVING_IMAGE_URI" \
  --custom_job_service_account $CUSTOM_JOB_SA \
  --hptune_region $PIPELINE_REGION \
  --hp_config_max_trials 30 \
  --hp_config_suggestions_per_request 5 \
  --vpc_network "$VPC_NETWORK" \
  --metrics_name $METRIC_NAME \
  --metrics_threshold $METRIC_THRESHOLD \
  --endpoint_machine_type n1-standard-4 \
  --endpoint_min_replica_count 1 \
  --endpoint_max_replica_count 2 \
  --endpoint_test_instances ${TEST_INSTANCE} \
  --monitoring_user_emails $MONITORING_EMAIL \
  --monitoring_log_sample_rate 0.8 \
  --monitor_interval 3600 \
  --monitoring_default_threshold 0.3 \
  --monitoring_custom_skew_thresholds $MONITORING_CONFIG \
  --monitoring_custom_drift_thresholds $MONITORING_CONFIG \
  --enable_model_monitoring True \
  --pipeline_schedule "0 2 * * *" \
  --pipeline_schedule_timezone "US/Pacific" \
  --enable_pipeline_caching
```

We have also provided helper scripts:

- `scripts/build_pipeline_spec.sh`: compile and build the pipeline specs locally
- `scripts/run_training_pipeline.sh`: create and run the training Vertex AI Pipeline based on the specs
- `scripts/run_batch_prediction_pipeline.sh`: create and run the batch-prediction Vertex AI Pipeline based on the specs
- `build_pipeline_spec_cb.sh`: compile and build the pipeline specs using the Cloud Build service

### Some common parameters

|Field|Explanation|
|-----|-----|
|project_id|Your GCP project|
|pipeline_region|The region to run the Vertex AI Pipeline|
|pipeline_root|The GCS bucket used for storing artifacts of your pipeline runs|
|data_pipeline_root|The GCS staging location for custom jobs|
|input_dataset_uri|Full URI of the input dataset|
|data_region|Region of the input dataset|

## Contributors

- [Shixin Luo](https://github.com/luotigerlsx)
- [Tommy Siu](https://github.com/tommysiu)
- [Nathan Faggian](https://github.com/nfaggian)
# Instrumenting Web Applications End-to-End with Stackdriver and OpenTelemetry

This tutorial demonstrates instrumenting a web application end-to-end, from the browser to the backend application, including logging, monitoring, and tracing with OpenTelemetry and Stackdriver, and then running a load test against it. It shows how to collect the instrumentation data from the browser and the server, ship it to Stackdriver, export it to BigQuery, and analyze the logs with SQL queries.

The app is something like a simple web version of Apache Bench. It includes JavaScript browser code that drives HTTP requests to a Node.js backend that can be run anywhere that you can run Node.

## Setup

To work through this example, you will need a GCP project. Follow these steps

1. [Select or create a GCP project](https://console.cloud.google.com/projectselector2/home/dashboard)
2. [Enable billing for your project](https://support.google.com/cloud/answer/6293499#enable-billing)
3. Clone the repo: `git clone https://github.com/GoogleCloudPlatform/professional-services.git`
4. Install Node.js
5. Install Go
6. Install Docker
7. Install the [Google Cloud SDK](https://cloud.google.com/sdk/install)

Clone this repository to your environment with the command

```shell
git clone https://github.com/GoogleCloudPlatform/professional-services.git
```

Change to this directory and set an environment variable to remember the location

```shell
cd professional-services/examples/web-instrumentation
WI_HOME=`pwd`
```

Set the Google Cloud SDK to the current project

```shell
export GOOGLE_CLOUD_PROJECT=[Your project]
gcloud config set project $GOOGLE_CLOUD_PROJECT
```

Enable the required services

```shell
gcloud services enable bigquery.googleapis.com \
  cloudbuild.googleapis.com \
  cloudtrace.googleapis.com \
  compute.googleapis.com \
  container.googleapis.com \
  containerregistry.googleapis.com \
  logging.googleapis.com \
  monitoring.googleapis.com
```

Install the JavaScript packages required by both the server and the browser:

```shell
npm install
```

## OpenTelemetry collector

Open up a new shell. In a new directory, clone the OpenTelemetry collector contrib project, which contains the Stackdriver exporter

```shell
git clone https://github.com/open-telemetry/opentelemetry-collector-contrib
cd opentelemetry-collector-contrib
```

Build the binary executable

```shell
make otelcontribcol
```

Build the container

```shell
make docker-otelcontribcol
```

Tag it for Google Container Registry (GCR)

```shell
docker tag otelcontribcol gcr.io/$GOOGLE_CLOUD_PROJECT/otelcontribcol:latest
```

Push to GCR

```shell
docker push gcr.io/$GOOGLE_CLOUD_PROJECT/otelcontribcol
```

### Run the OpenTelemetry collector locally

If you are running on GKE only, you do not need to do this step. For running locally, the OpenTelemetry collector needs permissions and credentials to write to Stackdriver. Obtain user access credentials and store them for Application Default Credentials

```shell
gcloud auth application-default login \
  --scopes="https://www.googleapis.com/auth/trace.append"
```

Install Go and run **This bit is not working for OT**

```shell
make otelcontribcol
bin/linux/otelcontribcol --config=$WI_HOME/conf/otservice-config.yaml
```

## Browser code

The browser code refers to ES2015 modules that need to be transpiled and bundled with the help of webpack. Make sure that the variable `collectorURL` in `browser/src/index.js` refers to localhost if you are running the OpenTelemetry collector locally, or to the external IP of the collector if running on Kubernetes.
In the original terminal, change to the browser code directory

```shell
cd browser
```

Install the browser dependencies

```shell
npm install
```

Compile the code

```shell
npm run build
```

## Run app locally

The app can be deployed locally. First change to the top level directory

```shell
cd ..
```

To run the app locally, type

```shell
node ./src/app.js
```

Open your browser at http://localhost:8080

Fill in the test form to generate some load. You should see logs from both the Node.js server and the browser code in the console. You should see traces in Stackdriver.

## Deploy to Kubernetes

Create a cluster with 1 node and cluster autoscaling enabled

```shell
ZONE=us-central1-a
NAME=web-instrumentation
CHANNEL=regular # choose rapid if you want to live on the edge
gcloud beta container clusters create $NAME \
  --num-nodes 1 \
  --enable-autoscaling --min-nodes 1 --max-nodes 4 \
  --enable-basic-auth \
  --issue-client-certificate \
  --release-channel $CHANNEL \
  --zone $ZONE \
  --enable-stackdriver-kubernetes
```

Change the project id in the file `k8s/ot-service.yaml` with the sed command

```shell
sed -i.bak "s//$GOOGLE_CLOUD_PROJECT/" k8s/ot-service.yaml
```

Deploy the OpenTelemetry service to the Kubernetes cluster

```shell
kubectl apply -f k8s/ot-service.yaml
```

Get the external IP. It might take a few minutes for the deployment to complete and an IP address to be allocated. You may have to execute the command below several times before the EXTERNAL_IP shell variable is successfully set.

```shell
EXTERNAL_IP=$(kubectl get svc ot-service-service \
  -o jsonpath="{.status.loadBalancer.ingress[*].ip}")
echo "External IP: $EXTERNAL_IP"
```

Edit the file `browser/src/index.js`, changing the variable `collectorURL` to refer to the external IP and port (80) of the collector, with the following sed command

```shell
sed -i.bak "s/localhost:55678/${EXTERNAL_IP}:80/" browser/src/index.js
```

Rebuild the web client

```shell
cd browser
npm run build
cd ..
```

### Build the app image

To deploy the image to the Google Container Registry (GCR), use the following Cloud Build command

```shell
gcloud builds submit
```

Change the project id in the file `k8s/deployment.yaml` with the sed command

```shell
sed -i.bak "s//$GOOGLE_CLOUD_PROJECT/" k8s/deployment.yaml
```

Deploy the app

```shell
kubectl apply -f k8s/deployment.yaml
```

Configure a service

```shell
kubectl apply -f k8s/service.yaml
```

Expose the service:

```shell
kubectl apply -f k8s/ingress.yaml
```

Check for the external IP

```shell
kubectl get ingress
```

It may take a few minutes for the service to be exposed through an external IP. Navigate to the external IP. It should present a form that will allow you to send a series of XML HTTP requests to the server. That will generate trace and monitoring data.

## Log exports

Before running tests, consider first setting up [log exports](https://cloud.google.com/logging/docs/export) to BigQuery for more targeted log queries to analyze the results of your load tests or production issues.
Create a BQ dataset to export the container logs to

```shell
bq --location=US mk -d \
  --description "Web instrumentation container log exports" \
  --project_id $GOOGLE_CLOUD_PROJECT \
  web_instr_container
```

Create a log export for the container logs

```shell
LOG_SA=$(gcloud logging sinks create web-instr-container-logs \
  bigquery.googleapis.com/projects/$GOOGLE_CLOUD_PROJECT/datasets/web_instr_container \
  --log-filter='resource.type="k8s_container" AND labels.k8s-pod/app="web-instrumentation"' \
  --format='value("writerIdentity")')
```

The identity of the log writer service account is captured in the shell variable `LOG_SA`. Give this service account write access to BigQuery

```shell
gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
  --member $LOG_SA \
  --role roles/bigquery.dataEditor
```

Create a BQ dataset for the load balancer logs

```shell
bq --location=US mk -d \
  --description "Web instrumentation load balancer log exports" \
  --project_id $GOOGLE_CLOUD_PROJECT \
  web_instr_load_balancer
```

Repeat creation of the log sink for the load balancer logs

```shell
LOG_SA=$(gcloud logging sinks create web-instr-load-balancer-logs \
  bigquery.googleapis.com/projects/$GOOGLE_CLOUD_PROJECT/datasets/web_instr_load_balancer \
  --log-filter='resource.type="http_load_balancer"' \
  --format='value("writerIdentity")')
```

Note that the service account ID changes, so you need to repeat the step granting write access to BigQuery

```shell
gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
  --member $LOG_SA \
  --role roles/bigquery.dataEditor
```

## Running the load test

Now you are ready to run the load test. You might try opening two tabs in a browser. In one tab, generate a steady-state load with requests, say, 1 second apart, to give a baseline. Then hit the app with a sudden spike to see how it responds.

You can send a request from the command line with cURL

```shell
EXTERNAL_IP=[from kubectl get ingress command]
REQUEST_ID=1234567889 # A random number
# See W3C Trace Context for format
TRACE=00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01
MILLIS=`date +%s%N | cut -b1-13`
curl "http://$EXTERNAL_IP/data/$REQUEST_ID" \
  -H "traceparent: $TRACE" \
  -H 'Content-Type: application/json' \
  --data-binary "{\"data\":{\"name\":\"Smoke test\",\"reqId\":$REQUEST_ID,\"tSent\":$MILLIS}}"
```

Check that you see the log for the request in the Log Viewer. After the log is replicated to BigQuery, you should be able to query it with a query like the one below. Note that the table name will be something like `requests_20200129`. A shell variable is used to set the date below.

```shell
DATE=$(date -u +'%Y%m%d')
bq query --use_legacy_sql=false \
  "SELECT httpRequest.status, httpRequest.requestUrl, timestamp
   FROM web_instr_load_balancer.requests_${DATE}
   ORDER BY timestamp DESC
   LIMIT 10"
```

There are more queries in the Colab sheet [load_test_analysis.ipynb](https://colab.research.google.com/github/googlecolab/GoogleCloudPlatform/professional-services/blob/web-instrumentation/examples/web-instrumentation/load_test_analysis.ipynb).

## Troubleshooting

Try the following, depending on where you encounter problems.

### Check project id

Check that you have set your project id in the files `k8s/ot-service.yaml`, `conf/otservice-config.yaml`, and `k8s/deployment.yaml`.

### Tracing issues

You can use zPages to see the trace data sent to the OpenTelemetry collector.
### Tracing issues

You can use zPages to see the trace data sent to the OC agent. Find the name of the pod running the agent:

```shell
kubectl get pods
```

Start port forwarding, with `$POD` set to the pod name from the previous command

```shell
kubectl port-forward $POD 55679:55679
```

Browse to the URL http://localhost:55679/debug/tracez

### Browser JavaScript

For trouble bundling the web client, see Webpack [Getting Started](https://webpack.js.org/guides/getting-started/).
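If you prefer to generate the steady baseline load described in the load-test section from the command line rather than the browser form, a small loop around the smoke-test request works; this is purely illustrative (one request per second for a minute, reusing the payload shape shown earlier):

```shell
# Illustrative load loop: one request per second for 60 seconds.
for i in $(seq 1 60); do
  MILLIS=$(date +%s%N | cut -b1-13)
  curl -s "http://$EXTERNAL_IP/data/$i" \
    -H 'Content-Type: application/json' \
    --data-binary "{\"data\":{\"name\":\"Load loop\",\"reqId\":$i,\"tSent\":$MILLIS}}" > /dev/null
  sleep 1
done
```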
Copyright 2024 Google. This software is provided as-is, without warranty or representation for any use or purpose. Your use of it is subject to your agreement with Google.

# Five9 Voicestream Integration with Agent Assist

This is a PoC to integrate Five9 Voicestream with Agent Assist.

## Project Structure

```
.
β”œβ”€β”€ assets
β”‚   └── FAQ.csv
β”œβ”€β”€ client
β”‚   β”œβ”€β”€ audio
β”‚   β”‚   β”œβ”€β”€ END_USER.wav
β”‚   β”‚   └── HUMAN_AGENT.wav
β”‚   └── client_voicestream.py
β”œβ”€β”€ .env
β”œβ”€β”€ proto
β”‚   β”œβ”€β”€ voice_pb2_grpc.py
β”‚   β”œβ”€β”€ voice_pb2.py
β”‚   └── voice.proto
β”œβ”€β”€ README.md
β”œβ”€β”€ requirements.txt
└── server
    β”œβ”€β”€ server.py
    β”œβ”€β”€ services
    β”‚   └── get_suggestions.py
    └── utils
        β”œβ”€β”€ conversation_management.py
        └── participant_management.py
```

## Components

- Agent Assist
- Five9 with VoiceStream

## Setup Instructions

### GCP Project Setup

#### Creating a Project in the Google Cloud Platform Console

If you haven't already created a project, create one now. Projects enable you to manage all Google Cloud Platform resources for your app, including deployment, access control, billing, and services.

1. Open the [Cloud Platform Console][cloud-console].
1. In the drop-down menu at the top, select **Create a project**.
1. Give your project a name.
1. Make a note of the project ID, which might be different from the project name. The project ID is used in commands and in configurations.

[cloud-console]: https://console.cloud.google.com/

#### Enabling billing for your project.

If you haven't already enabled billing for your project, [enable billing][enable-billing] now. Enabling billing is required to use Cloud Bigtable and to create VM instances.

[enable-billing]: https://console.cloud.google.com/project/_/settings

#### Install the Google Cloud SDK.

If you haven't already installed the Google Cloud SDK, [install the Google Cloud SDK][cloud-sdk] now. The SDK contains tools and libraries that enable you to create and manage resources on Google Cloud Platform.

[cloud-sdk]: https://cloud.google.com/sdk/

#### Setting Google Application Default Credentials

Set your [Google Application Default Credentials][application-default-credentials] by [initializing the Google Cloud SDK][cloud-sdk-init] with the command:

```
gcloud init
```

Generate a credentials file by running the [application-default login](https://cloud.google.com/sdk/gcloud/reference/auth/application-default/login) command:

```
gcloud auth application-default login
```

[cloud-sdk-init]: https://cloud.google.com/sdk/docs/initializing
[application-default-credentials]: https://developers.google.com/identity/protocols/application-default-credentials

#### Create a Knowledge Base

Agent Assist follows a conversation between a human agent and an end-user and provides the human agent with relevant document suggestions. These suggestions are based on knowledge bases, namely, collections of documents that you upload to Agent Assist. These documents are called knowledge documents and can be either articles (for use with Article Suggestion) or FAQ documents (for use with FAQ Assist). In this specific implementation, a CSV sheet with FAQs will be used as the knowledge document.

> [FAQ CSV file](./assets/FAQ.csv)

> [Create a Knowledge Base](https://cloud.google.com/agent-assist/docs/knowledge-base)

#### Create a Conversation Profile

A conversation profile configures a set of parameters that control the suggestions made to an agent.
> [Create/Edit an Agent Assist Conversation Profile](https://cloud.google.com/agent-assist/docs/conversation-profile#create_and_edit_a_conversation_profile)

While creating the conversation profile, check the FAQs box. In the "Knowledge bases" input box, select the recently created Knowledge Base. The other values in the section can be left at their defaults.

Once the conversation profile is created, you can find the CONVERSATION_PROFILE_ID (Integration ID) as follows:

> Open [Agent Assist](https://agentassist.cloud.google.com/), then Conversation Profiles on the bottom left

### Usage Pre-requisites

- FAQs Suggestions should be enabled in the Agent Assist Conversation Profile
- Agent Assist will only give you suggestions for conversations with Human Agents. It will not give suggestions if the conversation is being guided by virtual agents.

### Local Development Set Up

This application is designed to run on port 8080. Upon launch, the application will initialize and bind to port 8080, making it accessible for incoming connections. This can be changed in the `.env` file.

#### Protocol Buffer Compiler:

This implementation leverages Protocol Buffer compilers for service definitions and data serialization. In this case, `protoc` is used to compile Five9's protofile.

```
NOTE: The compilation of Five9's Voicestream protofile was already done, therefore this step can be skipped. But if an update of the protofile is needed, please follow these steps to properly output the required Python files.
```

> [Protocol Buffer Compiler Installation](https://grpc.io/docs/protoc-installation/)

> [Five9's Voicestream protofile](./proto/voice.proto)

To compile the protofile:

> Open a terminal window

> Go to the root where your proto folder is

> Run the following command:

```
python3 -m grpc_tools.protoc -I proto --python_out=proto --grpc_python_out=proto proto/voice.proto
```

> Two Python files will be generated inside the proto folder.

> [voice_pb2_grpc.py](./proto/voice_pb2_grpc.py)

> [voice_pb2.py](./proto/voice_pb2.py)

#### Set of variables:

The following variables need to be set up in the `.env` file inside the root folder

```
SERVER_ADDRESS : Target server address
PORT : Connection Port
PROJECT_ID : GCP Project ID where the Agent Assist Conversation Profile is deployed.
CONVERSATION_PROFILE_ID : Agent Assist Conversation Profile ID
CHUNK_SIZE : Number of bytes of audio to be sent each time
RESTART_TIMEOUT : Timeout of one stream
MAX_LOOKBACK : Lookback for unprocessed audio data
```
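For reference, a populated `.env` might look like the sketch below; every value here is a placeholder or an illustrative number rather than a recommendation, so substitute your own project and conversation profile IDs and tune the audio settings to your environment:

```
SERVER_ADDRESS=localhost
PORT=8080
PROJECT_ID=my-gcp-project
CONVERSATION_PROFILE_ID=my-conversation-profile-id
CHUNK_SIZE=4096
RESTART_TIMEOUT=160
MAX_LOOKBACK=3
```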
### Steps to follow

## Start gRPC Server

Start the gRPC Server controller. This will start a server on port 8080, where the voicestream client will send the data.

> [Server Controller](./server/server.py)

Inside the server folder, run the following command:

```
python server.py
```

## Start gRPC Client

According to Five9's Self Service Developer Guide:

```
VoiceStream does not support multi-channel streaming. VoiceStream transmits each distinct audio stream over a separate gRPC session: one for audio from the agent, and one for audio to the agent.
```

In order to simulate this behaviour in our local environment, the same script should be run twice simultaneously: one instance that sends the customer audio (END_USER) and one that sends the agent audio (HUMAN_AGENT).

> [Five9 Voicestream Client](./client/client_voicestream.py)

Inside the client folder, run the following command to send the human agent audio:

```
python client_voicestream.py --role=HUMAN_AGENT --call_id=<CALL_ID>
```

In another terminal, run the following command to send the customer audio:

```
python client_voicestream.py --role=END_USER --call_id=<CALL_ID>
```

In order for both streams to be associated with the same conversation, it is fundamental to specify a destination CONVERSATION_ID. For this to happen, the CALL_ID specified in the initial configuration sent by Five9 will be passed to Agent Assist as the internal CONVERSATION_ID. In this implementation, we are manually defining this CALL_ID for testing purposes.

# References

1. [Agent Assist Documentation](https://cloud.google.com/agent-assist/docs)
2. [Dialogflow](https://cloud.google.com/dialogflow/docs)
3. [Five9 VoiceStream](https://www.five9.com/news/news-releases/five9-announces-five9-voicestream)
4. [Five9 VoiceStream Release Notes](https://releasenotes.five9.com/space/RNA/23143057870/VoiceStream)
# TensorFlow Profiling Examples

Before launching a training job, please copy the raw data and define the environment variables (the bucket for staging and the bucket where you are going to store data as well as the training job's outputs)

```shell
export BUCKET=YOUR_BUCKET
gsutil -m cp gs://cloud-training-demos/babyweight/preproc/* gs://$BUCKET/babyweight/preproc/
export BUCKET_STAGING=YOUR_STAGING_BUCKET
```

You also need to have [bazel](https://docs.bazel.build/versions/master/install.html) installed.

The code below is based on this [codelab](https://codelabs.developers.google.com/codelabs/scd-babyweight2/index.html?index=..%2F..%2Fcloud-quest-scientific-data#0) (you can find more [here](https://github.com/GoogleCloudPlatform/training-data-analyst/tree/master/blogs/babyweight)).

## Profiler hooks

You can dump profiles for every *n*-th step. We are going to demonstrate how to collect dumps both in distributed mode as well as when training is done on a single machine (including training with a GPU accelerator).

After you've trained a model (see examples below), you need to copy the dumps locally in order to inspect them. You can do it as follows:

```shell
rm -rf /tmp/profiler
mkdir -p /tmp/profiler
gsutil -m cp -r $OUTDIR/timeline*.json /tmp/profiler
```

And now you can launch a trace event profiling tool in your Chrome browser (chrome://tracing), load a specific timeline and visually inspect it.

### Training on a single CPU machine: BASIC CMLE tier

You can launch the job as follows:

```shell
OUTDIR=gs://$BUCKET/babyweight/hooks_basic
JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
  --region=us-west1 \
  --module-name=trainer-hooks.task \
  --package-path=trainer-hooks \
  --job-dir=$OUTDIR \
  --staging-bucket=gs://$BUCKET_STAGING \
  --scale-tier=BASIC \
  --runtime-version="1.10" \
  -- \
  --bucket=$BUCKET/babyweight \
  --output_dir=${OUTDIR} \
  --eval_int=1200 \
  --train_steps=50000
```

### Distributed training on CPUs: STANDARD tier

You can launch the job as follows:

```shell
OUTDIR=gs://$BUCKET/babyweight/hooks_standard
JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
  --region=us-west1 \
  --module-name=trainer-hooks.task \
  --package-path=trainer-hooks \
  --job-dir=$OUTDIR \
  --staging-bucket=gs://$BUCKET_STAGING \
  --scale-tier=STANDARD_1 \
  --runtime-version="1.10" \
  -- \
  --bucket=$BUCKET/babyweight \
  --output_dir=${OUTDIR} \
  --train_steps=50000
```

### Training on GPU:

```shell
OUTDIR=gs://$BUCKET/babyweight/hooks_gpu
JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
  --region=us-west1 \
  --module-name=trainer-hooks.task \
  --package-path=trainer-hooks \
  --job-dir=$OUTDIR \
  --staging-bucket=gs://$BUCKET_STAGING \
  --scale-tier=BASIC_GPU \
  --runtime-version="1.10" \
  -- \
  --bucket=$BUCKET/babyweight \
  --output_dir=${OUTDIR} \
  --batch_size=8192 \
  --train_steps=21000
```

### Defining your own schedule

```shell
OUTDIR=gs://$BUCKET/babyweight/hooks_basic-ext
JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
  --region=us-west1 \
  --module-name=trainer-hooks-ext.task \
  --package-path=trainer-hooks-ext \
  --job-dir=$OUTDIR \
  --staging-bucket=gs://$BUCKET_STAGING \
  --scale-tier=BASIC \
  --runtime-version="1.10" \
  -- \
  --bucket=$BUCKET/babyweight \
  --output_dir=${OUTDIR} \
  --eval_int=1200 \
  --train_steps=15000
```
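If you want to sanity-check a trainer package before submitting a full CMLE job, you can run it locally for a handful of steps. This is a sketch, assuming the trainer accepts the same flags used in the job submissions above:

```shell
# Quick local smoke test of the trainer package (illustrative output path).
OUTDIR=gs://$BUCKET/babyweight/hooks_local
gcloud ml-engine local train \
  --module-name=trainer-hooks.task \
  --package-path=trainer-hooks \
  -- \
  --bucket=$BUCKET/babyweight \
  --output_dir=${OUTDIR} \
  --train_steps=100
```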
## Deep profiling

We can collect a deep profiling dump that can later be analyzed with a profiling CLI tool or with profiler-ui as described in a post (ADD LINK HERE). Launch the training job as follows:

```shell
OUTDIR=gs://$BUCKET/babyweight/profiler_standard
JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
  --region=us-west1 \
  --module-name=trainer-deep-profiling.task \
  --package-path=trainer-deep-profiling \
  --job-dir=$OUTDIR \
  --staging-bucket=gs://$BUCKET_STAGING \
  --scale-tier=STANDARD_1 \
  --runtime-version="1.10" \
  -- \
  --bucket=$BUCKET/babyweight \
  --output_dir=${OUTDIR} \
  --train_steps=100000
```

### Profiler CLI

1. In order to use the [profiler CLI](https://github.com/tensorflow/tensorflow/blob/9590c4c32dd4346ea5c35673336f5912c6072bf2/tensorflow/core/profiler/README.md), you need to build the profiler first:

   ```shell
   git clone https://github.com/tensorflow/tensorflow.git
   cd tensorflow
   bazel build -c opt tensorflow/core/profiler:profiler
   ```

2. Copy the dumps locally:

   ```shell
   rm -rf /tmp/profiler
   mkdir -p /tmp/profiler
   gsutil -m cp -r $OUTDIR/profiler /tmp
   ```

3. Launch the profiler with

   ```shell
   bazel-bin/tensorflow/core/profiler/profiler --profile_path=/tmp/profiler/$(ls /tmp/profiler/ | head -1)
   ```

### Profiler UI

You can also use [profiler-ui](https://github.com/tensorflow/profiler-ui), i.e. a web interface for the TensorFlow profiler.

1. If you'd like to install [pprof](https://github.com/google/pprof), please follow the [installation instructions](https://github.com/google/pprof#building-pprof).

2. Clone the repository:

   ```shell
   git clone https://github.com/tensorflow/profiler-ui.git
   cd profiler-ui
   ```

3. Copy the dumps locally:

   ```shell
   rm -rf /tmp/profiler
   mkdir -p /tmp/profiler
   gsutil -m cp -r $OUTDIR/$MODEL/profiler /tmp
   ```

4. Launch the profiler with

   ```shell
   python ui.py --profile_context_path=/tmp/profiler/$(ls /tmp/profiler/ | head -1)
   ```
# Prerequisites

## Google Cloud Platform

This tutorial uses Anthos, which runs on the Google Cloud Platform (GCP). If you don't have an account, you can [sign up](https://cloud.google.com/free/) for $300 in free credits. The estimated cost to run this tutorial is $9 per day.

After signing up, [create](https://cloud.google.com/resource-manager/docs/creating-managing-projects) a GCP project and [attach](https://cloud.google.com/billing/docs/how-to/modify-project) it to a billing account.

## GitLab

This tutorial uses GitLab as the code repository and for continuous integration & continuous delivery. [Sign up](https://gitlab.com/users/sign_up) to create a GitLab account if you don't already have one.

To grant your laptop access to make changes as this account, generate and add SSH keys to your account.

Generate SSH keys:

```bash
ssh-keygen
```

It is recommended to add a passphrase when generating new SSH keys, but for this demo you can proceed without setting one.

Copy the content of `id_rsa.pub`, paste it in the [key textbox](https://gitlab.com/-/profile/keys) and add the key.

Create a [new public group](https://gitlab.com/groups/new) and give it a name of your choice (e.g. anthos-demo). When creating a group, GitLab assigns you a URI. Take note of the URI. All repositories/projects created during this tutorial will be under this group.

## Install and Initialize the Google Cloud SDK

Follow the Google Cloud SDK [documentation](https://cloud.google.com/sdk/docs/install) to install and configure the `gcloud` command line utility.

At the time this doc was created, some tools used in this tutorial were still in beta. To be able to utilize them, install `gcloud beta`

```bash
gcloud components install beta
```

Initialize gcloud config with the project id:

```bash
gcloud init
```

Then be sure to authorize `gcloud` to access the Cloud Platform with your Google user credentials:

```bash
gcloud auth login
```

## Install Kubectl

The `kubectl` command line utility is used to interact with the Kubernetes API Server. Download and install `kubectl` using `gcloud`:

```bash
gcloud components install kubectl
```

Verify installation:

```bash
kubectl version --client
```

## Install nomos

The nomos command is an optional command-line tool used to interact with the Config Sync operator. Download nomos:

```bash
gcloud components install nomos
```

macOS and Linux clients should run this extra command to configure the binary to be executable.

```bash
chmod +x </path/to/nomos>
```

Move the command to a location your system searches for binaries, such as `/usr/local/bin`, or you can run the command by using its fully-qualified path.
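For example, if you downloaded the `nomos` binary directly instead of installing it through `gcloud components`, the steps might look like this (the download path is a placeholder):

```bash
# Make the downloaded binary executable, move it onto the PATH, and confirm it runs.
chmod +x ~/Downloads/nomos
sudo mv ~/Downloads/nomos /usr/local/bin/nomos
nomos version
```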
**Environment variables**

The following environment variables are required for some of the commands in this guide

```bash
export PROJECT_ID="<project id you created earlier>"
export PROJECT_NUMBER="$(gcloud projects describe "${PROJECT_ID}" \
    --format='value(projectNumber)')"
export USER="<user email you are using>"
export GROUP_NAME="<your gitlab group name>" # if your group name has space(s), replace with `-`
export GROUP_URI="<your gitlab group URI>"
export REGION="us-central1"
export ZONE="us-central1-c"
```

**APIs**

Enable all the APIs that are required for this guide:

```bash
gcloud services enable \
    anthos.googleapis.com \
    anthosgke.googleapis.com \
    anthosaudit.googleapis.com \
    binaryauthorization.googleapis.com \
    cloudbuild.googleapis.com \
    containerscanning.googleapis.com \
    cloudresourcemanager.googleapis.com \
    container.googleapis.com \
    gkeconnect.googleapis.com \
    gkehub.googleapis.com \
    serviceusage.googleapis.com \
    stackdriver.googleapis.com \
    monitoring.googleapis.com \
    logging.googleapis.com
```

Next: [Register GKE Clusters with Anthos](2-register-gke-clusters-with-anthos.md)
# Set up Anthos Config Management (ACM)

[Anthos Config Management](https://cloud.google.com/anthos/config-management) (ACM) is a key component of Anthos that lets you define and enforce configs, including custom policies, and apply them across all your infrastructure, both on-premises and in the cloud. With ACM, you can set configs and policies in one repo. Typically the security or operator team manages this repo. Using the repo model lets developers focus on app development repo(s) while the security/operators focus on infrastructure. As long as clusters are in sync with the ACM repo, changes can't be made on policies/config managed by ACM except through the repo. This allows enterprises to take all the advantages that come with a version control system while creating and modifying configs. [Learn more](https://cloud.google.com/anthos-config-management/docs/concepts).

Before setting up ACM, first enable the ACM feature:

```bash
gcloud alpha container hub config-management enable
```

## Create GitLab repo/project

Click on the [GitLab $GROUP_NAME group](https://gitlab.com/dashboard/groups) you created earlier and create a new blank project called ACM. Make it public:

![alt_text](images/create-blank-project.png "Create blank project")

Clone the repo locally and use nomos to create the [repo structure](https://cloud.google.com/kubernetes-engine/docs/add-on/config-sync/concepts/repo) that allows [Config Sync](https://cloud.google.com/kubernetes-engine/docs/add-on/config-sync/overview) to read from the repo:

```bash
cd ~
mkdir $GROUP_NAME
cd $GROUP_NAME/
git clone git@gitlab.com:$GROUP_NAME/acm.git
cd acm/
git switch -c main
nomos init
```

To know more about the functions of the different directories created by `nomos init`, you can read the [repo structure](https://cloud.google.com/kubernetes-engine/docs/add-on/config-sync/concepts/repo) documentation.

## Deploy config sync operator to clusters

The [Config Sync Operator](https://cloud.google.com/kubernetes-engine/docs/add-on/config-sync/how-to/installing#git-creds-secret) is a controller that manages Config Sync in a Kubernetes cluster. [Download](https://cloud.google.com/anthos-config-management/downloads) the operator and deploy it to your clusters:

```bash
cd ~/$GROUP_NAME/acm
mkdir setup
cd setup/
gsutil cp gs://config-management-release/released/latest/config-management-operator.yaml config-management-operator.yaml
for i in "dev" "prod"; do
  gcloud container clusters get-credentials ${i} --zone=$ZONE
  kubectl apply -f config-management-operator.yaml
done
```

## Configure operator

To configure the behavior of Config Sync, create 2 configuration files: `config-management-dev.yaml` for the dev cluster and `config-management-prod.yaml` for the prod cluster:

```bash
cat > config-management-dev.yaml << EOF
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  clusterName: dev
  git:
    syncRepo: https://gitlab.com/$GROUP_URI/acm.git
    syncBranch: dev
    secretType: none
  policyController:
    enabled: true
EOF

cat > config-management-prod.yaml << EOF
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  clusterName: prod
  git:
    syncRepo: https://gitlab.com/$GROUP_URI/acm.git
    syncBranch: main
    secretType: none
  policyController:
    enabled: true
EOF
```

Note: Notice the syncBranch values. secretType is set to none because the repo is public.
If the repo isn't public, grant the operator access by following these [steps](https://cloud.google.com/kubernetes-engine/docs/add-on/config-sync/how-to/installing#git-creds-secret).

## ClusterSelectors and Namespaces

[ClusterSelectors](https://cloud.google.com/kubernetes-engine/docs/add-on/config-sync/how-to/clusterselectors) and [namespaces](https://cloud.google.com/kubernetes-engine/docs/add-on/config-sync/how-to/namespace-scoped-objects) are ways of grouping to apply configs to a subset of our infrastructure. The ACM repo is where all ClusterSelectors and namespaces are defined and applied.

Navigate to `clusterregistry/` in the acm repo:

```bash
cd ~/$GROUP_NAME/acm/clusterregistry
```

Create prod and dev cluster selectors:

```bash
cat > dev-cluster-selector.yaml << EOF
kind: Cluster
apiVersion: clusterregistry.k8s.io/v1alpha1
metadata:
  name: dev
  labels:
    environment: dev
---
kind: ClusterSelector
apiVersion: configmanagement.gke.io/v1
metadata:
  name: dev-cluster-selector
spec:
  selector:
    matchLabels:
      environment: dev
EOF

cat > prod-cluster-selector.yaml << EOF
kind: Cluster
apiVersion: clusterregistry.k8s.io/v1alpha1
metadata:
  name: prod
  labels:
    environment: prod
---
kind: ClusterSelector
apiVersion: configmanagement.gke.io/v1
metadata:
  name: prod-cluster-selector
spec:
  selector:
    matchLabels:
      environment: prod
EOF
```

Create 3 namespaces (dev, stage and prod) and set them up so that the dev and stage namespaces deploy to the dev cluster and prod deploys to the prod cluster:

```bash
cd ~/$GROUP_NAME/acm/namespaces
mkdir dev
cd dev
cat > namespace.yaml << EOF
apiVersion: v1
kind: Namespace
metadata:
  name: dev
  annotations:
    configmanagement.gke.io/cluster-selector: dev-cluster-selector
EOF
cd ..
mkdir stage
cd stage
cat > namespace.yaml << EOF
apiVersion: v1
kind: Namespace
metadata:
  name: stage
  annotations:
    configmanagement.gke.io/cluster-selector: dev-cluster-selector
EOF
cd ..
mkdir prod
cd prod
cat > namespace.yaml << EOF
apiVersion: v1
kind: Namespace
metadata:
  name: prod
  annotations:
    configmanagement.gke.io/cluster-selector: prod-cluster-selector
EOF
```

## Policy constraint

Lastly, we'll take advantage of the policy control feature of ACM and create a policy. [Policy Controller](https://cloud.google.com/anthos-config-management/docs/concepts/policy-controller) is used to check, audit and enforce your clusters' compliance with policies that may be related to security, regulations or arbitrary business rules.

In this tutorial we'll create a no-privileged-container constraint and apply it to the prod cluster. Notice we set `policyController` to true in `config-management-prod.yaml` earlier. This will enable this policy to be enforced in our clusters.
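For context, the constraint below reports any workload whose container sets `securityContext.privileged: true`. A minimal example of the kind of manifest it would flag looks like this (a purely hypothetical pod, shown only for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: privileged-example   # hypothetical name, for illustration only
spec:
  containers:
  - name: app
    image: nginx
    securityContext:
      privileged: true       # this is what NoPrivilegedContainer reports
```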
Create `constraint-restrict-privileged-container.yaml` in `cluster/`:

```bash
cd ~/$GROUP_NAME/acm/cluster
cat > constraint-restrict-privileged-container.yaml << EOF
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: noprivilegedcontainer
  annotations:
    configmanagement.gke.io/cluster-selector: prod-cluster-selector
spec:
  crd:
    spec:
      names:
        kind: NoPrivilegedContainer
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package noprivileged
        violation[{"msg": msg, "details": {}}] {
          c := input_containers[_]
          c.securityContext.privileged
          msg := sprintf("Privileged container is not allowed: %v, securityContext: %v", [c.name, c.securityContext])
        }
        input_containers[c] {
          c := input.review.object.spec.containers[_]
        }
        input_containers[c] {
          c := input.review.object.spec.initContainers[_]
        }
        input_containers[c] {
          c := input.review.object.spec.template.spec.containers[_]
        }
        input_containers[c] {
          c := input.review.object.spec.template.spec.initContainers[_]
        }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: NoPrivilegedContainer
metadata:
  name: no-privileged-container
  annotations:
    configmanagement.gke.io/cluster-selector: prod-cluster-selector
spec:
  enforcementAction: dryrun
  match:
    kinds:
      - apiGroups: ["*"]
        kinds: ["Deployment", "Pod"]
EOF
```

## Push changes to remote

Now that we have made these changes locally, let's push them to the remote repo so the clusters can read from it and configure themselves.

Push changes to remote:

```bash
git add -A
git commit -m "Set up ACM"
git push -u origin main
```

Since the dev cluster is configured to sync with the dev branch, create a new branch called `dev` from the `main` branch by clicking the "+" sign in the repo page. Also create a branch called `dev` locally and switch to it

```bash
git checkout dev
```

Apply the configuration:

```bash
cd ~/$GROUP_NAME/acm/setup
for i in "dev" "prod"; do
  gcloud container clusters get-credentials ${i} \
    --zone=$ZONE
  kubectl apply -f config-management-${i}.yaml
done
```

Verify the configuration for both dev and prod clusters:

```bash
for i in "dev" "prod"; do
  gcloud container clusters get-credentials ${i}
  kubectl -n kube-system get pods | grep config-management
done
```

You should see a response like below for both of your clusters:

```bash
config-management-operator-59455ffc4-c6nvp   1/1   Running   4m52s
```

Confirm your clusters are synced from the [console](https://console.cloud.google.com/anthos/config_management) or run:

```bash
nomos status
```

A status of `Pending` or `Synced` means your installation is fine.

Next: [CICD with Anthos and Gitlab](4-cicd-with-anthos-and-gitlab.md)
GCP
Set up Anthos Config Management ACM Anthos Config Management https cloud google com anthos config management ACM is a key component of Anthos that lets you define and enforce configs including custom policies and apply it across all your infrastructure both on premises and in the cloud With ACM you can set configs and policies in one repo Typically the security or operator team manages this repo Using the repo model lets developers focus on app development repo s while the security operators focus on infrastructure As long as clusters are in sync with the ACM repo changes can t be made on policies config managed by ACM except using the repo This allows enterprises to take all the advantages that come with a version control system while creating and modifying configs Learn more https cloud google com anthos config management docs concepts Before setting up ACM first enable the ACM feature bash gcloud alpha container hub config management enable Create GitLab repo project Click on the GitLab GROUP NAME group https gitlab com dashboard groups you created earlier and create a new blank project called ACM Make it public alt text images create blank project png Create blank project Clone the repo locally and use nomos to create the repo structure https cloud google com kubernetes engine docs add on config sync concepts repo that allows Config Sync https cloud google com kubernetes engine docs add on config sync overview to read from the repo bash cd mkdir GROUP NAME cd GROUP NAME git clone git gitlab com GROUP NAME acm git cd acm git switch c main nomos init To know more about the functions of the different directories created by nomos init you can read the repo structure https cloud google com kubernetes engine docs add on config sync concepts repo documentation Deploy config sync operator to clusters The Config Sync Operator https cloud google com kubernetes engine docs add on config sync how to installing git creds secret is a controller that manages Config Sync in a Kubernetes cluster Download https cloud google com anthos config management downloads the operator and deploy to your clusters bash cd GROUP NAME acm mkdir setup cd setup gsutil cp gs config management release released latest config management operator yaml config management operator yaml for i in dev prod do gcloud container clusters get credentials i zone ZONE kubectl apply f config management operator yaml done Configure operator To configure the behavior of Config sync create 2 configuration files config management dev yaml for the dev cluster and config management prod yaml for the prod cluster bash cat config management dev yaml EOF apiVersion configmanagement gke io v1 kind ConfigManagement metadata name config management spec clusterName dev git syncRepo https gitlab com GROUP URI acm git syncBranch dev secretType none policyController enabled true EOF cat config management prod yaml EOF apiVersion configmanagement gke io v1 kind ConfigManagement metadata name config management spec clusterName prod git syncRepo https gitlab com GROUP URI acm git syncBranch main secretType none policyController enabled true EOF Note Notice the syncBranch values secretType is set to none because the repo is public If repo isn t public grant operator access by following these steps https cloud google com kubernetes engine docs add on config sync how to installing git creds secret ClusterSelectors and Namespaces ClusterSelectors https cloud google com kubernetes engine docs add on config sync how to clusterselectors and namespaces https 
Cluster selectors and [namespace-scoped objects](https://cloud.google.com/kubernetes-engine/docs/add-on/config-sync/how-to/namespace-scoped-objects) are ways of grouping to apply configs on a subset of our infrastructure. The ACM repo is where all cluster selectors and namespaces are defined and applied.

Navigate to clusterregistry in the acm repo:

```bash
cd ~/$GROUP_NAME/acm/clusterregistry
```

Create prod and dev cluster selectors:

```bash
cat > dev-cluster-selector.yaml << EOF
kind: Cluster
apiVersion: clusterregistry.k8s.io/v1alpha1
metadata:
  name: dev
  labels:
    environment: dev
---
kind: ClusterSelector
apiVersion: configmanagement.gke.io/v1
metadata:
  name: dev-cluster-selector
spec:
  selector:
    matchLabels:
      environment: dev
EOF

cat > prod-cluster-selector.yaml << EOF
kind: Cluster
apiVersion: clusterregistry.k8s.io/v1alpha1
metadata:
  name: prod
  labels:
    environment: prod
---
kind: ClusterSelector
apiVersion: configmanagement.gke.io/v1
metadata:
  name: prod-cluster-selector
spec:
  selector:
    matchLabels:
      environment: prod
EOF
```

Create 3 namespaces (dev, stage and prod) and set them up so that the dev and stage namespaces deploy to the dev cluster and prod deploys to the prod cluster:

```bash
cd ~/$GROUP_NAME/acm/namespaces

mkdir dev/
cd dev/
cat > namespace.yaml << EOF
apiVersion: v1
kind: Namespace
metadata:
  name: dev
  annotations:
    configmanagement.gke.io/cluster-selector: dev-cluster-selector
EOF

cd ..
mkdir stage/
cd stage/
cat > namespace.yaml << EOF
apiVersion: v1
kind: Namespace
metadata:
  name: stage
  annotations:
    configmanagement.gke.io/cluster-selector: dev-cluster-selector
EOF

cd ..
mkdir prod/
cd prod/
cat > namespace.yaml << EOF
apiVersion: v1
kind: Namespace
metadata:
  name: prod
  annotations:
    configmanagement.gke.io/cluster-selector: prod-cluster-selector
EOF
```

## Policy constraint

Lastly, we’ll take advantage of the policy control feature of ACM and create a policy. [Policy Controller](https://cloud.google.com/anthos-config-management/docs/concepts/policy-controller) is used to check, audit and enforce your clusters’ compliance with policies that may be related to security, regulations, or arbitrary business rules. In this tutorial, we’ll create a no-privileged-container constraint and apply it to the prod namespace. Notice we set `policyController` to true in config-management-prod.yaml earlier; this will enable this policy to be enforced in our clusters.

Create constraint-restrict-privileged-container.yaml in cluster:

```bash
cd ~/$GROUP_NAME/acm/cluster
cat > constraint-restrict-privileged-container.yaml << EOF
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: noprivilegedcontainer
  annotations:
    configmanagement.gke.io/cluster-selector: prod-cluster-selector
spec:
  crd:
    spec:
      names:
        kind: NoPrivilegedContainer
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package noprivileged

        violation[{"msg": msg, "details": {}}] {
          c := input_containers[_]
          c.securityContext.privileged
          msg := sprintf("Privileged container is not allowed: %v, securityContext: %v", [c.name, c.securityContext])
        }

        input_containers[c] {
          c := input.review.object.spec.containers[_]
        }

        input_containers[c] {
          c := input.review.object.spec.initContainers[_]
        }

        input_containers[c] {
          c := input.review.object.spec.template.spec.containers[_]
        }

        input_containers[c] {
          c := input.review.object.spec.template.spec.initContainers[_]
        }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: NoPrivilegedContainer
metadata:
  name: no-privileged-container
  annotations:
    configmanagement.gke.io/cluster-selector: prod-cluster-selector
spec:
  enforcementAction: dryrun
  match:
    kinds:
      - apiGroups: ["*"]
        kinds: ["Deployment", "Pod"]
EOF
```

## Push changes to remote

Now that we have made these changes locally, let’s push them to the remote repo so the clusters can read from it and configure themselves.

Push changes to remote:

```bash
git add -A
git commit -m "Set up ACM"
git push -u origin main
```

Since the dev cluster is configured to sync with the dev branch, create a new branch called dev from the main branch by clicking the + sign in the repo page. Also create a branch called dev in your local and switch to this branch:

```bash
git checkout dev
```

Apply the configuration:

```bash
cd ~/$GROUP_NAME/acm-setup/
for i in "dev" "prod"; do
  gcloud container clusters get-credentials ${i} --zone $ZONE
  kubectl apply -f config-management-${i}.yaml
done
```

Verify the configuration for both dev and prod clusters:

```bash
for i in "dev" "prod"; do
  gcloud container clusters get-credentials ${i}
  kubectl -n kube-system get pods | grep config-management
done
```

You should see a response like below for both of your clusters:

```bash
config-management-operator-59455ffc4-c6nvp   1/1     Running   4m52s
```

Confirm your clusters are synced from the [console](https://console.cloud.google.com/anthos/config_management) or run:

```bash
nomos status
```

A status of `Pending` or `Synced` means your installation is fine.

## Next

[CICD with Anthos and Gitlab](4-cicd-with-anthos-and-gitlab.md)
# CICD with Anthos

In this section, we’ll automate a CI/CD pipeline taking advantage of the features from Anthos.

## Create app

Before creating a CI/CD pipeline we need an application. For this tutorial, we’ll use the popular hello-kubernetes application created by paulbouwer, but with a few modifications.

Download hello-kubernetes app:

```bash
cd ~/$GROUP_NAME/
git clone https://github.com/itodotimothy6/hello-kubernetes.git
cd hello-kubernetes/
rm -rf .git
```

The hello-kubernetes dir will later be made into a gitlab repo, and this is where the developer team will spend most of their time. In this tutorial, we’ll isolate developers’ work in one repo and security/platform work in a separate repo; that way developers can focus on application logic while other teams focus on what they do best.

## Platform admin repo

As a good practice, to keep non-developer work out of the app repo, create a platform admin repo that’ll contain the code/scripts/commands that need to be run during the CI/CD process. Also, [gitlab](https://docs.gitlab.com/ee/ci/quick_start/README.html#cicd-process-overview) uses a `.gitlab-ci.yml` file to define a CI/CD pipeline. For a complex pipeline, we can avoid crowding the `.gitlab-ci.yml` file by abstracting some of the code and storing it in platform-admin.

Create platform-admin:

```bash
cd ~/$GROUP_NAME/
mkdir platform-admin/
```

Now, we’ll create the different stages of the CI/CD process and store them in sub-directories in platform-admin.

## Build

This is the first stage. In this stage, we’ll create a build container job which builds an image using the `hello-kubernetes` Dockerfile and pushes this image to [container registry](https://cloud.google.com/container-registry) (gcr.io). In this tutorial we’ll use a build container tool known as [kaniko](https://github.com/GoogleContainerTools/kaniko#kaniko---build-images-in-kubernetes).

Create build stage:

```bash
cd platform-admin/
mkdir build/
cd build/
cat > build-container.yaml << EOF
build:
  stage: Build
  tags:
    - prod
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - echo "Building container image and pushing to gcr.io in ${PROJECT_ID}"
    - /kaniko/executor --context \$CI_PROJECT_DIR --dockerfile \$CI_PROJECT_DIR/Dockerfile --destination \${HOSTNAME}/\${PROJECT_ID}/\${CONTAINER_NAME}:\$CI_COMMIT_SHORT_SHA
EOF
```

## Binary Authorization

[Binary Authorization](https://cloud.google.com/binary-authorization) is the process of creating [attestations](https://cloud.google.com/binary-authorization/docs/key-concepts#attestations) on container images for the purpose of verifying that certain criteria are met before you can deploy the images to GKE. In this guide, we’ll implement binary authorization using Cloud Build and GKE. [Learn more](https://cloud.google.com/binary-authorization).
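Binary Authorization relies on the Binary Authorization, Container Analysis, and Container Scanning APIs. If they aren’t already enabled in your project (earlier setup steps may have done this), a minimal sketch to enable them:

```bash
# Enable the APIs used by Binary Authorization and vulnerability scanning
gcloud services enable \
  binaryauthorization.googleapis.com \
  containeranalysis.googleapis.com \
  containerscanning.googleapis.com
```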
Enable binary authorization on your clusters:

```bash
for i in "dev" "prod"; do
  gcloud container clusters update ${i} --enable-binauthz
done
```

Create signing keys and configure attestations for stage and prod pipelines: (Read this [article](https://cloud.google.com/solutions/binary-auth-with-cloud-build-and-gke) to understand step by step what the below set of commands do)

```bash
export CLOUD_BUILD_SA_EMAIL="${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com"

gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member "serviceAccount:${CLOUD_BUILD_SA_EMAIL}" \
  --role "roles/container.developer"

# Create signing keys
gcloud kms keyrings create "binauthz" \
  --project "${PROJECT_ID}" \
  --location "${REGION}"

gcloud kms keys create "vulnz-signer" \
  --project "${PROJECT_ID}" \
  --location "${REGION}" \
  --keyring "binauthz" \
  --purpose "asymmetric-signing" \
  --default-algorithm "rsa-sign-pkcs1-4096-sha512"

gcloud kms keys create "qa-signer" \
  --project "${PROJECT_ID}" \
  --location "${REGION}" \
  --keyring "binauthz" \
  --purpose "asymmetric-signing" \
  --default-algorithm "rsa-sign-pkcs1-4096-sha512"

curl "https://containeranalysis.googleapis.com/v1/projects/${PROJECT_ID}/notes/?noteId=vulnz-note" \
  --request "POST" \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer $(gcloud auth print-access-token)" \
  --header "X-Goog-User-Project: ${PROJECT_ID}" \
  --data-binary @- <<EOF
  {
    "name": "projects/${PROJECT_ID}/notes/vulnz-note",
    "attestation": {
      "hint": {
        "human_readable_name": "Vulnerability scan note"
      }
    }
  }
EOF

curl "https://containeranalysis.googleapis.com/v1/projects/${PROJECT_ID}/notes/vulnz-note:setIamPolicy" \
  --request POST \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer $(gcloud auth print-access-token)" \
  --header "X-Goog-User-Project: ${PROJECT_ID}" \
  --data-binary @- <<EOF
  {
    "resource": "projects/${PROJECT_ID}/notes/vulnz-note",
    "policy": {
      "bindings": [
        {
          "role": "roles/containeranalysis.notes.occurrences.viewer",
          "members": [
            "serviceAccount:${CLOUD_BUILD_SA_EMAIL}"
          ]
        },
        {
          "role": "roles/containeranalysis.notes.attacher",
          "members": [
            "serviceAccount:${CLOUD_BUILD_SA_EMAIL}"
          ]
        }
      ]
    }
  }
EOF

gcloud container binauthz attestors create "vulnz-attestor" \
  --project "${PROJECT_ID}" \
  --attestation-authority-note-project "${PROJECT_ID}" \
  --attestation-authority-note "vulnz-note" \
  --description "Vulnerability scan attestor"

gcloud beta container binauthz attestors public-keys add \
  --project "${PROJECT_ID}" \
  --attestor "vulnz-attestor" \
  --keyversion "1" \
  --keyversion-key "vulnz-signer" \
  --keyversion-keyring "binauthz" \
  --keyversion-location "${REGION}" \
  --keyversion-project "${PROJECT_ID}"

gcloud container binauthz attestors add-iam-policy-binding "vulnz-attestor" \
  --project "${PROJECT_ID}" \
  --member "serviceAccount:${CLOUD_BUILD_SA_EMAIL}" \
  --role "roles/binaryauthorization.attestorsViewer"

gcloud kms keys add-iam-policy-binding "vulnz-signer" \
  --project "${PROJECT_ID}" \
  --location "${REGION}" \
  --keyring "binauthz" \
  --member "serviceAccount:${CLOUD_BUILD_SA_EMAIL}" \
  --role 'roles/cloudkms.signerVerifier'

curl "https://containeranalysis.googleapis.com/v1/projects/${PROJECT_ID}/notes/?noteId=qa-note" \
  --request "POST" \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer $(gcloud auth print-access-token)" \
  --header "X-Goog-User-Project: ${PROJECT_ID}" \
  --data-binary @- <<EOF
  {
    "name": "projects/${PROJECT_ID}/notes/qa-note",
    "attestation": {
      "hint": {
        "human_readable_name": "QA note"
      }
    }
  }
EOF
curl "https://containeranalysis.googleapis.com/v1/projects/${PROJECT_ID}/notes/qa-note:setIamPolicy" \ --request POST \ --header "Content-Type: application/json" \ --header "Authorization: Bearer $(gcloud auth print-access-token)" \ --header "X-Goog-User-Project: ${PROJECT_ID}" \ --data-binary @- <<EOF { "resource": "projects/${PROJECT_ID}/notes/qa-note", "policy": { "bindings": [ { "role": "roles/containeranalysis.notes.occurrences.viewer", "members": [ "serviceAccount:${CLOUD_BUILD_SA_EMAIL}" ] }, { "role": "roles/containeranalysis.notes.attacher", "members": [ "serviceAccount:${CLOUD_BUILD_SA_EMAIL}" ] } ] } } EOF gcloud container binauthz attestors create "qa-attestor" \ --project "${PROJECT_ID}" \ --attestation-authority-note-project "${PROJECT_ID}" \ --attestation-authority-note "qa-note" \ --description "QA attestor" gcloud beta container binauthz attestors public-keys add \ --project "${PROJECT_ID}" \ --attestor "qa-attestor" \ --keyversion "1" \ --keyversion-key "qa-signer" \ --keyversion-keyring "binauthz" \ --keyversion-location "${REGION}" \ --keyversion-project "${PROJECT_ID}" gcloud container binauthz attestors add-iam-policy-binding "qa-attestor" \ --project "${PROJECT_ID}" \ --member "serviceAccount:${CLOUD_BUILD_SA_EMAIL}" \ --role "roles/binaryauthorization.attestorsViewer" ``` Vulnerability scan checker needs to be created with Cloud Build for verifying `hello-kubernetes` container images in the CI/CD pipeline. Execute the following steps to create a `cloudbuild-attestor` in Container Registry: ```bash # Give cloudbuild service account the required roles and permissions gcloud projects add-iam-policy-binding ${PROJECT_ID} \ --member serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com \ --role roles/binaryauthorization.attestorsViewer gcloud projects add-iam-policy-binding ${PROJECT_ID} \ --member serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com \ --role roles/cloudkms.signerVerifier gcloud projects add-iam-policy-binding ${PROJECT_ID} \ --member serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com \ --role roles/containeranalysis.notes.attacher # Create attestor using cloudbuild git clone https://github.com/GoogleCloudPlatform/gke-binary-auth-tools ~/$GROUP_NAME/binauthz-tools gcloud builds submit \ --project "${PROJECT_ID}" \ --tag "gcr.io/${PROJECT_ID}/cloudbuild-attestor" \ ~/$GROUP_NAME/binauthz-tools # clean up - delete binauthz-tools rm -rf ~/$GROUP_NAME/binauthz-tools ``` Verify cloudbuild-attestor image is created by inputting `gcr.io/&lt;project-id>/cloudbuild-attestor `in your browser. 
Create binauth.yaml which describes the Binary Authorization policy for the project:

```bash
cd ~/$GROUP_NAME/platform-admin/
mkdir binauth/
cd binauth/
cat > binauth.yaml << EOF
defaultAdmissionRule:
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  evaluationMode: ALWAYS_DENY
globalPolicyEvaluationMode: ENABLE
admissionWhitelistPatterns:
# Gitlab runner
- namePattern: gitlab/gitlab-runner-helper:x86_64-8fa89735
- namePattern: gitlab/gitlab-runner-helper:x86_64-ece86343
- namePattern: gitlab/gitlab-runner:alpine-v13.6.0
- namePattern: gcr.io/abm-test-bed/gitlab-runner@sha256:8f623d3c55ffc783752d0b34097c5625a32a910a8c1427308f5c39fd9a23a3c0
# Gitlab runner job containers
- namePattern: google/cloud-sdk
- namePattern: gcr.io/cloud-builders/gke-deploy:latest
- namePattern: gcr.io/kaniko-project/*
- namePattern: gcr.io/cloud-solutions-images/kustomize:3.7
- namePattern: gcr.io/kpt-functions/gatekeeper-validate
- namePattern: gcr.io/kpt-functions/read-yaml
- namePattern: gcr.io/stackdriver-prometheus/*
- namePattern: gcr.io/$PROJECT_ID/cloudbuild-attestor
- namePattern: gcr.io/config-management-release/*
clusterAdmissionRules:
  # Staging/dev cluster
  $ZONE.dev:
    evaluationMode: REQUIRE_ATTESTATION
    enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
    requireAttestationsBy:
    - projects/$PROJECT_ID/attestors/vulnz-attestor
  # Production cluster
  $ZONE.prod:
    evaluationMode: REQUIRE_ATTESTATION
    enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
    requireAttestationsBy:
    - projects/$PROJECT_ID/attestors/vulnz-attestor
    - projects/$PROJECT_ID/attestors/qa-attestor
EOF
```

Upload binauth.yaml policy to the project:

```bash
gcloud container binauthz policy import ./binauth.yaml
```

Create verify image stages:

```bash
cd ~/$GROUP_NAME/platform-admin/
mkdir vulnerability/
cd vulnerability/
cat > vulnerability-scan-result.yaml << EOF
check-vulnerability-scan-result:
  stage: Verify Image
  tags:
    - prod
  image:
    name: gcr.io/\${PROJECT_ID}/cloudbuild-attestor
  script:
    - |
      /scripts/check_vulnerabilities.sh \\
        -p \${PROJECT_ID} \\
        -i \${HOSTNAME}/\${PROJECT_ID}/\${CONTAINER_NAME}:\${CI_COMMIT_SHORT_SHA} \\
        -t 8
EOF

cat > vulnerability-scan-verify.yaml << EOF
attest-vulnerability-scan:
  stage: Verify Image
  tags:
    - prod
  image:
    name: 'gcr.io/\${PROJECT_ID}/cloudbuild-attestor'
  script:
    - mkdir images
    - echo "\$(gcloud container images describe --format 'value(image_summary.digest)' \${HOSTNAME}/\${PROJECT_ID}/\${CONTAINER_NAME}:\${CI_COMMIT_SHORT_SHA})" > images/digest.txt
    - |
      FQ_DIGEST=\$(gcloud container images describe --format 'value(image_summary.fully_qualified_digest)' \${HOSTNAME}/\${PROJECT_ID}/\${CONTAINER_NAME}:\${CI_COMMIT_SHORT_SHA})
      /scripts/create_attestation.sh \\
        -p "\$PROJECT_ID" \\
        -i "\$FQ_DIGEST" \\
        -a "\$_VULNZ_ATTESTOR" \\
        -v "\$_VULNZ_KMS_KEY_VERSION" \\
        -k "\$_VULNZ_KMS_KEY" \\
        -l "\$_KMS_LOCATION" \\
        -r "\$_KMS_KEYRING"
  artifacts:
    paths:
      - images/
EOF
```

## Hydrate manifest using Kustomize

In this tutorial, we use Kustomize to create a hydrated manifest of our deployment, which will be stored in a repo called `hello-kubernetes-env`.

Create shared nodejs kustomize base in platform-admin:

```bash
cd ~/$GROUP_NAME/platform-admin/
mkdir -p shared-kustomize-bases/nodejs
cd shared-kustomize-bases/nodejs
cat > deployment.yaml << EOF
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nodejs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nodejs
  template:
    metadata:
      labels:
        app: nodejs
    spec:
      containers:
      - name: nodejs
        image: app
        ports:
        - containerPort: 8080
EOF

cat > kustomization.yaml << EOF
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml
EOF

cat > service.yaml << EOF
kind: Service
apiVersion: v1
metadata:
  name: nodejs
spec:
  type: LoadBalancer
  selector:
    app: nodejs
  ports:
  - name: http
    port: 80
    targetPort: 8080
EOF
```

To allow developers to apply patches when deploying, create overlays for dev, stage and prod in the `hello-kubernetes` repo:

```bash
cd ~/$GROUP_NAME/hello-kubernetes/
mkdir -p kubernetes/overlays/dev
cd kubernetes/overlays/dev/
cat > kustomization.yaml << EOF
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: dev
bases:
- ../../base
namePrefix: dev-
EOF

cd ..
mkdir stage/
cd stage/
cat > kustomization.yaml << EOF
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: stage
bases:
- ../../base
namePrefix: stage-
EOF

cd ..
mkdir prod/
cd prod/
cat > kustomization.yaml << EOF
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: prod
bases:
- ../../base
namePrefix: prod-
EOF
```

Now that we have the kustomize base and overlays, we'll start creating the kustomize CI/CD jobs.

Create fetch base stage:

```bash
cd ~/$GROUP_NAME/platform-admin/
mkdir kustomize-steps/
cd kustomize-steps/
cat > fetch-base.yaml << EOF
fetch_kustomize_base:
  stage: Fetch Bases
  image: gcr.io/cloud-solutions-images/kustomize:3.7
  tags:
    - prod
  script:
    # Add auth to git urls
    - git config --global url."https://gitlab-ci-token:\${CI_JOB_TOKEN}@\${CI_SERVER_HOST}".insteadOf "https://\${CI_SERVER_HOST}"
    - mkdir -p kubernetes/base/
    # Pull from Kustomize shared base from platform repo
    - echo \${SSH_KEY} | base64 -d > /working/ssh-key
    - chmod 400 /working/ssh-key
    - export GIT_SSH_COMMAND="ssh -i /working/ssh-key -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"
    - git clone git@\${CI_SERVER_HOST}:\${CI_PROJECT_NAMESPACE}/platform-admin.git -b main
    - cp platform-admin/shared-kustomize-bases/nodejs/* kubernetes/base
  artifacts:
    paths:
      - kubernetes/base/
EOF
```

Create hydrate dev/prod manifest stages:

```bash
cat > hydrate-dev.yaml << EOF
kustomize-dev:
  stage: Hydrate Manifests
  image: gcr.io/cloud-solutions-images/kustomize:3.7
  tags:
    - prod
  except:
    refs:
      - main
  script:
    - DIGEST=\$(cat images/digest.txt)
    # dev
    - mkdir -p ./hydrated-manifests/
    - cd \${KUSTOMIZATION_PATH_DEV}
    - kustomize edit set image app=\${HOSTNAME}/\${PROJECT_ID}/\${CONTAINER_NAME}@\${DIGEST}
    - kustomize build . -o ../../../hydrated-manifests/dev.yaml
    - cd -
  artifacts:
    paths:
      - hydrated-manifests/
EOF

cat > hydrate-prod.yaml << EOF
kustomize:
  stage: Hydrate Manifests
  image: gcr.io/cloud-solutions-images/kustomize:3.7
  tags:
    - prod
  only:
    refs:
      - main
  script:
    - DIGEST=\$(cat images/digest.txt)
    # build out staging manifests
    - mkdir -p ./hydrated-manifests/
    # stage
    - cd \${KUSTOMIZATION_PATH_NON_PROD}
    - kustomize edit set image app=\${HOSTNAME}/\${PROJECT_ID}/\${CONTAINER_NAME}@\${DIGEST}
    - kustomize build . -o ../../../hydrated-manifests/stage.yaml
    - cd -
    # prod
    - cd \${KUSTOMIZATION_PATH_PROD}
    - kustomize edit set image app=\${HOSTNAME}/\${PROJECT_ID}/\${CONTAINER_NAME}@\${DIGEST}
    - kustomize build . -o ../../../hydrated-manifests/production.yaml
    - cd -
  artifacts:
    paths:
      - hydrated-manifests/
EOF
```

## ACM policy check in CI pipeline

ACM, as discussed earlier, is used to ensure consistency in config and automate policy checks. We’ll incorporate ACM into our CI pipeline to ensure that any change that fails a policy check is stopped at the CI stage, even before deployment.
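Before relying on the CI check, you can optionally validate the acm repo locally with `nomos vet` (a minimal sketch, assuming the acm repo layout created earlier):

```bash
# Check the ACM repo for structural and syntax errors before pushing
cd ~/$GROUP_NAME/acm/
nomos vet --path .
```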
Create stage that downloads acm policies from the acm repo:

```bash
cd ~/$GROUP_NAME/platform-admin/
mkdir acm/
cd acm/
cat > download-policies.yaml << EOF
download-acm-policies:
  stage: Download ACM Policy
  image: gcr.io/cloud-solutions-images/kustomize:3.7
  tags:
    - prod
  script:
    # Note: Having SSH_KEY in GitLab is only for demo purposes. You should
    # consider saving the key as a secret in the k8s cluster and have the secret
    # mounted as a file inside the container instead.
    - echo \${SSH_KEY} | base64 -d > /working/ssh-key
    - chmod 400 /working/ssh-key
    - export GIT_SSH_COMMAND="ssh -i /working/ssh-key -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"
    - git clone git@\${CI_SERVER_HOST}:\${CI_PROJECT_NAMESPACE}/acm.git -b main
    - cp acm/policies/cluster/constraint* hydrated-manifests/.
  artifacts:
    paths:
      - hydrated-manifests/
EOF
```

Create stage that reads acm:

```bash
cd ~/$GROUP_NAME/platform-admin/acm/
cat > read-acm.yaml << EOF
read-yaml:
  stage: Read ACM YAML
  image:
    name: gcr.io/kpt-functions/read-yaml
    entrypoint: ["/bin/sh", "-c"]
  tags:
    - prod
  script:
    - mkdir stage && cp hydrated-manifests/stage.yaml stage && cp hydrated-manifests/constraint* stage
    - mkdir prod && cp hydrated-manifests/production.yaml prod && cp hydrated-manifests/constraint* prod
    # The following 2 commands are combining all the YAMLs from the source_dir into one single YAML file
    - /usr/local/bin/node /home/node/app/dist/read_yaml_run.js -d source_dir=stage/ --output stage-source.yaml --input /dev/null
    - /usr/local/bin/node /home/node/app/dist/read_yaml_run.js -d source_dir=prod/ --output prod-source.yaml --input /dev/null
  artifacts:
    paths:
      - stage-source.yaml
      - prod-source.yaml
    expire_in: 1 hour
EOF
```

Create validate acm stage:

```bash
cd ~/$GROUP_NAME/platform-admin/acm/
cat > validate-acm.yaml << EOF
validate-acm-policy:
  stage: ACM Policy Check
  image:
    name: gcr.io/kpt-functions/gatekeeper-validate
    entrypoint: ["/bin/sh", "-c"]
  tags:
    - prod
  script:
    - /app/gatekeeper_validate --input stage-source.yaml
    - /app/gatekeeper_validate --input prod-source.yaml
EOF
```

## Deploy

The last stage in the pipeline is to deploy our changes. For dev (other branches except main), we’ll deploy the hydrated dev manifest. For stage and prod, we’ll copy the hydrated stage and prod manifests to the `hello-kubernetes-env` repo.

Create deploy stages:

```bash
cd ~/$GROUP_NAME/platform-admin/
mkdir deploy/
cd deploy/
cat > deploy-dev.yaml << EOF
deploy-dev:
  stage: Deploy Dev
  tags:
    - dev
  script:
    - kubectl apply -f hydrated-manifests/dev.yaml
  except:
    refs:
      - main
EOF

cat > deploy-prod.yaml << EOF
push-manifests:
  only:
    refs:
      - main
  stage: Push Manifests
  image: gcr.io/cloud-solutions-images/kustomize:3.7
  tags:
    - prod
  script:
    #- cp /working/.ssh/ssh-deploy /working/ssh-key
    - echo \${SSH_KEY} | base64 -d > /working/ssh-key
    - chmod 400 /working/ssh-key
    - export GIT_SSH_COMMAND="ssh -i /working/ssh-key -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"
    - git config --global user.email "\${CI_PROJECT_NAME}-ci@\${CI_SERVER_HOST}"
    - git config --global user.name "\${CI_PROJECT_NAMESPACE}/\${CI_PROJECT_NAME}"
    - git clone git@\${CI_SERVER_HOST}:\${CI_PROJECT_NAMESPACE}/\${CI_PROJECT_NAME}-env.git -b stage
    - cd \${CI_PROJECT_NAME}-env
    - cp ../hydrated-manifests/stage.yaml stage.yaml
    - cp ../hydrated-manifests/production.yaml production.yaml
    - |
      # If files have changed, commit them back to the env repo in the staging branch
      if [ -z "\$(git status --porcelain)" ]; then
        echo "No changes found in env repository."
      else
        git add stage.yaml
        git add production.yaml
        git commit -m "\${CI_COMMIT_REF_SLUG} -- \${CI_PIPELINE_URL}"
        git push origin stage
      fi
EOF
```

Push platform-admin to remote:

In gitlab, create a blank public project under the [$GROUP_NAME](https://gitlab.com/dashboard/groups) group called `platform-admin`, then run the following commands to push the `platform-admin` dir to gitlab:

```bash
cd ~/$GROUP_NAME/platform-admin/
git init
git remote add origin git@gitlab.com:$GROUP_URI/platform-admin.git
git add .
git commit -m "Initial commit"
git push -u origin main
```

## gitlab-ci.yml

.gitlab-ci.yml is the file used by gitlab for the CI/CD pipeline. We’ll create a .gitlab-ci.yml that references the different stage files in platform-admin and orders them as listed above. Remember we separated out these stages into a platform-admin repo to avoid a crowded .gitlab-ci.yml and to separate operations from the app repo.

Create .gitlab-ci.yml in the root directory of hello-kubernetes:

```bash
cd ~/$GROUP_NAME/hello-kubernetes/
cat > .gitlab-ci.yml << EOF
image: google/cloud-sdk

include:
  # Build Steps
  - project: "$GROUP_URI/platform-admin"
    file: "build/build-container.yaml"
  # Vulnerability Scan Steps
  - project: "$GROUP_URI/platform-admin"
    file: "vulnerability/vulnerability-scan-result.yaml"
  - project: "$GROUP_URI/platform-admin"
    file: "vulnerability/vulnerability-scan-verify.yaml"
  # Kustomize Steps
  - project: "$GROUP_URI/platform-admin"
    file: "kustomize-steps/fetch-base.yaml"
  - project: "$GROUP_URI/platform-admin"
    file: "kustomize-steps/hydrate-dev.yaml"
  - project: "$GROUP_URI/platform-admin"
    file: "kustomize-steps/hydrate-prod.yaml"
  # ACM Steps
  - project: "$GROUP_URI/platform-admin"
    file: "acm/download-policies.yaml"
  - project: "$GROUP_URI/platform-admin"
    file: "acm/read-acm.yaml"
  - project: "$GROUP_URI/platform-admin"
    file: "acm/validate-acm.yaml"
  # Deploy Steps
  - project: "$GROUP_URI/platform-admin"
    file: "deploy/deploy-dev.yaml"
  - project: "$GROUP_URI/platform-admin"
    file: "deploy/deploy-prod.yaml"

variables:
  KUBERNETES_SERVICE_ACCOUNT_OVERWRITE: default
  KUSTOMIZATION_PATH_BASE: "./base"
  KUSTOMIZATION_PATH_DEV: "./kubernetes/overlays/dev"
  KUSTOMIZATION_PATH_NON_PROD: "./kubernetes/overlays/stage"
  KUSTOMIZATION_PATH_PROD: "./kubernetes/overlays/prod"
  HOSTNAME: "gcr.io"
  PROJECT_ID: "$PROJECT_ID"
  CONTAINER_NAME: "hello-kubernetes"
  # Binary Authorization Variables
  _VULNZ_ATTESTOR: "vulnz-attestor"
  _VULNZ_KMS_KEY_VERSION: "1"
  _VULNZ_KMS_KEY: "vulnz-signer"
  _KMS_KEYRING: "binauthz"
  _KMS_LOCATION: "$REGION"
  _COMPUTE_REGION: "$REGION"
  _PROD_CLUSTER: "prod"
  _STAGING_CLUSTER: "dev"
  _QA_ATTESTOR: "qa-attestor"
  _QA_KMS_KEY: "qa-signer"
  _QA_KMS_KEY_VERSION: "1"

stages:
  - Build
  - Verify Image
  - Fetch Bases
  - Hydrate Manifests
  - Download ACM Policy
  - Read ACM YAML
  - ACM Policy Check
  - Deploy Dev
  - Push Manifests
EOF
```

## Remote hello-kubernetes & hello-kubernetes-env

In gitlab, create a blank public project under [$GROUP_NAME](https://gitlab.com/dashboard/groups) called `hello-kubernetes`.

Push local hello-kubernetes to remote (make sure you have created the `hello-kubernetes` project before running the below commands):

```bash
cd ~/$GROUP_NAME/hello-kubernetes/
git init
git remote add origin git@gitlab.com:$GROUP_URI/hello-kubernetes.git
git add .
git commit -m "Initial commit"
git push -u origin main
```

Create a public project under [$GROUP_NAME](https://gitlab.com/dashboard/groups) called `hello-kubernetes-env`.
Ensure "Initialize repository with a README" is checked so you can have a non-empty repository. Create a branch called `prod` in the `hello-kubernetes-env` repo. Set `prod` as the [default branch](https://docs.gitlab.com/ee/user/project/repository/branches/default.html#change-the-default-branch-name-for-a-project) and delete the `main` branch.

At this point, the pipeline fails because a gitlab runner does not exist yet.

## SSH Keys

Some stages in the pipeline require the gitlab runner to clone repositories. We’ll create SSH deploy keys so that the pipeline can authenticate to read from or write to our repositories.

**Note**: Having SSH_KEY in GitLab is only for demo purposes. You should consider saving the key as a secret in the k8s cluster and have the secret mounted as a file inside the container instead.

Generate an SSH key pair and store it in your directory of choice. Make sure the private key is base64 encoded; it is what the pipeline stores in the `SSH_KEY` variable and decodes at runtime.

For the `acm`, `hello-kubernetes` and `hello-kubernetes-env` repos, go to Settings > Repositories > Deploy Keys and paste the public key in the key text box. Check "write access allowed" for `hello-kubernetes-env` to enable the pipeline to push into it.

![alt_text](images/deploy-ssh-keys.png "Deploy SSH keys")

Create an SSH_KEY [variable](https://docs.gitlab.com/ee/ci/variables/#custom-cicd-variables) (the base64-encoded private key) in the hello-kubernetes repo by going to Settings > CI/CD > Variables. Make sure to mask your variables and uncheck "protect variable".

![alt_text](images/environment-variables.png "Gitlab environment variables")

## Register gitlab runner

[Gitlab runner](https://docs.gitlab.com/runner/) is what runs the jobs in the gitlab pipeline. In this tutorial, we'll install the Gitlab runner application on our dev and prod clusters and register it as our gitlab runner.

Before registering the gitlab runner, we’ll enable [workload identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#enable_on_cluster), which is the recommended way to access Google cloud services from applications running within GKE.

Set up workload identity on your clusters:

```bash
for i in "dev" "prod"; do
  gcloud container clusters update ${i} \
    --workload-pool=$PROJECT_ID.svc.id.goog
  gcloud container node-pools update default-pool \
    --cluster=${i} \
    --zone=$ZONE \
    --workload-metadata=GKE_METADATA
  gcloud container clusters get-credentials ${i} --zone=$ZONE
  kubectl create namespace gitlab
  kubectl create serviceaccount --namespace gitlab gitlab-runner
done

gcloud iam service-accounts create gitlab-sa

gcloud iam service-accounts add-iam-policy-binding \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:$PROJECT_ID.svc.id.goog[gitlab/gitlab-runner]" \
  gitlab-sa@$PROJECT_ID.iam.gserviceaccount.com

kubectl annotate serviceaccount \
  --namespace gitlab \
  gitlab-runner \
  iam.gke.io/gcp-service-account=gitlab-sa@$PROJECT_ID.iam.gserviceaccount.com
```

Some actions in the pipeline require the runner to have certain IAM roles. You can permit the runner to perform these actions by assigning the roles to the workload identity service account.
Give the workload identity service account the required roles and permissions:

```bash
gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member "serviceAccount:gitlab-sa@$PROJECT_ID.iam.gserviceaccount.com" \
  --role "roles/storage.admin"

gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member "serviceAccount:gitlab-sa@$PROJECT_ID.iam.gserviceaccount.com" \
  --role "roles/binaryauthorization.attestorsViewer"

gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member "serviceAccount:gitlab-sa@$PROJECT_ID.iam.gserviceaccount.com" \
  --role "roles/cloudkms.signerVerifier"

gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member "serviceAccount:gitlab-sa@$PROJECT_ID.iam.gserviceaccount.com" \
  --role "roles/containeranalysis.occurrences.editor"

gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member "serviceAccount:gitlab-sa@$PROJECT_ID.iam.gserviceaccount.com" \
  --role "roles/containeranalysis.notes.editor"

gcloud kms keys add-iam-policy-binding "qa-signer" \
  --project "${PROJECT_ID}" \
  --location "${REGION}" \
  --keyring "binauthz" \
  --member "serviceAccount:gitlab-sa@$PROJECT_ID.iam.gserviceaccount.com" \
  --role 'roles/cloudkms.signerVerifier'
```

Deploy the gitlab runner via the acm repo:

```bash
cd ~/$GROUP_NAME/acm/namespaces
cat > gitlab-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitlab-runner
rules:
- apiGroups: ["*"] # "*" matches all API groups, including the core group
  resources: ["*"]
  verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: gitlab-runner
subjects:
- kind: ServiceAccount
  name: gitlab-runner
  namespace: gitlab
roleRef:
  kind: Role
  name: gitlab-runner
  apiGroup: rbac.authorization.k8s.io
EOF

mkdir operations/
cd operations/
cat > config-map.yaml << EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: gitlab-runner-config
  annotations:
    configmanagement.gke.io/cluster-selector: prod-cluster-selector
data:
  kubernetes-namespace: "gitlab"
  kubernetes-service-account: "gitlab-runner"
  gitlab-server-address: "https://gitlab.com/"
  runner-tag-list: "prod"
  entrypoint: |
    #!/bin/bash
    set -xe

    # Register the runner
    /entrypoint register --non-interactive \\
      --url \$GITLAB_SERVER_ADDRESS \\
      --registration-token \$REGISTRATION_TOKEN \\
      --executor kubernetes

    # Start the runner
    /entrypoint run --user=gitlab-runner \\
      --working-directory=/home/gitlab-runner
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: gitlab-runner-config
  annotations:
    configmanagement.gke.io/cluster-selector: dev-cluster-selector
data:
  kubernetes-namespace: "gitlab"
  kubernetes-service-account: "gitlab-runner"
  gitlab-server-address: "https://gitlab.com/"
  runner-tag-list: "dev"
  entrypoint: |
    #!/bin/bash
    set -xe

    # Register the runner
    /entrypoint register --non-interactive \\
      --url \$GITLAB_SERVER_ADDRESS \\
      --registration-token \$REGISTRATION_TOKEN \\
      --executor kubernetes

    # Start the runner
    /entrypoint run --user=gitlab-runner \\
      --working-directory=/home/gitlab-runner
EOF

cat > deployment.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitlab-runner
spec:
  replicas: 1
  selector:
    matchLabels:
      name: gitlab-runner
  template:
    metadata:
      labels:
        name: gitlab-runner
    spec:
      serviceAccountName: gitlab-runner
      containers:
      - command:
        - /bin/bash
        - /scripts/entrypoint
        image: gcr.io/abm-test-bed/gitlab-runner@sha256:8f623d3c55ffc783752d0b34097c5625a32a910a8c1427308f5c39fd9a23a3c0
        imagePullPolicy: IfNotPresent
        name: gitlab-runner
        resources:
          requests:
            cpu: "100m"
          limits:
            cpu: "100m"
        env:
        - name: GITLAB_SERVER_ADDRESS
          valueFrom:
            configMapKeyRef:
              name: gitlab-runner-config
              key: gitlab-server-address
        - name: RUNNER_TAG_LIST
          valueFrom:
            configMapKeyRef:
              name: gitlab-runner-config
              key: runner-tag-list
        - name: KUBERNETES_NAMESPACE
          valueFrom:
            configMapKeyRef:
              name: gitlab-runner-config
              key: kubernetes-namespace
        - name: KUBERNETES_SERVICE_ACCOUNT
          valueFrom:
            configMapKeyRef:
              name: gitlab-runner-config
              key: kubernetes-service-account
        - name: REGISTRATION_TOKEN
          valueFrom:
            secretKeyRef:
              name: gitlab-runner-secret
              key: runner-registration-token
        volumeMounts:
        - name: config
          mountPath: /scripts/entrypoint
          readOnly: true
          subPath: entrypoint
        - mountPath: /tmp/template.config.toml
          name: config
          subPath: template.config.toml
      volumes:
      - name: config
        configMap:
          name: gitlab-runner-config
      restartPolicy: Always
EOF

mkdir gitlab
cd gitlab/
cat > namespace.yaml << EOF
apiVersion: v1
kind: Namespace
metadata:
  name: gitlab
EOF

cd ..
cat > service-account.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-runner
  annotations:
    iam.gke.io/gcp-service-account: gitlab-sa@$PROJECT_ID.iam.gserviceaccount.com
EOF
```

Push changes to the acm repo:

```bash
git add -A
git commit -m "register gitlab runner"
git push origin dev
```

Merge the `acm` dev branch with the main branch.

Create gitlab-runner secret:

```bash
for i in "dev" "prod"; do
  gcloud container clusters get-credentials ${i} --zone=$ZONE
  kubectl create secret generic gitlab-runner-secret -n gitlab \
    --from-literal=runner-registration-token=<REGISTRATION_TOKEN>
done
```

To find your `REGISTRATION_TOKEN`, navigate to the $GROUP_NAME [group](https://gitlab.com/dashboard/groups) page, then click Settings > CI/CD > Runners > Expand.

![alt_text](images/registration-token.png "Registration token")

Verify your runners have been created under Settings > CI/CD > Runners > Expand. You should see two runners listed under group runners.

**Verify that the app is deployed on dev.**

## hello-kubernetes-env

Using the `<app-repo>`-env model allows one more layer of checks before a deployment is pushed to production. The hydrated manifests are pushed to hello-kubernetes-env: the stage manifests are pushed to the stage branch and the prod manifests are pushed to the prod branch. In this repo you can set up manual checks to ensure a human reviews and approves changes before they are pushed to production.
Create .gitlab-ci.yml for hello-kubernetes-env:

```bash
cd ~/$GROUP_NAME/
git clone git@gitlab.com:$GROUP_URI/hello-kubernetes-env.git
cd hello-kubernetes-env/
cat > .gitlab-ci.yml << EOF
image: google/cloud-sdk

variables:
  KUBERNETES_SERVICE_ACCOUNT_OVERWRITE: default
  KUSTOMIZATION_PATH_BASE: "./base"
  KUSTOMIZATION_PATH_DEV: "./kubernetes/overlays/dev"
  KUSTOMIZATION_PATH_NON_PROD: "./kubernetes/overlays/stage"
  KUSTOMIZATION_PATH_PROD: "./kubernetes/overlays/prod"
  HOSTNAME: "gcr.io"
  PROJECT_ID: "$PROJECT_ID"
  CONTAINER_NAME: "hello-kubernetes"
  # Binary Authorization Variables
  _VULNZ_ATTESTOR: "vulnz-attestor"
  _VULNZ_KMS_KEY_VERSION: "1"
  _VULNZ_KMS_KEY: "vulnz-signer"
  _KMS_KEYRING: "binauthz"
  _KMS_LOCATION: "us-central1"
  _COMPUTE_REGION: "us-central1"
  _PROD_CLUSTER: "prod"
  _STAGING_CLUSTER: "dev"
  _QA_ATTESTOR: "qa-attestor"
  _QA_KMS_KEY: "qa-signer"
  _QA_KMS_KEY_VERSION: "1"

stages:
  - Deploy Stage
  - Manual Verification
  - Attest QA Deployment
  - Deploy Full GKE

deploy-stage:
  stage: Deploy Stage
  tags:
    - dev
  only:
    - stage
  environment:
    name: nonprod
    url: https://\$CI_ENVIRONMENT_SLUG
  script:
    - kubectl apply -f stage.yaml

manual-verification:
  stage: Manual Verification
  tags:
    - dev
  only:
    - stage
  script:
    - echo "run uat here"
    - sleep 1s
  when: manual
  allow_failure: false

attest-qa-deployment:
  stage: Attest QA Deployment
  tags:
    - dev
  only:
    - stage
  environment:
    name: nonprod
    url: https://\$CI_ENVIRONMENT_SLUG
  retry: 2
  script:
    - kubectl apply -f stage.yaml --dry-run -o jsonpath='{range .items[*]}{.spec.template.spec.containers[*].image}{"\n"}' | sed '/^[[:space:]]*$/d' | sed 's/ /\n/g' > nonprd_images.txt
    - cat nonprd_images.txt
    - |
      while IFS= read -r IMAGE; do
        if [[ \$(gcloud beta container binauthz attestations list \\
            --project "\$PROJECT_ID" \\
            --attestor "\$_QA_ATTESTOR" \\
            --attestor-project "\$PROJECT_ID" \\
            --artifact-url "\$IMAGE" \\
            2>&1 | grep "\$_QA_KMS_KEY" | wc -l) = 0 ]]; then
          gcloud beta container binauthz attestations sign-and-create \\
            --project "\$PROJECT_ID" \\
            --artifact-url "\$IMAGE" \\
            --attestor "\$_QA_ATTESTOR" \\
            --attestor-project "\$PROJECT_ID" \\
            --keyversion "\$_QA_KMS_KEY_VERSION" \\
            --keyversion-key "\$_QA_KMS_KEY" \\
            --keyversion-location "\$_KMS_LOCATION" \\
            --keyversion-keyring "\$_KMS_KEYRING" \\
            --keyversion-project "\$PROJECT_ID"
          echo "Attested Image \$IMAGE"
        fi
      done < nonprd_images.txt

deploy-production:
  stage: Deploy Full GKE
  tags:
    - prod
  only:
    - prod
  environment:
    name: production
    url: https://\$CI_ENVIRONMENT_SLUG
  script:
    - kubectl apply -f production.yaml
EOF

git add .
git commit -m "Add stage pipeline"
git push
```

In the hello-kubernetes-env repo on gitlab, create a branch named `stage` from `prod`.
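Once a stage deployment has been verified and attested, the change still needs to reach the `prod` branch before the deploy-production job runs. One possible promotion flow is a merge request from `stage` into `prod` in the GitLab UI; a local equivalent is sketched below (a hypothetical flow, not the only option):

```bash
# Promote verified manifests by merging stage into prod locally
cd ~/$GROUP_NAME/hello-kubernetes-env/
git checkout prod
git pull origin prod
git merge stage
git push origin prod
```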
branch In this repo you can set up manual checks to ensure a human reviews and approve changes before it s pushed to production Create gitlab ci yml for hello kubernetes env bash cd GROUP NAME git clone git gitlab com GROUP URI hello kubernetes env git cd hello kubernetes env cat gitlab ci yml EOF image google cloud sdk variables KUBERNETES SERVICE ACCOUNT OVERWRITE default KUSTOMIZATION PATH BASE base KUSTOMIZATION PATH DEV kubernetes overlays dev KUSTOMIZATION PATH NON PROD kubernetes overlays stage KUSTOMIZATION PATH PROD kubernetes overlays prod HOSTNAME gcr io PROJECT ID PROJECT ID CONTAINER NAME hello kubernetes Binary Authorization Variables VULNZ ATTESTOR vulnz attestor VULNZ KMS KEY VERSION 1 VULNZ KMS KEY vulnz signer KMS KEYRING binauthz KMS LOCATION us central1 COMPUTE REGION us central1 PROD CLUSTER prod STAGING CLUSTER dev QA ATTESTOR qa attestor QA KMS KEY qa signer QA KMS KEY VERSION 1 stages Deploy Stage Manual Verification Attest QA Deployment Deploy Full GKE deploy stage stage Deploy Stage tags dev only stage environment name nonprod url https CI ENVIRONMENT SLUG script kubectl apply f stage yaml manual verification stage Manual Verification tags dev only stage script echo run uat here sleep 1s when manual allow failure false attest qa deployment stage Attest QA Deployment tags dev only stage environment name nonprod url https CI ENVIRONMENT SLUG retry 2 script kubectl apply f stage yaml dry run o jsonpath range items spec template spec containers image n sed space d sed s n g nonprd images txt cat nonprd images txt while IFS read r IMAGE do if gcloud beta container binauthz attestations list project PROJECT ID attestor QA ATTESTOR attestor project PROJECT ID artifact url IMAGE 2 1 grep QA KMS KEY wc l 0 then gcloud beta container binauthz attestations sign and create project PROJECT ID artifact url IMAGE attestor QA ATTESTOR attestor project PROJECT ID keyversion QA KMS KEY VERSION keyversion key QA KMS KEY keyversion location KMS LOCATION keyversion keyring KMS KEYRING keyversion project PROJECT ID echo Attested Image IMAGE fi done nonprd images txt deploy production stage Deploy Full GKE tags prod only prod environment name production url https CI ENVIRONMENT SLUG script kubectl apply f production yaml EOF git add git commit m Add stage pipeline git push In the hello kubernetes env repo on gitlab create a branch named stage from prod
# Hello Kubernetes!

This container image can be deployed on a Kubernetes cluster. When accessed via a web browser on port `8080`, it will display:

- a default **Hello world!** message
- the pod name
- node os information

![Hello world! from the hello-kubernetes image](hello-kubernetes.png)

The default "Hello world!" message displayed can be overridden using the `MESSAGE` environment variable. The default port of 8080 can be overridden using the `PORT` environment variable.

## DockerHub

It is available on DockerHub as:

- [paulbouwer/hello-kubernetes:1.8](https://hub.docker.com/r/paulbouwer/hello-kubernetes/)

## Deploy

### Standard Configuration

Deploy to your Kubernetes cluster using the hello-kubernetes.yaml, which contains definitions for the service and deployment objects:

```yaml
# hello-kubernetes.yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.8
        ports:
        - containerPort: 8080
```

```bash
$ kubectl apply -f yaml/hello-kubernetes.yaml
```

This will display a **Hello world!** message when you hit the service endpoint in a browser. You can get the service endpoint IP address by executing the following command and grabbing the returned external IP address value:

```bash
$ kubectl get service hello-kubernetes
```

### Customise Message

You can customise the message displayed by the `hello-kubernetes` container. Deploy using the hello-kubernetes.custom-message.yaml, which contains definitions for the service and deployment objects. In the definition for the deployment, add an `env` variable with the name of `MESSAGE`. The value you provide will be displayed as the custom message.

```yaml
# hello-kubernetes.custom-message.yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-custom
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes-custom
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-custom
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes-custom
  template:
    metadata:
      labels:
        app: hello-kubernetes-custom
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.8
        ports:
        - containerPort: 8080
        env:
        - name: MESSAGE
          value: I just deployed this on Kubernetes!
```

```bash
$ kubectl apply -f yaml/hello-kubernetes.custom-message.yaml
```

### Specify Custom Port

By default, the `hello-kubernetes` app listens on port `8080`. If you have a requirement for the app to listen on another port, you can specify the port via an env variable with the name of `PORT`. Remember to also update the `containers.ports.containerPort` value to match. Here is an example:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-custom
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes-custom
  template:
    metadata:
      labels:
        app: hello-kubernetes-custom
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.8
        ports:
        - containerPort: 80
        env:
        - name: PORT
          value: "80"
```

## Build Container Image

If you'd like to build the image yourself, then you can do so as follows.
The `build-arg` parameters provide metadata as defined in [OCI image spec annotations](https://github.com/opencontainers/image-spec/blob/master/annotations.md).

Bash
```bash
$ docker build --no-cache --build-arg IMAGE_VERSION="1.8" --build-arg IMAGE_CREATE_DATE="`date -u +"%Y-%m-%dT%H:%M:%SZ"`" --build-arg IMAGE_SOURCE_REVISION="`git rev-parse HEAD`" -f Dockerfile -t "hello-kubernetes:1.8" app
```

Powershell
```powershell
PS> docker build --no-cache --build-arg IMAGE_VERSION="1.8" --build-arg IMAGE_CREATE_DATE="$(Get-Date((Get-Date).ToUniversalTime()) -UFormat '%Y-%m-%dT%H:%M:%SZ')" --build-arg IMAGE_SOURCE_REVISION="$(git rev-parse HEAD)" -f Dockerfile -t "hello-kubernetes:1.8" app
```

## Develop Application

If you have [VS Code](https://code.visualstudio.com/) and the [Visual Studio Code Remote - Containers](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers) extension installed, the `.devcontainer` folder will be used to build a container-based Node.js 13 development environment. Port `8080` has been configured to be forwarded to your host.

If you run `npm start` in the `app` folder in the VS Code Remote Containers terminal, you will be able to access the website on `http://localhost:8080`. You can change the port in the `.devcontainer\devcontainer.json` file under the `appPort` key.

See [here](https://code.visualstudio.com/docs/remote/containers) for more details on working with this setup.
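If you would rather try the app directly on your host instead of inside the devcontainer, the sketch below is one possible way to do it. It is not part of the documented workflow and simply assumes Node.js and npm are installed locally and that the app in the `app` folder honors the `MESSAGE` and `PORT` environment variables described above.

```bash
# Minimal local run (assumes Node.js/npm on the host; not part of the documented workflow)
cd app
npm install

# Override the defaults described above, then browse to http://localhost:8081
MESSAGE="Hello from my local checkout!" PORT=8081 npm start
```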
# Composer Dependency Management

### TL;DR:

This repository presents a Cloud Composer workflow designed to orchestrate complex task dependencies within Apache Airflow. The solution specifically addresses the challenge of managing parent-child DAG relationships across varying temporal frequencies (yearly, monthly, weekly). By implementing a similar framework, data engineers can ensure reliable and timely triggering of child DAGs in accordance with their respective parent DAG's schedule, enhancing overall workflow efficiency and maintainability.

The goal of this use-case is to provide a common pattern to automatically trigger child DAGs and implement Composer dependency management. The primary challenge addressed is the need to handle complex dependencies between DAGs with different frequencies. The solution leverages Airflow's dependency management capabilities by dynamically configuring the `execution_date_fn` parameter of the [Airflow External Task Sensor](https://airflow.apache.org/docs/apache-airflow/stable/_api/airflow/sensors/external_task/index.html) to create a hierarchical relationship between the parent and child DAGs.

***Solution DAG code-snippet for Dependency Management using [external_task_sensor](https://airflow.apache.org/docs/apache-airflow/stable/_api/airflow/sensors/external_task/index.html) with yearly schedule frequency:***

```python
# Define parent task IDs and external DAG IDs
parent_tasks = [
    {"task_id": "parent_task_1", "dag_id": "company_cal_refresh", "schedule_frequency": "yearly"}
]

def execution_delta_dependency(logical_date, **kwargs):
    dt = logical_date
    task_instance_id = str(kwargs['task_instance']).split(':')[1].split(' ')[1].split('.')[1]
    res = None
    for sub in parent_tasks:
        if sub['task_id'] == task_instance_id:
            res = sub
            break
    schedule_frequency = res['schedule_frequency']
    parent_dag_poke = ''
    if schedule_frequency == "monthly":
        parent_dag_poke = dt.replace(day=1).replace(hour=0, minute=0, second=0, microsecond=0)
    elif schedule_frequency == "weekly":
        parent_dag_poke = (dt - timedelta(days=dt.isoweekday() % 7)).replace(hour=0, minute=0, second=0, microsecond=0)
    elif schedule_frequency == "yearly":
        parent_dag_poke = dt.replace(day=1, month=1, hour=0, minute=0, second=0, microsecond=0)
    elif schedule_frequency == "daily":
        parent_dag_poke = dt.replace(hour=0, minute=0, second=0, microsecond=0)
    print(parent_dag_poke)
    return parent_dag_poke

# Create external task sensors dynamically
external_task_sensors = []
for parent_task in parent_tasks:
    external_task_sensor = ExternalTaskSensor(
        task_id=parent_task["task_id"],
        external_dag_id=parent_task["dag_id"],
        timeout=900,
        execution_date_fn=execution_delta_dependency,
        poke_interval=60,  # Check every 60 seconds
        mode="reschedule",  # Reschedule task if external task fails
        check_existence=True
    )
    external_task_sensors.append(external_task_sensor)
```

### Hypothetical use case

#### Workflow Overview
***
The workflow involves the following steps:

1. **Create Parent DAGs**:
   - Create separate DAGs for each parent job (yearly, monthly, and weekly).
   - Define the schedule for each parent DAG accordingly.

2. **Define Child DAGs**:
   - Create child DAGs for each task that needs to be executed based on the parent's schedule.

3. **Set Dependencies**:
   - Use an `ExternalTaskSensor` to establish the dependency between the child DAG and its immediate parent DAG (a minimal child-DAG wiring sketch is included at the end of this README).

4. **Trigger Child DAGs**:
   - Utilize Airflow's `TriggerDagRunOperator` to trigger child DAGs when the parent DAG completes.
   - Configure the `wait_for_downstream` parameter to specify the conditions under which the child DAG should be triggered.

5. **Handle Data Lineage**:
   - Ensure that the child DAGs have access to the necessary data generated by the parent DAG.
   - Consider using Airflow's XComs or a central data store for data sharing.

![Alt text](img/composer_mgmt_usecase.png "Workflow Overview")

### Benefits

- Improved DAG organization and maintainability.
- Simplified dependency management.
- Reliable execution of child DAGs based on parent schedules.
- Reduced risk of data inconsistencies.
- Scalable approach for managing complex DAG dependencies.

## Hypothetical User Story: The Symphony of Data Orchestration

In the bustling city of San Francisco, a dynamic e-commerce company named "Symphony Goods" was on a mission to revolutionize the online shopping experience. At the heart of their success was a robust data infrastructure that seamlessly managed and processed vast amounts of information.

### Symphony Goods Data Workflows

Symphony Goods relied on a sophisticated data orchestration system powered by Apache Airflow to automate and streamline their data workflows. This system consisted of a series of interconnected data pipelines, each designed to perform specific tasks and produce valuable insights.

#### Yearly Refresh: Company Calendar

Once a year, Symphony Goods executed a critical process known as ["Company_cal_refresh"](company_cal_refresh.py). This workflow ensured that the company's internal calendars were synchronized across all departments and systems. It involved extracting data from various sources, such as employee schedules, project timelines, and public holidays, and consolidating it into a centralized repository. The updated calendar served as a single source of truth, enabling efficient planning, resource allocation, and communication within the organization.

#### Monthly Refresh: Product Catalog

Every month, Symphony Goods performed a "Product_catalog_refresh" workflow to keep its product catalog up-to-date. This process involved ingesting data from multiple channels, including supplier feeds, internal databases, and customer feedback. The workflow validated, transformed, and enriched the product information, ensuring that customers had access to accurate and comprehensive product details.

#### Weekly Summary Report

Symphony Goods generated a "Weekly_summary_report" every week to monitor key performance indicators (KPIs) and track business growth. The workflow aggregated data from various sources, such as sales figures, customer engagement metrics, and website traffic analytics. It then presented the data in visually appealing dashboards and reports, enabling stakeholders to make informed decisions.

#### Daily Refresh: Product Inventory

To ensure optimal inventory management, Symphony Goods ran a ["Product_inventory_refresh"](product_inventory_refresh.py) workflow on a daily basis. This workflow extracted inventory data from warehouses, distribution centers, and point-of-sale systems. It calculated available stock levels, identified potential stockouts, and provided recommendations for replenishment. The workflow ensured that Symphony Goods could fulfill customer orders promptly and maintain high levels of customer satisfaction.

The symphony of data orchestration at Symphony Goods was a testament to the power of automation and integration.
By leveraging Apache Airflow, the company was able to streamline its data operations, improve data quality, and gain valuable insights to drive business growth. As Symphony Goods continued to scale its operations, the data orchestration system served as the backbone, ensuring that data was always available, accurate, and actionable. ### Workflow Frequencies 1. **Yearly**: [Company_cal_refresh](company_cal_refresh.py) 2. **Monthly**: [Product_catalog_refresh](product_catalog_refresh.py) 3. **Weekly**: [Weekly_summary_report](weekly_summary_report.py) 4. **Daily**: [Product_inventory_refresh](product_inventory_refresh.py) ## Use-case Lineage: Summary of Lineage and Dependencies The provided context describes the data orchestration system used by Symphony Goods, an e-commerce company in San Francisco. The system is powered by Apache Airflow and consists of four main workflows: 1. **Yearly: Company_cal_refresh** - Synchronizes internal calendars across all departments and systems, ensuring efficient planning and resource allocation. - Depends on data from employee schedules, project timelines, and public holidays. 2. **Monthly: Product_catalog_refresh** - Keeps the product catalog up-to-date by ingesting data from multiple channels and validating, transforming, and enriching it. - Depends on data from supplier feeds, internal databases, and customer feedback. 3. **Weekly: Weekly_summary_report** - Generates weekly summary reports to monitor key performance indicators (KPIs) and track business growth. - Depends on data from sales figures, customer engagement metrics, and website traffic analytics. 4. **Daily: Product_inventory_refresh** - Ensures optimal inventory management by extracting inventory data from various sources and calculating available stock levels. - Depends on data from warehouses, distribution centers, and point-of-sale systems. The symphony of data orchestration at Symphony Goods is a testament to the power of automation and integration. By leveraging Apache Airflow, the company was able to streamline its data operations, improve data quality, and gain valuable insights to drive business growth.
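### Example: Wiring the Sensors into a Child DAG (illustrative)

To make the sensor snippet above concrete, the sketch below shows one way a child DAG might place the dynamically created sensors ahead of its own work. This is a minimal, hedged illustration rather than this repository's actual DAG code: the DAG id `child_summary_refresh`, its schedule, and the placeholder `start_refresh` task are hypothetical, and the imports assume Airflow 2.x.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.empty import EmptyOperator  # Airflow >= 2.3; use DummyOperator on older versions
from airflow.sensors.external_task import ExternalTaskSensor

# Hypothetical parent list, mirroring the structure used in the snippet above.
parent_tasks = [
    {"task_id": "parent_task_1", "dag_id": "company_cal_refresh", "schedule_frequency": "yearly"},
]

def execution_delta_dependency(logical_date, **kwargs):
    """Map this DAG's logical date onto the parent's schedule (same idea as the snippet above)."""
    dt = logical_date
    # For brevity, this sketch only handles the yearly case.
    return dt.replace(month=1, day=1, hour=0, minute=0, second=0, microsecond=0)

with DAG(
    dag_id="child_summary_refresh",   # hypothetical child DAG
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # One sensor per parent, created dynamically as in the snippet above.
    sensors = [
        ExternalTaskSensor(
            task_id=parent["task_id"],
            external_dag_id=parent["dag_id"],
            execution_date_fn=execution_delta_dependency,
            timeout=900,
            poke_interval=60,
            mode="reschedule",
            check_existence=True,
        )
        for parent in parent_tasks
    ]

    # Placeholder for the child DAG's real work.
    start_refresh = EmptyOperator(task_id="start_refresh")

    # The child's work only starts once every parent run has completed.
    sensors >> start_refresh
```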
# Better Consumer Complaint and Support Request Handling With AI

## Contributors

- Dimos Christopoulos (Google)
- [Shane Kok](https://www.linkedin.com/in/shane-kok-b1970a82/) ([email protected])
- Andrew Leach (Google)
- Anastasiia Manokhina (Google)
- [Karan Palsani](https://www.linkedin.com/in/karanpalsani/) ([email protected])
- Michael Sherman (Google)
- [Michael Sparkman](https://www.linkedin.com/in/michael-sparkman/) ([email protected])
- [Sahana Subramanian](https://www.linkedin.com/in/sahana-subramanian/) ([email protected])

# Overview

This example shows how to use ML models to predict a company's response to consumer complaints using the public [CFPB Consumer Complaint Database](https://console.cloud.google.com/marketplace/details/cfpb/complaint-database?filter=solution-type:dataset&id=5a1b3026-d189-4a35-8620-099f7b5a600b) on BigQuery. It provides an implementation of [AutoML Tables](https://cloud.google.com/automl-tables) for model training and batch prediction, and has a flexible config-driven BigQuery SQL pipeline for adapting to new data sources.

This specific example identifies the outcomes of customer complaints, which could serve a customer support workflow that routes risky cases to specific support channels. But this example can be adapted to other support use cases by changing the label of the machine learning model. For example:

* Routing support requests to specific teams.
* Identifying support requests appropriate for templated vs. manual responses.
* Prioritization of support requests.
* Identifying a specific product (or products) needing support.

## Directory Structure

```
.
β”œβ”€β”€ scripts    # Python scripts for running the data and modeling pipeline.
β”œβ”€β”€ queries    # SQL queries for data manipulation, cleaning, and transformation.
β”œβ”€β”€ notebooks  # Jupyter notebooks for data exploration. Not part of the pipeline codebase, not reviewed, not tested in the pipeline environment, and dependent on 3rd party Python packages not required by the pipeline. Provided for reference only.
└── config     # Project configuration and table ingestion schemas. The configuration for the pipeline is all in `pipeline.yaml`.
```

## Solution Diagram

The diagram represents what each of the scripts does, including the structure of tables created at each step:

![diagram](./solution-diagram.png)

## Configuration Overview

The configuration provided with the code is `config/pipeline.yaml`. This configuration information is used by pipeline scripts and for substitution into SQL queries stored in the `queries` folder. Basic configuration changes necessary when running the pipeline are discussed with the pipeline running instructions below.

We recommend making a separate copy of the configuration when you have to change configuration parameters. All pipeline steps are run with the config file as a command line option, and using separate copies makes tracking different pipeline runs more manageable.

The main sections of the configuration are:

* `file_paths`: Absolute locations of files read by the pipeline. These will have to be changed to fit your environment.
* `global`: Core configuration information used by multiple steps of the pipeline. It contains the names of the BigQuery dataset and tables, the ID of the Google Cloud Platform project, AutoML Tables model/data identification parameters, etc.
* `query_files`: Filenames of SQL queries used by the pipeline.
* `query_params`: Parameters for substitution into individual SQL queries.
* `model`: Configuration information for the AutoML Tables Model. Includes parameters on training/optimizing the model, identification of key columns in the training data (e.g., the target), training data columns to exclude from model building, and type configuration for each feature used by the model.

## Instructions for Running the Pipeline to Predict Company Responses to Consumer Complaints

All instructions were tested on a [Cloud AI Platform Notebook](https://cloud.google.com/ai-platform/notebooks/docs/) instance, created through the [UI](https://console.cloud.google.com/ai-platform/notebooks/instances). If you are running in another environment, you'll have to set up the [`gcloud` SDK](https://cloud.google.com/sdk/install), install Python 3 and virtualenv, and possibly manage other dependencies. We have not tested these instructions in other environments.

**All commands, unless otherwise stated, should be run from the directory containing this README.**

## Enable Required APIs in your Project

These instructions have been tested in a fresh Google Cloud project without any organization constraints. You should be able to run the code in an existing project, but make sure the following APIs are enabled, and make sure these products can communicate with one another--if you're running in a VPC or have organization-imposed firewall rules or product restrictions you may have some difficulty.

Required APIs to enable:

1. [Compute Engine API](https://console.cloud.google.com/apis/api/compute.googleapis.com/)
1. [BigQuery API](https://console.cloud.google.com/apis/api/bigquery.googleapis.com/)
1. [Cloud AutoML API](https://console.cloud.google.com/apis/api/automl.googleapis.com/)
1. [Cloud Storage API](https://console.cloud.google.com/apis/api/storage-component.googleapis.com/)

### Setup for a New Local Environment

These steps should be followed before you run the pipeline for the first time from a new development environment. As stated previously, these instructions have been tested in a [Google Cloud AI Platform Notebook](https://console.cloud.google.com/ai-platform/notebooks/instances).

1. Run `gcloud init`, choose to use a new account, authenticate, and [set your project ID](https://cloud.google.com/resource-manager/docs/creating-managing-projects#identifying_projects) as the project. Choose a region in the US if prompted to set a default region.
1. Clone the github project.
1. Navigate to the directory containing this readme.
1. Create a Python 3 virtual environment (`automl-support` in this example, in your home directory):
   * Run `python3 -m virtualenv $HOME/env/automl-support`.
   * Activate the environment. Run: `source ~/env/automl-support/bin/activate`.
   * Install the required Python packages: `pip install -r requirements.txt`. You may get an error about apache-beam and pyyaml version incompatibilities; this will have no effect.

### Required Configuration Changes

Configuration is read from a file specified when running the pipeline from the command line. We recommend working with different copies of the configuration for different experiments, environments, and other needs. Note that if values in the configuration match existing tables, resources, etc. in your project, strange errors and possibly data loss may result.

The default values in `config/pipeline.yaml` provided with the code should be changed before running the pipeline.

1. Make a copy of the configuration file: `cp config/pipeline.yaml config/my_config.yaml`.
1. Edit `config/my_config.yaml` and make the following changes, then save:
   * `file_paths.queries` is the path to the queries subfolder. Change this value to the absolute local path where the queries subfolder resides.
   * `global.destination_project_id` is the project_id of the project you want to run the pipeline in (and where the AutoML models will live). Change this to your project_id.
1. Also consider changing the following:
   * `global.destination_dataset` is the BigQuery dataset where data ingested by the pipeline into your project is stored. Note the table names don't need to change, since they will be written to the new dataset. Make sure this dataset doesn't already exist in your project. If this dataset exists, the training pipeline will fail--you'll need to delete the dataset first.
   * `global.dataset_display_name` and `global.model_display_name` are the names of the AutoML Tables dataset and model created by the pipeline. Change these to new values if you wish (they can be the same). You should create a new config file and change these parameters for every full pipeline run. For failed pipeline runs, you'll want to delete the resources specified in these config values since the pipeline will not delete existing resources automatically.

Note that on subsequent pipeline runs if you aren't rerunning ingestion you don't need to change `global.destination_dataset`, and if you aren't rerunning the model build you don't need to change `global.dataset_display_name` and `global.model_display_name`.

If you need to change the default paths (because you are running somewhere besides an AI Platform Notebook, because your repo is in a different path, or because your AutoML service account key is in a different location) change the values in `file_paths`.

### Running the Pipeline

These steps have only been tested for users with the "Owner" [IAM role](https://cloud.google.com/iam/docs/understanding-roles#primitive_role_definitions) in your project. These steps should work for the "Editor" role as well, but we have not tested it.

All commands should be run from the project root (the folder with this README). This assumes your config file is in `config/my_config.yaml`.

1. Activate the Python environment if it is not already activated. Run: `source ~/env/automl-support/bin/activate` or similar (see Setup for a New Environment, above).
1. Run the model pipeline: `nohup bash run_pipeline.sh config/my_config.yaml ftp > pipeline.out & disown`. This command will run the pipeline in the background, save logs to `pipeline.out`, and will not terminate if the terminal is closed. It will run all steps of the pipeline in sequence, or a subset of the steps as determined by the second positional arg (MODE). For example, `fp` instead of `ftp` would create features and then generate predictions using the model specified in the config. Pipeline steps (`$MODE` argument):
   * Create features (f): This creates the dataset of features (config value `global.destination_dataset`) and feature tables.
   * Train (t): This creates the training dataset in AutoML Tables Forecasting (config value `global.dataset_display_name`) and trains the model (config value `global.model_display_name`). Note that in the AutoML Tables UI the dataset will appear as soon as it is created but the model will not appear until it is completely trained.
   * Predict (p): This makes predictions with the model, and copies the unformatted results to a predictions table (config value `global.predictions_table`).
AutoML generates its own dataset in BQ, which will contain errors if predictions for any rows fail. This spurious dataset (named prediction_<model_name>_<timestamp>) will be deleted if there are no errors.

This command pipes its output to a log file (`pipeline.out`). To follow this log file, run `tail -n 5 -f pipeline.out` to monitor the command while it runs.

Some of the AutoML steps are long-running operations. If you're following the logged output, you'll see increasingly long sleeps between API calls. This is expected behavior. AutoML training can take hours, depending on your config settings. With the default settings, you can expect around two hours to complete the pipeline and model training.

**Note:** If the pipeline is run and the destination dataset has already been created, the run will fail. Use the BQ UI, client, or command line interface to delete the dataset, or select new destinations (`global.destination_dataset`) in the config. AutoML also does not enforce that display names are unique; if multiple datasets or models are created with the same name, the run will fail. Use the AutoML UI or client to delete them, or select new display names in the config (`global.dataset_display_name` and `global.model_display_name`).

### Common Configuration Changes

* Change `model.train_budget_hours` to control how long the model trains for. The default is 1, but you should expect an extra hour of spin-up time on top of the training budget. Upping the budget may improve model performance.

## Online Predictions

The example pipeline makes batch predictions, but a common deployment pattern is to create an API endpoint that receives features and returns a prediction. Do the following steps to deploy a model for online prediction, make a prediction, and then undeploy the model. **Do not leave your model deployed; deployed models can easily cost tens to hundreds of dollars a day.**

All commands should be run from the project root (the folder with this README). This assumes your config file is in `config/my_config.yaml`.

1. Make sure you have activated the same virtual environment used for the model training pipeline.
1. Deploy the model: `bash online_predict.sh config/my_config.yaml deploy`. Take note of the "name" value in the response.
1. Deployment will take up to 15 minutes. To check the status of the deployment run the following command, replacing "operation-name" with the "name" value from the previous step: `curl -X GET -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" -H "Content-Type: application/json" https://automl.googleapis.com/v1beta1/operation-name`. When the operation is complete, the response will have a "done" item with the value "true".
1. Make a prediction using the provided sample predict_payload.json, containing features of an unlabeled example complaint: `bash online_predict.sh config/my_config.yaml predict predict_payload.json`. The response will have the different classes with values based on the confidence of the class. To predict for different features, change "values" in the .json file. The order of features in the json is the order of fields in the BigQuery Table used to train the model, minus the columns excluded by the `model.exclude_columns` config value.
1. You should undeploy your model when finished to avoid excessive charges.
Run: `bash online_predict.sh config/my_config.yaml undeploy`. You should also verify in the UI that the model is undeployed.

## Using All Data to Train a Model

This example intentionally splits the available data into training and prediction. Once you are comfortable with the model's performance, you should train the model on all of your available data. You can do this by changing the config value `query_params.train_predict_split.test_threshold` to 0, which will put all data into the training split. Note that once you do this, the batch predict script won't run (since there's no data to use for prediction).
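For reference, that change amounts to a one-line edit in your config copy. The fragment below is only a sketch; it assumes the YAML nesting in `config/my_config.yaml` mirrors the dotted key path used throughout this README, and it omits the surrounding keys.

```yaml
# config/my_config.yaml (fragment; assumes nesting follows the dotted key path)
query_params:
  train_predict_split:
    test_threshold: 0   # 0 sends all rows to the training split; batch predict will then have no data
```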
Better Consumer Complaint and Support Request Handling With AI Contributors Dimos Christopoulos Google Shane Kok https www linkedin com in shane kok b1970a82 shanekok9 gmail com Andrew Leach Google Anastasiia Manokhina Google Karan Palsani https www linkedin com in karanpalsani karanpalsani utexas edu Michael Sherman Google Michael Sparkman https www linkedin com in michael sparkman michaelsparkman1996 gmail com Sahana Subramanian https www linkedin com in sahana subramanian sahana subramanian utexas edu Overview This example shows how to use ML models to predict a company s response to consumer complaints using the public CFPB Consumer Complaint Database https console cloud google com marketplace details cfpb complaint database filter solution type dataset id 5a1b3026 d189 4a35 8620 099f7b5a600b on BigQuery It provides an implementation of AutoML Tables https cloud google com automl tables for model training and batch prediction and has a flexible config driven BigQuery SQL pipeline for adapting to new data sources This specific example identifies the outcomes of customer complaints which could serve a customer support workflow that routes risky cases to specific support channels But this example can be adapted to other support use cases by changing the label of the machine learning model For example Routing support requests to specific teams Identifing support requests appropriate for templated vs manual responses Prioritization of support requests Identifying a specific product or products needing support Directory Structure scripts Python scripts for running the data and modeling pipeline queries SQL queries for data manipulation cleaning and transformation notebooks Jupyter notebooks for data exploration Not part of the pipeline codebase not reviewed not tested in the pipeline environment and dependent on 3rd party Python packages not required by the pipeline Provided for reference only config Project configuration and table ingestion schemas The configuration for the pipeline is all in pipeline yaml Solution Diagram The diagram represents what each of the scripts does including the structure of tables created at each step diagram solution diagram png Configuration Overview The configuration provided with the code is config pipeline yaml This configuration information is used by pipeline scripts and for substitution into SQL queries stored in the queries folder Basic configuration changes necessary when running the pipeline are discussed with the pipeline running instructions below We recommend making a separate copy of the configuration when you have to change configuration parameters All pipeline steps are run with the config file as a command line option and using separate copies makes tracking different pipeline runs more manageable The main sections of the configuration are file paths Absolute locations of files read by the pipeline These will have to be changed to fit your environment global Core configuration information used by multiple steps of the pipeline It contains the names of the BigQuery dataset and tables the ID of the Google Cloud Platform project AutoML Tables model data identification parameters etc query files Filenames of SQL queries used by the pipeline query params Parameters for substitution into individual SQL queries model Configuration information for the AutoML Tables Model Includes parameters on training optimizing the model identification of key columns in the training data e g the target training data columns to exclude from model building and type 
configuration for each feature used by the model Instructions for Running the Pipeline to Predict Company Responses to Consumer Complaints All instructions were tested on a Cloud AI Platform Notebook https cloud google com ai platform notebooks docs instance created through the UI https console cloud google com ai platform notebooks instances If you are running in another environment you ll have to setup the gcloud SDK https cloud google com sdk install install Python 3 and virtualenv and possibly manage other dependencies We have not tested these instructions in other environments All commands unless otherwise stated should be run from the directory containing this README Enable Required APIs in your Project These instructions have been tested in a fresh Google Cloud project without any organization constraints You should be able to run the code in an existing project but make sure the following APIs are enabled and make sure these products can communicate with one another if you re running in a VPC or have organization imposed firewall rule or product restrictions you may have some difficulty Required APIs to enable 1 Compute Engine API https console cloud google com apis api compute googleapis com 1 BigQuery API https console cloud google com apis api bigquery googleapis com 1 Cloud AutoML API https console cloud google com apis api automl googleapis com 1 Cloud Storage API https console cloud google com apis api storage component googleapis com Setup for a New Local Environment These steps should be followed before you run the pipeline for the first time from a new development environment As stated previously these instructions have been tested in a Google Cloud AI Platforms Notebook https console cloud google com ai platform notebooks instances 1 Run gcloud init choose to use a new account authenticate and set your project ID https cloud google com resource manager docs creating managing projects identifying projects as the project Choose a region in the US if prompted to set a default region 1 Clone the github project 1 Navigate to the directory containing this readme 1 Create a Python 3 virtual environment automl support in this example in your home directory Run python3 m virtualenv HOME env automl support Activate the environment Run source env automl support bin activate Install the required Python packages pip install r requirements txt You may get an error about apache beam and pyyaml version incompatibilities this will have no effect Required Configuration Changes Configuration is read from a file specified when running the pipeline from the command line We recommend working with different copies of the configuration for different experiments environments and other needs Note that if values in the configuration match existing tables resources etc in your project strange errors and possibly data loss may result The default values in config pipeline yaml provided with the code should be changed before running the pipeline 1 Make a copy of the configuration file cp config pipeline yaml config my config yaml 1 Edit config my config yaml and make the following changes then save file paths queries is the path to the queries subfolder Change this value to the absolute local path where the queries subfolder resides global destination project id is the project id of the project you want to run the pipeline in and where the AutoML models will live Change this to your project id 1 Also consider changing the following global destination dataset is the BigQuery dataset where data ingested 
## Required Configuration Changes

Configuration is read from a file specified when running the pipeline from the command line. We recommend working with different copies of the configuration for different experiments, environments, and other needs. Note that if values in the configuration match existing tables, resources, etc. in your project, strange errors and possibly data loss may result. The default values in `config/pipeline.yaml` provided with the code should be changed before running the pipeline.

1. Make a copy of the configuration file: `cp config/pipeline.yaml config/my_config.yaml`.
1. Edit `config/my_config.yaml` and make the following changes, then save:
   * `file_paths.queries` is the path to the queries subfolder. Change this value to the absolute local path where the queries subfolder resides.
   * `global.destination_project_id` is the project ID of the project you want to run the pipeline in, and where the AutoML models will live. Change this to your project ID.
1. Also consider changing the following:
   * `global.destination_dataset` is the BigQuery dataset where data ingested by the pipeline into your project is stored. Note the table names don't need to change, since they will be written to the new dataset. Make sure this dataset doesn't already exist in your project; if it does, the training pipeline will fail and you'll need to delete the dataset first.
   * `global.dataset_display_name` and `global.model_display_name` are the names of the AutoML Tables dataset and model created by the pipeline. Change these to new values if you wish (they can be the same). You should create a new config file and change these parameters for every full pipeline run. For failed pipeline runs, you'll want to delete the resources specified in these config values, since the pipeline will not delete existing resources automatically. Note that on subsequent pipeline runs, if you aren't rerunning ingestion you don't need to change `global.destination_dataset`, and if you aren't rerunning the model build you don't need to change `global.dataset_display_name` and `global.model_display_name`.
   * If you need to change the default paths (because you are running somewhere besides an AI Platform Notebook, because your repo is in a different path, or because your AutoML service account key is in a different location), change the values in `file_paths`.

## Running the Pipeline

These steps have only been tested for users with the [Owner IAM role](https://cloud.google.com/iam/docs/understanding-roles#primitive_role_definitions) in your project. These steps should work for the Editor role as well, but we have not tested it. All commands should be run from the project root (the folder with this README). This assumes your config file is in `config/my_config.yaml`.

1. Activate the Python environment if it is not already activated: run `source $HOME/env/automl_support/bin/activate` or similar (see "Setup for a New Local Environment" above).
1. Run the model pipeline:

   ```bash
   nohup bash run_pipeline.sh config/my_config.yaml ftp pipeline.out & disown
   ```

   This command will run the pipeline in the background, save logs to `pipeline.out`, and will not terminate if the terminal is closed. It will run all steps of the pipeline in sequence, or a subset of the steps as determined by the second positional arg, `MODE`. E.g., `fp` instead of `ftp` would create features and then generate predictions using the model specified in the config.

   Pipeline steps (`MODE` argument):

   * Create features (`f`): creates the dataset of features (config value `global.destination_dataset`) and feature tables.
   * Train (`t`): creates the training dataset in AutoML Tables (config value `global.dataset_display_name`) and trains the model (config value `global.model_display_name`). Note that in the AutoML Tables UI the dataset will appear as soon as it is created, but the model will not appear until it is completely trained.
   * Predict (`p`): makes predictions with the model and copies the unformatted results to a predictions table (config value `global.predictions_table`). AutoML generates its own dataset in BQ, which will contain errors if predictions for any rows fail. This spurious dataset (named `prediction_<model_name>_<timestamp>`) will be deleted if there are no errors.

   This command pipes its output to a log file, `pipeline.out`. To follow this log file, run `tail -n 5 -f pipeline.out` to monitor the command while it runs.

Some of the AutoML steps are long-running operations. If you're following the logged output, you'll see increasingly long sleeps between API calls; this is expected behavior. AutoML training can take hours depending on your config settings. With the default settings, you can expect around two hours to complete the pipeline and model training.

Note: If the pipeline is run and the destination dataset has already been created, the run will fail. Use the BQ UI, client, or command-line interface to delete the dataset, or select a new destination (`global.destination_dataset`) in the config. AutoML also does not enforce that display names are unique; if multiple datasets or models are created with the same name, the run will fail. Use the AutoML UI or client to delete them, or select new display names in the config (`global.dataset_display_name` and `global.model_display_name`).
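Before kicking off a run (or re-running one that failed), you can check from the command line whether the destination dataset already exists and remove it if necessary (a sketch using the `bq` tool from the Cloud SDK; replace the placeholders with your project ID and your `global.destination_dataset` value, and note that the delete is irreversible):

```bash
# List the datasets in the project and check whether the destination dataset is present.
bq ls --project_id=<YOUR_PROJECT_ID>

# Delete a leftover dataset (and every table in it) from a failed run.
bq rm -r -f <YOUR_PROJECT_ID>:<YOUR_DESTINATION_DATASET>
```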
## Common Configuration Changes

Change `model.train_budget_hours` to control how long the model trains for. The default is 1, but you should expect an extra hour of spin-up time on top of the training budget. Upping the budget may improve model performance.

## Online Predictions

The example pipeline makes batch predictions, but a common deployment pattern is to create an API endpoint that receives features and returns a prediction. Do the following steps to deploy a model for online prediction, make a prediction, and then undeploy the model. Do not leave your model deployed: deployed models can easily cost tens to hundreds of dollars a day.

All commands should be run from the project root (the folder with this README). This assumes your config file is in `config/my_config.yaml`.

1. Make sure you have activated the same virtual environment used for the model training pipeline.
1. Deploy the model: `bash online_predict.sh config/my_config.yaml deploy`. Take note of the `name` value in the response.
1. Deployment will take up to 15 minutes. To check the status of the deployment, run the following command, replacing `<operation-name>` with the `name` value from the previous step:

   ```bash
   curl -X GET \
     -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
     -H "Content-Type: application/json" \
     https://automl.googleapis.com/v1beta1/<operation-name>
   ```

   When the operation is complete, the response will have a `done` item with the value `true`.
1. Make a prediction using the provided sample `predict_payload.json`, containing features of an unlabeled example complaint: `bash online_predict.sh config/my_config.yaml predict predict_payload.json`. The response will have the different classes, with values based on the confidence of the class. To predict for different features, change values in the json file. The order of features in the json is the order of fields in the BigQuery table used to train the model, minus the columns excluded by the `model.exclude_columns` config value.
1. You should undeploy your model when finished to avoid excessive charges: run `bash online_predict.sh config/my_config.yaml undeploy`. You should also verify in the UI that the model is undeployed.

## Using All Data to Train a Model

This example intentionally splits the available data into training and prediction. Once you are comfortable with the model's performance, you should train the model on all your available data. You can do this by changing the config value `query_params.train_predict_split_test_threshold` to 0, which will put all data into the training split. Note that once you do this, the batch predict script won't run, since there's no data to use for prediction.
You can serve your TensorFlow models on Google Kubernetes Engine with [TensorFlow Serving](https://www.tensorflow.org/tfx/guide/serving). This example illustrates how to automate deployment of your trained models to GKE. In a production setup, it's also useful to load test your models to tune TensorFlow Serving configuration and your whole setup, as well as to make sure your service can handle the required throughput.

# Prerequisites

## Preparing a model

First of all, we need to train a model. You are welcome to experiment with your own model, or you might train an example based on this [tutorial](https://www.tensorflow.org/tutorials/structured_data/feature_columns).

```
cd tensorflow
python create_model.py
```

would create and export the example regression model (the `saved_model_regression` directory that the Dockerfile below expects).

## Creating GKE clusters for load testing and serving

Now we need to deploy our model. We're going to serve our model with TensorFlow Serving launched in a Docker container on a GKE cluster. Our _Dockerfile_ looks pretty simple:

```
FROM tensorflow/serving:latest
ADD batching_parameters.txt /benchmark/batching_parameters.txt
ADD models.config /benchmark/models.config
ADD saved_model_regression /models/regression
```

We only add the model(s) binaries and a few configuration files. In `models.config` we define one (or many) models to be launched:

```
model_config_list {
  config {
    name: 'regression'
    base_path: '/models/regression/'
    model_platform: "tensorflow"
  }
}
```

We also need to create a GKE cluster and deploy a _tensorflow-app_ service there, which exposes ports 8500 and 8501 (for gRPC and REST requests, respectively) under a load balancer.

```
python experiment.py
```

would create a _kubernetes.yaml_ file with default serving parameters.

For load testing we use the [locust](https://locust.io/) framework. We've implemented a _RegressionUser_ inheriting from _locust.HttpUser_ and configured Locust to work in a distributed mode. Now we need to create two GKE clusters: one for TensorFlow Serving and one for the Locust load generator. We're doing this to emulate cross-cluster network latency, as well as to be able to experiment with different hardware for TensorFlow. All our deployments are done with Cloud Build, and you can use a bash script to run the e2e infrastructure creation.

```
export TENSORFLOW_MACHINE_TYPE=e2-highcpu-8
export LOCUST_MACHINE_TYPE=e2-highcpu-32
export CLUSTER_ZONE=<GCP_ZONE>
export GCP_PROJECT=<YOUR_PROJECT>

./create-cluster.sh
```

## Running a load test

After the clusters have been created, you need to forward a port to localhost:

```
gcloud container clusters get-credentials ${LOCUST_CLUSTER_NAME} --zone ${CLUSTER_ZONE} --project=${GCP_PROJECT}
export LOCUST_CONTEXT="gke_${GCP_PROJECT}_${CLUSTER_ZONE}_loadtest-locust-${LOCUST_MACHINE_TYPE}"
kubectl config use-context ${LOCUST_CONTEXT}

kubectl port-forward svc/locust-master 8089:8089
```

Now you can access the Locust UI at _localhost:8089_ and initiate a load test of your model. We've observed the following results for the example model - 8ms @p50 and 11ms @p99 at 300 queries per second, and 13ms @p50 and 47ms @p99 at 3900 queries per second.

## Experimenting with additional serving parameters

Try using different hardware for TensorFlow Serving - e.g., recreate the GKE cluster using `n2-highcpu-8` machines. We've observed a significant improvement in both tail latency and the throughput we could handle (with the same number of nodes): 3ms @p50 and 5ms @p99 at 300 queries per second, and 15ms @p50 and 46ms @p90 at 15000 queries per second.
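For example, recreating the clusters with the same script but a different serving machine type might look like this (a sketch that simply reuses the environment variables from the cluster-creation step above):

```
export TENSORFLOW_MACHINE_TYPE=n2-highcpu-8
export LOCUST_MACHINE_TYPE=e2-highcpu-32
export CLUSTER_ZONE=<GCP_ZONE>
export GCP_PROJECT=<YOUR_PROJECT>

./create-cluster.sh
```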
Another thing to experiment with is different [batching](https://www.tensorflow.org/tfx/serving/serving_config#batching_configuration) parameters (you might look at the batching tuning [guide](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/batching/README.md#performance-tuning)), as well as other TensorFlow Serving parameters defined [here](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/model_servers/main.cc#L59). One possible configuration might be this one:

```
python experiment.py --enable_batching \
  --batching_parameters_file=/benchmark/batching_parameters.txt \
  --max_batch_size=8000 --batch_timeout_micros=4 --num_batch_threads=4 \
  --tensorflow_inter_op_parallelism=4 --tensorflow_intra_op_parallelism=4
```

In this case, your _kubernetes.yaml_ would have the following lines:

```
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tensorflow-app
  template:
    metadata:
      labels:
        app: tensorflow-app
    spec:
      containers:
      - name: tensorflow-app
        image: gcr.io/mogr-test-277422/tensorflow-app:latest
        env:
        - name: MODEL_NAME
          value: regression
        ports:
        - containerPort: 8500
        - containerPort: 8501
        args: ["--model_config_file=/benchmark/models.config",
               "--tensorflow_intra_op_parallelism=4",
               "--tensorflow_inter_op_parallelism=4",
               "--batching_parameters_file=/benchmark/batching_parameters.txt",
               "--enable_batching"]
```

And the _batching_parameters.txt_ would look like this:

```
max_batch_size { value: 8000 }
batch_timeout_micros { value: 4 }
max_enqueued_batches { value: 100 }
num_batch_threads { value: 4 }
```

With this configuration, we would achieve much better performance (both higher throughput and lower latency).
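Besides load testing through Locust, you can also sanity-check the deployed model directly against TensorFlow Serving's REST API on port 8501 (a minimal sketch; the load-balancer IP is a placeholder, and the shape of the `instances` payload depends on your model's serving signature):

```
# External IP of the tensorflow-app load balancer (placeholder).
EXTERNAL_IP=<TENSORFLOW_APP_LB_IP>

# Check that the 'regression' model (named in models.config) is loaded and available.
curl http://${EXTERNAL_IP}:8501/v1/models/regression

# Send a prediction request; the feature values must match the model's input signature.
curl -X POST http://${EXTERNAL_IP}:8501/v1/models/regression:predict \
  -H "Content-Type: application/json" \
  -d '{"instances": [<FEATURE_JSON>]}'
```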
## Dataflow pipeline to change the key of a Bigtable

For optimal performance of our requests to a Bigtable instance, [it is crucial to choose a good key for our records](https://cloud.google.com/bigtable/docs/schema-design), so that both reads and writes are evenly distributed across the key space. Although we have tools such as [Key Visualizer](https://cloud.google.com/bigtable/docs/keyvis-overview) to diagnose how our key is performing, it is not obvious how to change or update a key for all the records in a table.

This example contains a Dataflow pipeline to read data from a table in a Bigtable instance, and to write the same records to another table with the same schema, but using a different key. The pipeline does not assume any specific schema and can work with any table.

### Build the pipeline

The build process is managed using Maven. To compile, just run `mvn compile`.

To create a package for the pipeline, run `mvn package`.

### Setup `cbt`

The helper scripts in this repo use `cbt` (to create sample tables, and to create an output table with the same schema as the input table). If you have not configured `cbt` yet, please see the following link:

* https://cloud.google.com/bigtable/docs/cbt-overview

In summary, you need to include your GCP project and Bigtable instance in the `~/.cbtrc` file, as shown below:

```
project = YOUR_GCP_PROJECT_NAME
instance = YOUR_BIGTABLE_INSTANCE_NAME
```

### Create a sandbox table for testing purposes

If you already have data in Bigtable, you can ignore this section.

If you don't have any table available to try this pipeline out, you can create one using a script in this repo:

```bash
$ ./scripts/create_sandbox_table.sh MY_TABLE
```

That will create a table of name `MY_TABLE` with three records. To check that the data has actually been written to the table, you can use `cbt count` and `cbt read`:

```bash
$ cbt count taxi_rides
2020/01/29 17:44:31 -creds flag unset, will use gcloud credential
3
$ cbt read taxi_rides
2020/01/29 17:43:34 -creds flag unset, will use gcloud credential
----------------------------------------
33cb2a42-d9f5-4b64-9e8a-b5aa1d6e142f#132
  id:point_idx    @ 2020/01/29-16:22:32.551000
    "132"
  id:ride_id      @ 2020/01/29-16:22:31.407000
    "33cb2a42-d9f5-4b64-9e8a-b5aa1d6e142f"
  loc:latitude    @ 2020/01/29-16:22:33.711000
[...]
```

### Create the output table

The output table must exist before the pipeline is run, and it must have the same schema as the input table. That table will be **OVERWRITTEN** if it already contains data.

If you have `cbt` configured, you can use one of the scripts in this repo to create an empty table replicating the schema of another table:

```bash
$ ./scripts/copy_schema_to_new_table.sh MY_INPUT_TABLE MY_OUTPUT_TABLE
```

### Command line options for the pipeline

In addition to the [Dataflow command line options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-params), this pipeline has three additional required options:

* ``--bigtableInstance``, the name of the Bigtable instance where all the tables are located
* ``--inputTable``, the name of an existing table with the input data
* ``--outputTable``, the name of an existing table. **BEWARE: it will be overwritten**.

Remember that Dataflow also requires at least the following command line options:

* ``--project``
* ``--tempLocation``
* ``--runner``
* ``--region``

### Run the pipeline as a standalone Java app

You don't necessarily need Maven to run a Dataflow pipeline; you can also use the package generated by Maven as a standalone Java application.
Make sure that you have generated a package with `mvn package` and that the _bundled_ JAR file is available locally on the machine triggering the pipeline. Then run:

```bash
# Change if your location is different
JAR_LOC=target/bigtable-change-key-bundled-0.1-SNAPSHOT.jar

PROJECT_ID=<YOUR_PROJECT>
REGION=<YOUR_REGION_TO_RUN_DATAFLOW>
TMP_GS_LOCATION=<GCS_URI_FOR_TEMP_FILES>
BIGTABLE_INSTANCE=<YOUR_INSTANCE>
INPUT_TABLE=<YOUR_INPUT_TABLE>
OUTPUT_TABLE=<YOUR_OUTPUT_TABLE>
RUNNER=DataflowRunner

java -cp ${JAR_LOC} com.google.cloud.pso.pipeline.BigtableChangeKey \
  --project=${PROJECT_ID} \
  --gcpTempLocation=${TMP_GS_LOCATION} \
  --region=${REGION} \
  --runner=${RUNNER} \
  --bigtableInstance=${BIGTABLE_INSTANCE} \
  --inputTable=${INPUT_TABLE} \
  --outputTable=${OUTPUT_TABLE}
```

Then go to the Dataflow UI to check that the job is running properly. You should see a job with a simple graph, similar to this one:

![Pipeline graph](./imgs/pipeline_graph.png)

You can now check that the destination table has the same records as the input table, and that the key has changed. You can use `cbt count` and `cbt read` for that purpose, by comparing with the results of the original table.

### Change the update key function

The pipeline [includes a key transform function that simply reverses the key](./src/main/java/com/google/cloud/pso/pipeline/BigtableChangeKey.java#L30-L49). It is provided only as an example, to make it easier to write your own function.

```java
/**
 * Return a new key for a given key and record in the existing table.
 *
 * <p>The purpose of this method is to test different key strategies over the same data in
 * Bigtable.
 *
 * @param key The existing key in the table
 * @param record The full record, in case it is needed to choose the new key
 * @return The new key for the same record
 */
public static String transformKey(String key, Row record) {
  /**
   * TODO: Change the existing key here, by a new key
   *
   * <p>Here we just reverse the key, as a demo. Ideally, you should test different strategies,
   * test the performance obtained with each key transform strategy, and then decide how you need
   * to change the keys.
   */
  return StringUtils.reverse(key);
}
```

The function has two input parameters:

* `key`: the current key of the record
* `record`: the full record, with all the column families, columns, values/cells, versions of cells, etc.

The `record` is of type [com.google.bigtable.v2.Row](http://googleapis.github.io/googleapis/java/all/latest/apidocs/com/google/bigtable/v2/Row.html). You can traverse the record to recover all the elements. See [an example of how to traverse a Row](src/main/java/com/google/cloud/pso/transforms/UpdateKey.java#L57-L78).

The new key must be returned as a `String`. In order to leverage this pipeline, you must create the new key using the previous key and the data contained in the record. This pipeline assumes that you don't need any other external piece of information.

The function that you create is [passed to the `UpdateKey` transform in these lines](src/main/java/com/google/cloud/pso/pipeline/BigtableChangeKey.java#L81-L83). You can pass any function (named functions, lambdas, etc.), and the `UpdateKey` transform will make sure that the function can be serialized. You need to make sure that you are passing an idempotent function that is _thread-compatible_ and serializable, or you may experience issues when the function is called from the pipeline workers.
For more details about the requirements of your code, see:

* https://beam.apache.org/documentation/programming-guide/#requirements-for-writing-user-code-for-beam-transforms

Any pure function with no side effects and/or external dependencies (other than those passed through the input arguments, namely the key and the record) will fulfill those requirements.

## License

Copyright 2020 Google LLC

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

* http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
# Basic Python Continuous Integration (CI) With Cloud Source Repositories (CSR) and Google Cloud Build

## Overview

This repo contains example code and instructions that show you how to use CSR and Google Cloud Build to automatically run unit tests and pylint upon code check-in. By following this tutorial you will learn how to build basic Python continuous integration pipelines on Google Cloud Platform (GCP).

By following along with this example you will learn how to:

1. Create a new project in CSR and clone it to your machine.
1. Create your code and unit tests.
1. Create a custom container image called a cloud builder that Google Cloud Build will use to run your Python tests.
1. Create a cloud build trigger that will tell Google Cloud Build when to run.
1. Tie it all together by creating a cloudbuild.yaml file that tells Google Cloud Build how to execute your tests when the cloud build trigger fires, using the custom cloud builder you created.

In order to follow along with this tutorial you'll need to:

* Create or have access to an existing [GCP project](https://cloud.google.com/resource-manager/docs/creating-managing-projects).
* Install and configure the [Google Cloud SDK](https://cloud.google.com/sdk).

## 1. Create a new project in Cloud Source Repositories

You'll start by creating a new repository in CSR, copying the files in this example into the CSR repository, and committing them to your new repository.

1. Go to [https://source.cloud.google.com/](https://source.cloud.google.com/).
1. Click 'Add repository'.
1. Choose 'Create new repository'.
1. Specify a name and your project name.
1. Follow the instructions to 'git clone' the empty repo to your workstation.
1. Copy the files from this example into the new repo.
1. Add the files to the new repo with the command:
    ```bash
    git add .
    ```
1. Commit and push these files in the new repo by running the commands:
    ```bash
    git commit -m 'Creating a repository for Python Cloud Build CI example.'
    git push origin master
    ```

You can alternatively do the same using the Google Cloud SDK:

1. Choose a name for your source repo and configure an environment variable for that name with the command:
    ```bash
    export REPO_NAME=<YOUR_REPO_NAME>
    ```
1. Create the repository by running the command:
    ```bash
    gcloud source repos create $REPO_NAME
    ```
1. Clone the new repository to your local machine by running the command:
    ```bash
    gcloud source repos clone $REPO_NAME
    ```
1. Copy the files from this example into the new repo.
1. Add the files to the new repo with the command:
    ```bash
    git add .
    ```
1. Commit and push these files in the new repo by running the commands:
    ```bash
    git commit -m 'Creating repository for Python Cloud Build CI example.'
    git push origin master
    ```

## 2. Create your code and unit tests

Creating unit tests is beyond the scope of this README, but if you review the tests in tests/ you'll quickly get the idea. Pytest is being used as the testing suite for this project.
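If you are working from a fresh environment, you may first need to install the test dependencies before running pytest locally (a sketch; it assumes the project's requirements.txt lists pytest and the coverage plug-in used below):

```bash
python3 -m pip install -r requirements.txt
```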
Before proceeding make sure you can run your tests from the command line by running this command from the root of the project:

```bash
python3 -m pytest
```

Or if you want to be fancy and use the [coverage](https://pytest-cov.readthedocs.io/en/latest/readme.html) plug-in:

```bash
python3 -m pytest --cov=my_module tests/
```

If everything goes well you should expect to see output like this, showing successful tests:

```bash
$ python3 -m pytest --cov=my_module tests/
========================== test session starts ==========================
platform darwin -- Python 3.7.3, pytest-4.6.2, py-1.8.0, pluggy-0.12.0
rootdir: /Users/mikebernico/dev/basic-cicd-cloudbuild
plugins: cov-2.7.1
collected 6 items

tests/test_my_module.py ......                                     [100%]

---------- coverage: platform darwin, python 3.7.3-final-0 -----------
Name                     Stmts   Miss  Cover
--------------------------------------------
my_module/__init__.py        1      0   100%
my_module/my_module.py       4      0   100%
--------------------------------------------
TOTAL                        5      0   100%
```

Now that your tests are working locally, you can configure Google Cloud Build to run them every time you push new code.

## 3. Building a Python Cloud Build Container

To run Python-based tests in Google Cloud Build you need a Python container image used as a [cloud builder](https://cloud.google.com/cloud-build/docs/cloud-builders). A cloud builder is just a container image with the software you need for your build steps in it. Google does not distribute a prebuilt Python cloud builder, so a custom cloud builder is required.

The code needed to build a custom Python 3 cloud build container is located in '/python-cloud-builder' under the root of this project. Inside that folder you will find a Dockerfile and a very minimal cloudbuild.yaml.

The Dockerfile specifies the software that will be inside the container image.

```Docker
# Start from the public Python 3 image on DockerHub.
FROM python:3
# Install virtualenv so that a virtual environment can be used to carry build steps forward.
RUN pip install virtualenv
```

The Dockerfile is enough to build the image, however you will also need a cloudbuild.yaml to tell Cloud Build how to build and upload the resulting image to GCP.

```yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'build', '-t', 'gcr.io/$PROJECT_ID/python-cloudbuild', '.' ]
  # This step tells Google Cloud Build to use docker to build the Dockerfile.
images:
- 'gcr.io/$PROJECT_ID/python-cloudbuild'
# The resulting image is then named python-cloudbuild and uploaded to your project's container registry.
```

You don't need to change either of these files to follow this tutorial. They are included here to help you understand the process of building custom build images.

Once you're ready, run these commands from the root of the project to build and upload your custom Python cloud builder:

```bash
cd python-cloud-builder
gcloud builds submit --config=cloudbuild.yaml .
```

This creates the custom Python cloud builder and uploads it to your GCP project's [container registry](https://cloud.google.com/container-registry/), which is a private location to store container images. Your new cloud builder will be called `gcr.io/$PROJECT_ID/python-cloudbuild`, where $PROJECT_ID is the name of your GCP project.
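You can confirm that the builder image was pushed to your registry before moving on (a quick check; it assumes the Cloud SDK is authenticated against the same project):

```bash
gcloud container images list-tags gcr.io/<YOUR_PROJECT_ID>/python-cloudbuild
```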
## 4. Create a Google Cloud Build Trigger

Now that you've created a Python cloud builder to run your tests, you should [create a trigger](https://cloud.google.com/cloud-build/docs/running-builds/automate-builds) that tells Cloud Build when to run those tests. To do that, follow these steps:

1. On the GCP console navigate to 'Cloud Build' > 'Triggers'.
1. Add a trigger by clicking '+ CREATE TRIGGER'.
1. Choose the cloud source repository you created in step 1 from the 'Repository' drop-down.
1. Assuming you want the trigger to fire on any branch, accept the default trigger type and regex.
1. Choose the 'Cloud Build configuration file (yaml or json)' radio button under 'Build configuration'.
1. Click 'Create trigger'.

## 5. Create a cloudbuild.yaml file that executes your tests and runs pylint

At this point you've run some unit tests on the command line, created a new repository in CSR, created a Python cloud builder, and used Google Cloud Build to create a build trigger that runs whenever you push new code. In this last step you'll tie this all together and tell Google Cloud Build how to automatically run tests and run pylint to examine your code whenever a code change is pushed into CSR.

In order to tell Google Cloud Build how to run your tests, you'll need to create a file called cloudbuild.yaml in the root of your project. Inside that file you'll add the steps needed to execute your unit tests. For each step you will reference the cloud builder that was created in step 3, by its location in the Google Container Registry.

*Note: Each step specified in the cloudbuild.yaml is a separate, ephemeral run of a docker image, however [the /workspace/ directory is preserved between runs](https://cloud.google.com/cloud-build/docs/build-config#dir). One way to carry python packages forward is to use a virtualenv housed in /workspace/.*

```yaml
steps:
- name: 'gcr.io/$PROJECT_ID/python-cloudbuild'  # Cloud Build automatically substitutes $PROJECT_ID for your Project ID.
  entrypoint: '/bin/bash'
  args: ['-c', 'virtualenv /workspace/venv']
  # Creates a Python virtualenv stored in /workspace/venv that will persist across container runs.
- name: 'gcr.io/$PROJECT_ID/python-cloudbuild'
  entrypoint: 'venv/bin/pip'
  args: ['install', '-V', '-r', 'requirements.txt']
  # Installs any dependencies listed in the project's requirements.txt.
- name: 'gcr.io/$PROJECT_ID/python-cloudbuild'
  entrypoint: 'venv/bin/python'
  args: ['-m', 'pytest', '-v']
  # Runs pytest from the virtual environment (with all requirements)
  # using the verbose flag so you can see each individual test.
- name: 'gcr.io/$PROJECT_ID/python-cloudbuild'
  entrypoint: 'venv/bin/pylint'
  args: ['my_module/']
  # Runs pylint against the module my_module contained one folder under the project root.
```

## Wrap Up

That's all there is to it! From here you can inspect your builds in Google Cloud Build's history. You can also build in third-party integrations via PubSub. You can find more documentation on how to use third-party integrations [here](https://cloud.google.com/cloud-build/docs/configure-third-party-notifications).

If you wanted to automatically deploy this code you could add additional steps that enable continuous delivery. Prebuilt cloud builders exist for gcloud, kubectl, etc. The full list can be found at [https://github.com/GoogleCloudPlatform/cloud-builders](https://github.com/GoogleCloudPlatform/cloud-builders).
Google also maintains a community repo for other cloud builders contributed by the public [here](https://github.com/GoogleCloudPlatform/cloud-builders-community). ## License Copyright 2019 Google LLC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
# CryptoRealTime

## A Google Cloud Dataflow/Cloud Bigtable Websockets example

The last year has been like a roller coaster for the cryptocurrency market. At the end of 2017, the value of bitcoin (BTC) almost reached $20,000 USD, only to fall below $4,000 USD a few months later.

What if there is a pattern in the high volatility of the cryptocurrencies market? If so, can we learn from it and get an edge on future trends? Is there a way to observe all exchanges in real time and visualize it on a single chart?

In this tutorial we will graph the trades, volume and time delta from trade execution until it reaches our system (an indicator of how close to real time we can get the data).

![realtime multi exchange BTC/USD observer](crypto.gif)

[Consider reading the Medium article](https://medium.com/@igalic/bigtable-beam-dataflow-cryptocurrencies-gcp-terraform-java-maven-4e7873811e86)

[Terraform - get this up and running in less than 5 minutes](https://github.com/galic1987/professional-services/blob/master/examples/cryptorealtime/TERRAFORM-README.md)

## Architecture

![Cryptorealtime Cloud Architecture overview](https://i.ibb.co/dMc9bMz/Screen-Shot-2019-02-11-at-4-56-29-PM.png)

## Frontend

![Cryptorealtime Cloud Frontend overview](https://i.ibb.co/2S28KYq/Screen-Shot-2019-02-12-at-2-53-41-PM.png)

## Costs

This tutorial uses billable components of GCP, including:

- Cloud Dataflow
- Compute Engine
- Cloud Storage
- Cloud Bigtable

We recommend cleaning up the project after finishing this tutorial to avoid ongoing costs. Use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage.

## Project setup

### Install the Google Cloud Platform SDK on a new VM

* Log into the console, and activate a cloud console session
* Create a new VM

```console
gcloud beta compute instances create crypto-driver \
 --zone=us-central1-a \
 --machine-type=n1-standard-1 \
 --service-account=$(gcloud iam service-accounts list --format='value(email)' --filter="compute") \
 --scopes=https://www.googleapis.com/auth/cloud-platform \
 --image=debian-9-stretch-v20181210 \
 --image-project=debian-cloud \
 --boot-disk-size=20GB \
 --boot-disk-device-name=crypto-driver
```

* SSH into that VM

```console
gcloud compute ssh --zone=us-central1-a crypto-driver
```

* Install the necessary tools (Java, Git, Maven, pip, Python, and the Cloud Bigtable command-line tool `cbt`) using the following commands:

```console
sudo -s
apt-get update -y
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
sudo python3 get-pip.py
sudo pip3 install virtualenv
virtualenv -p python3 venv
source venv/bin/activate
sudo apt -y --allow-downgrades install openjdk-8-jdk git maven google-cloud-sdk=271.0.0-0 google-cloud-sdk-cbt=271.0.0-0
```

### Create a Google Cloud Bigtable instance

```console
export PROJECT=$(gcloud info --format='value(config.project)')
export ZONE=$(curl "http://metadata.google.internal/computeMetadata/v1/instance/zone" -H "Metadata-Flavor: Google"|cut -d/ -f4)

gcloud services enable bigtable.googleapis.com \
  bigtableadmin.googleapis.com \
  dataflow.googleapis.com

gcloud bigtable instances create cryptorealtime \
 --cluster=cryptorealtime-c1 \
 --cluster-zone=${ZONE} \
 --display-name=cryptorealtime \
 --cluster-storage-type=HDD \
 --instance-type=DEVELOPMENT

cbt -instance=cryptorealtime createtable cryptorealtime families=market
```

### Create a Bucket

```console
gsutil mb -p ${PROJECT} gs://realtimecrypto-${PROJECT}
```
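At this point you can sanity-check the resources created so far, the Bigtable table and the bucket, before opening the firewall (a quick sketch using the tools installed above):

```console
# Confirm the table and its 'market' column family exist.
cbt -instance=cryptorealtime ls
cbt -instance=cryptorealtime ls cryptorealtime

# Confirm the bucket exists.
gsutil ls -p ${PROJECT}
```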
```console
gcloud compute firewall-rules create crypto-dashboard --action=ALLOW --rules=tcp:5000 --source-ranges=0.0.0.0/0 --target-tags=crypto-console --description="Open port 5000 for crypto visualization tutorial"
gcloud compute instances add-tags crypto-driver --tags="crypto-console" --zone=${ZONE}
```

### Clone the repo

```console
git clone https://github.com/GoogleCloudPlatform/professional-services
```

### Build the pipeline

```console
cd professional-services/examples/cryptorealtime
mvn clean install
```

### Start the DataFlow pipeline

```console
./run.sh ${PROJECT} \
    cryptorealtime gs://realtimecrypto-${PROJECT}/temp \
    cryptorealtime market
```

1. View the status of your Dataflow job in the Cloud Dataflow console (a gcloud alternative is sketched at the end of this README).
1. After a few minutes, from the shell, run:

```console
cbt -instance=<bigtable-instance-name> read <bigtable-table-name>
```

This should return many rows of crypto trades data that the frontend project will read for its dashboard.

### Start the Webserver and Visualization

```console
cd frontend/
pip install -r requirements.txt
python app.py ${PROJECT} cryptorealtime cryptorealtime market
```

Find your external IP in the [Compute console instance list](https://console.cloud.google.com/compute/instances) and open it in your browser with port 5000 at the end, e.g. http://external-ip:5000/stream

You should be able to see the visualization of the aggregated BTC/USD pair on several exchanges (without the predictor part).

## Cleanup

* To save on cost, we can clean up the pipeline by running the following command:

```console
gcloud dataflow jobs cancel \
    $(gcloud dataflow jobs list \
    --format='value(id)' \
    --filter="name:runthepipeline*")
```

* Empty and delete the bucket:

```console
gsutil -m rm -r gs://realtimecrypto-${PROJECT}/*
gsutil rb gs://realtimecrypto-${PROJECT}
```

* Delete the Cloud Bigtable instance:

```console
gcloud bigtable instances delete cryptorealtime
```

* Exit the VM and delete it:

```console
gcloud compute instances delete crypto-driver --delete-disks=all
```

## External libraries used to connect to exchanges

https://github.com/bitrich-info/xchange-stream
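For the pipeline verification step above, if you prefer the command line over the Cloud Dataflow console, a possible equivalent check is sketched below. It assumes the job name produced by `run.sh` starts with `runthepipeline`, matching the filter already used in the cleanup step.

```console
gcloud dataflow jobs list \
    --status=active \
    --filter="name:runthepipeline*"
```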
# Table Access Pattern Analysis

This module provides a deep-dive analysis of a BigQuery environment in Google Cloud Platform, based on audit logs - data access data, which can be used to optimise BigQuery usage and improve the time, space and cost of BigQuery.

## Pipeline Optimisation

#### <b>Definitions</b>

The word 'pipeline' refers to a collection of all job instances of a query in the transformation process in a data warehouse, in this case BigQuery. Each 'pipeline' involves source table(s) and a destination table. For example, a query:

```
SELECT purchaseId, shop
FROM project.dataset.flash_sale_purchases
WHERE isReturn = TRUE
```

with its destination table set to the return_purchases table. The source table of this query is flash_sale_purchases and its destination table is return_purchases.

![](assets/pipeline-definition.png)

In the illustration above, one of the pipelines involves T1 and T2 as its source tables and T5 as its destination table.

Given enough historical data from the audit logs, you can group queries which have the same source table(s) and destination table combination (each group becomes a single pipeline), and see the run history of each of the pipelines. The same source table(s) - destination table combination will almost always come from the same query; even if the queries differ, their semantics should be similar, so this assumption is still valid.

After grouping into different pipelines, according to the source table(s) - destination table combination, you might be able to see a pattern in their execution history. You might see that a pipeline is executed hourly, or daily, or even monthly, and when it was last executed.

<i>Note that each of the source table(s) can be a different table type: it can be a view, a materialized view, or a normal table. We only consider normal tables, because the notion of 'analysing the update pattern' does not really apply to views and materialized views. A view does not get updated, and a materialized view gets updated automatically.</i>

We can categorise every pipeline into a pipeline type and a scheduling pattern. There are 3 different pipeline types, namely:

<ul>
<li>Live pipelines: Pipelines that run regularly on an obvious schedule, up to the present
<li>Dead pipelines: Pipelines that used to run regularly on an obvious schedule, but stopped some time ago
<li>Ad hoc pipelines: Pipelines that do not have a regular scheduling pattern detected, or not enough repetition to conclude that it is a scheduled live pipeline
</ul>

The scheduling pattern can be identified as hourly, daily, weekly, or non-deterministic (no obvious pattern).

#### <b>Purpose</b>

This tool helps identify tables with a high difference between the write and read frequency across data warehouse queries. A high discrepancy between the write and read frequency of a table might be a good starting point for identifying optimisation points.

Using this tool we can visualise the pipelines that are involved with a table of interest. We can then further analyse the pipeline type, its scheduling pattern and the jobs in each pipeline, and pinpoint problems or optimisations from there.

![](assets/pipeline-example.gif)

This GIF shows the graph that will be visualised when running the tool. The graph is an HTML page that is rendered as an iFrame in the Jupyter Notebook. You can zoom in, zoom out, and select (or deselect) a node to see more details about it. Each node represents a table, and each edge represents a pipeline from one table to another.
The weight of the edges indicates the frequency of jobs of that pipeline compared to the rest of the pipelines in the current graph.

#### <b>Analysing the Result</b>

As can be seen from the GIF, the tool will visualise all the pipelines associated with a table. To be specific, this includes all query jobs that have this table of interest as their source or destination table. As mentioned above, for every query job there are source table(s) and a single destination table.

The tables that are involved in the pipelines associated with a table (all tables that are included when the pipeline graph of a table of interest is plotted) are determined using the logic below. For every query job that has the table of interest as one of its source tables or as its destination table:

* For every source table of every query job that has the table of interest as one of its source table(s), recursively find query jobs that have this source table as their destination table, and get their source table(s).
* For every destination table of every query job that has the table of interest as its destination table, recursively find query jobs that have this destination table as a source table, and get their destination table.

As seen from the GIF, for every table that is involved in the pipeline of the table of interest, you can select it and see the details of the job schedule of every query involving this particular table. It will list all ad-hoc, live or dead pipelines that have this table as their source or destination table. For example, the pipeline information on the right side of the graph might look like this:

![](assets/pipeline-information-example.png)

This means that the table `data-analytics-pocs.public.bigquery_audit_log` is a destination table in an ad-hoc pipeline, where `project4.finance.cloudaudit_googleapis_com_data_access_*` are the source table(s). The jobs of this pipeline have a non-deterministic schedule, and its pipeline ID is 208250. The pipeline ID is useful if you want to further analyse this pipeline by querying the intermediate tables created. See [intermediate tables](#intermediate-tables-creation); an example query against these tables is sketched at the end of this README.

Given these insights, you can further deep dive into the points that are particularly interesting for you. For example, you might identify imbalanced queries: the table `flash_sale_purchases` in your data warehouse might be updated hourly but only queried daily. You might also identify queries that are already 'dead' and no longer scheduled, according to the last execution time, and check whether this is intended, or whether something inside the query caused an error.

## Architecture

This tool is built on top of BigQuery and Python modules. The data source of the tool is the audit log - data access data located in BigQuery. The module is responsible for the creation of intermediate tables (from the audit logs - data access source table), and for the execution of all relevant queries towards those intermediate tables that will be used for analysis purposes. The analysis can be done through a Jupyter notebook, which can be run locally (if installed) or in AI Platform Notebooks.
This guide specifically covers running the tool on AI Platform Notebooks.

![](assets/architecture.png)

### Directories and Files

```
data-dumpling-data-assessment/
β”œβ”€β”€ table-access-pattern-analysis/
β”‚   β”œβ”€β”€ assets/
β”‚   β”œβ”€β”€ bq_routines/
β”‚   β”œβ”€β”€ pipeline_graph/
β”‚   β”œβ”€β”€ src/
β”‚   β”œβ”€β”€ templates/
β”‚   β”œβ”€β”€ README.md
β”‚   β”œβ”€β”€ pipeline.ipynb
β”‚   β”œβ”€β”€ pipeline-output_only.ipynb
β”‚   β”œβ”€β”€ requirements.txt
β”‚   └── var.env
```

There are several subdirectories and files under the `table-access-pattern-analysis` subdirectory.

<ul>
<li> <b>assets/</b>

This directory contains images and other assets that are used in README.md.

<li> <b>bq_routines/</b>

This directory contains all the [JS UDF](https://cloud.google.com/bigquery/docs/reference/standard-sql/user-defined-functions#javascript-udf-structure) functions that will be created in BigQuery upon usage of the tool. These files are not to be run independently in a JS environment; their contents are loaded by the Python package, `src/`, to be constructed as function creation queries to BigQuery. For more information about each of the functions, look at this [section](#routines-creation).

<li> <b>pipeline_graph/</b>

This directory contains the HTML file, which is a webpage that is used to display the pipeline visualisation of the pipeline optimisation module.

<li> <b>src/</b>

This directory is the source Python package of the module; it drives the logic for table and routine creation, as well as the queries towards BigQuery tables.

<li> <b>templates/</b>

This directory consists of a template HTML file that will be filled using the Jinja2 templating system, through the Python code.

<li> <b>README.md</b>

This is the README file which explains all the details of this directory.

<li> <b>pipeline.ipynb</b>

This Notebook is used for the pipeline optimisation.

<li> <b>pipeline-output_only.ipynb</b>

This Notebook is used for demonstration purposes of the pipeline optimisation only; it shows the expected output and result of running the notebook.

<li> <b>requirements.txt</b>

This file consists of all the dependencies. You don't need to install them manually because installation is part of the Jupyter Notebook commands.

<li> <b>var.env</b>

This is the file in which environment variables are to be defined and loaded by the different Jupyter Notebooks. For every 'analysis workflow', you should redefine some of the variables. For details, look at this [section](#environment-variables).
</ul>

## Prerequisites

* Your account must have access to read the audit logs - data access table that will be used as a source table for the analysis. For more details regarding the different kinds of audit logs, visit this [page](https://cloud.google.com/logging/docs/audit#data-access)
* The audit logs - data access table that will be used as a source table for the analysis should contain BigQuery logs version 1. For more details regarding audit logs versions, visit this [page](https://cloud.google.com/bigquery/docs/reference/auditlogs)
* Your account must have access to write to the destination dataset.
* The source and destination dataset must be in the same location.

## Set Up

This set up is for running the JupyterLab Notebook in AI Platform Notebooks; you can also choose to run the Jupyter Notebook locally.

1. Go to a GCP project.
2. Navigate to <b>AI Platform -> Notebooks</b>. <b>New Instance -> Choose Python3 Option -> Name the instance</b>
3. Clone this repository.
4. Go to the `table-access-pattern-analysis` directory of the project.
5. Set the environment variables inside `var.env`.
6. Run the analysis, as described [below](#analysis).

### Environment Variables

The environment variables that you need to set include:

* INPUT_PROJECT_ID
* INPUT_DATASET_ID
* INPUT_AUDIT_LOGS_TABLE_ID
* IS_AUDIT_LOGS_INPUT_TABLE_PARTITIONED
* OUTPUT_PROJECT_ID
* OUTPUT_DATASET_ID
* OUTPUT_TABLE_SUFFIX
* LOCATION
* IS_INTERACTIVE_TABLES_MODE

The details of each of the environment variables are as follows:

<ul>
<li><b>INPUT_PROJECT_ID, INPUT_DATASET_ID, INPUT_AUDIT_LOGS_TABLE_ID</b>
  <ul>
  <li> Definition
    * These 3 environment variables should point to the audit logs - data access table that will be the source table of the analysis. The complete path to the audit logs table source will be `INPUT_PROJECT_ID.INPUT_DATASET_ID.INPUT_AUDIT_LOGS_TABLE_ID`. If you want to analyse a table with a wildcard, include the wildcard in the INPUT_AUDIT_LOGS_TABLE_ID variable as well.
  <li> Example values
    * INPUT_PROJECT_ID = 'project-a'
    * INPUT_DATASET_ID = 'dataset-b'
    * INPUT_AUDIT_LOGS_TABLE_ID = 'cloudaudit_googleapis_com_data_access_*'
  </ul>
<li><b>OUTPUT_PROJECT_ID, OUTPUT_DATASET_ID</b>
  <ul>
  <li> Definition
    * These 2 environment variables should point to the dataset that will contain all the tables and routines that are going to be created during the analysis.
  <li> Example values
    * OUTPUT_PROJECT_ID = 'project-c'
    * OUTPUT_DATASET_ID = 'dataset-d'
  </ul>
<li><b>OUTPUT_TABLE_SUFFIX</b>
  <ul>
  <li> Definition
    * The 'OUTPUT_TABLE_SUFFIX' variable is used to denote an 'analysis environment' that you intend to build. All tables that are produced by this run will have this variable as their suffix, so the run will not replace any existing tables that you have created for other analyses.
    * If this variable is not set, the analysis cannot be run, as you might unintentionally forget to change the suffix and replace an existing set of tables with the same suffix.
  <li> Example value
    * OUTPUT_TABLE_SUFFIX = 'first-analysis'
  </ul>
<li><b>IS_AUDIT_LOGS_INPUT_TABLE_PARTITIONED</b>
  <ul>
  <li> Definition
    * The 'IS_AUDIT_LOGS_INPUT_TABLE_PARTITIONED' variable is a boolean value which denotes whether the input audit log table is a partitioned table.
  <li> Value
    * Its value should be either "TRUE" or "FALSE", with the exact casing.
  </ul>
<li><b>LOCATION</b>
  <ul>
  <li> Definition
    * The 'LOCATION' variable is used to specify the region in which the input dataset and output dataset are located; the most commonly used location is 'US'.
  <li> Example value
    * LOCATION=US
  </ul>
<li><b>IS_INTERACTIVE_TABLES_MODE</b>
  <ul>
  <li> Definition
    * Boolean for whether you want the tables to be interactive; it is recommended to set this to "TRUE".
    * If you want the tables output to be interactive (you can filter, sort, and search), you should set this value to "TRUE".
    * If you want the tables output to not be interactive, you can set this value to "FALSE".
  <li> Value
    * Its value should be either "TRUE" or "FALSE", with the exact casing.
  </ul>
</ul>

### Caveats

After resetting any environment variables, you need to restart the kernel, because otherwise the new values will not be loaded by Jupyter. To restart, go to the 'Kernel' menu and choose 'Restart'.

## Analysis

1. Open a notebook to run an analysis.
2. You can choose the interactivity mode of the output:
   * If you want the tables output to be interactive, you can choose to run the Classic Jupyter Notebook. The output of the tables produced by this notebook will be interactive (you can filter, sort, and search), but it is an older version of the Jupyter notebook in AI Platform Notebooks. To do this:
     1. Navigate to the `Help` menu in Jupyter.
     2. Click on `Launch Classic Notebook`.
     3. Navigate the directory and open the Notebook that you want to do the analysis on.
   * If you prefer a newer version of the Jupyter notebook, you can choose not to run the Classic Jupyter Notebook. The output of the tables produced by this notebook is not interactive. You can double click on the intended Notebook in the list of files, without following the steps to launch a Classic Jupyter Notebook.
3. Run the cells from top to bottom of the notebook.
4. In the first cell, there is a datetime picker, which is used to filter the audit logs data source to the start and end date range specified. If you select `05-05-2021` as a start date and `06-05-2021` as an end date, the analysis result of the notebook run will be based on audit logs data from 5th May 2021 to 6th May 2021.
5. Run the pipeline optimisation analysis in the Jupyter Notebook:

<ul>
<li><b>Pipeline Optimisation, run `pipeline.ipynb`</b>

This tool helps identify pipeline optimisation points. First, the tool will list tables with a high difference between write and read frequency throughout the data warehouse queries. After identifying the table that you would like to analyse further, you can select the table in the next part of the notebook and display the result in an iFrame inside the notebook.
</ul>

## Appendix

### Intermediate Tables Creation

As mentioned in the [Architecture](#architecture) section, this module involves the creation of intermediate tables. These are important and relevant for users that are interested in analysing the insights generated from this tool even further. The schema and details of each intermediate table created are explained below.

<ul>
<li> job_info_with_tables_info<OUTPUT_TABLE_SUFFIX>

This table stores some of the details of the job history that are relevant to pipeline optimisation. Each job history entry corresponds to a single entry in the audit logs. The audit logs are filtered to the ones that are relevant for pipeline optimisation.

```
[
  {"name": "jobId", "type": "STRING", "mode": "NULLABLE", "description": "The job ID of this job run"},
  {"name": "timestamp", "type": "TIMESTAMP", "mode": "NULLABLE", "description": "Timestamp when the job was run"},
  {"name": "email", "type": "STRING", "mode": "NULLABLE", "description": "The account that ran this job"},
  {"name": "projectId", "type": "STRING", "mode": "NULLABLE", "description": "The project ID this job was run on"},
  {"name": "totalSlotMs", "type": "INTEGER", "mode": "NULLABLE", "description": "The slot ms consumed by this job"},
  {"name": "totalProcessedBytes", "type": "STRING", "mode": "NULLABLE", "description": "The total bytes processed when this job was run"},
  {"name": "destinationTable", "type": "STRING", "mode": "NULLABLE", "description": "The destination table of this job, in a concatenated 'project.dataset.table' string format"},
  {"name": "sourceTables", "type": "STRING", "mode": "NULLABLE", "description": "The source tables of this job, in a JSON string format of the array of concatenated 'project.dataset.table' string format, for example it can be a string of '[tableA, tableB, tableC]'"}
]
```
<li> pipeline_info<OUTPUT_TABLE_SUFFIX>

This table stores the information of the different pipelines. Each unique pipeline is a collection of all job instances of a query (involving a unique source table(s)-destination table combination) in the transformation process in BigQuery.

```
[
  {"name": "pipelineId", "type": "INTEGER", "mode": "NULLABLE", "description": "The pipeline ID of this pipeline"},
  {"name": "timestamps", "type": "ARRAY<TIMESTAMP>", "mode": "NULLABLE", "description": "Timestamps when this pipeline was run in the past"},
  {"name": "pipelineType", "type": "STRING", "mode": "NULLABLE", "description": "The pipeline type of this pipeline, its value can be dead/live/ad hoc"},
  {"name": "schedule", "type": "STRING", "mode": "NULLABLE", "description": "The schedule for this pipeline, its value can be non deterministic/hourly/daily/monthly"},
  {"name": "destinationTable", "type": "STRING", "mode": "NULLABLE", "description": "The destination table of this pipeline, in a concatenated 'project.dataset.table' string format"},
  {"name": "sourceTables", "type": "STRING", "mode": "NULLABLE", "description": "The source tables of this pipeline, in a JSON string format of the array of concatenated 'project.dataset.table' string format, for example it can be a string of '[tableA, tableB, tableC]'"}
]
```

<li> source_destination_table_pairs<OUTPUT_TABLE_SUFFIX>

This table stores all source-destination table pairs. It also stores the ID of the pipeline that each pair was part of.

```
[
  {"name": "destinationTable", "type": "STRING", "mode": "NULLABLE", "description": "The destination table"},
  {"name": "sourceTable", "type": "STRING", "mode": "NULLABLE", "description": "The source table"},
  {"name": "pipelineId", "type": "INTEGER", "mode": "NULLABLE", "description": "The pipeline ID for the pipeline that this pair was part of"}
]
```

<li> table_direct_pipelines<OUTPUT_TABLE_SUFFIX>

This table stores, for every table, its direct pipelines, both those where it is the destination table and those where it is a source table.

```
[
  {"name": "table", "type": "STRING", "mode": "NULLABLE", "description": "The table"},
  {"name": "directBackwardPipelines", "type": "ARRAY<STRUCT<INTEGER, STRING, STRING, INTEGER, STRING, STRING>>", "mode": "NULLABLE", "description": "An array of pipeline information entries that have the current table as their destination table. Each struct has information about the pipelineId, sourceTables, destinationTable, frequency, pipelineType, and schedule"},
  {"name": "directForwardPipelines", "type": "ARRAY<STRUCT<INTEGER, STRING, STRING, INTEGER, STRING, STRING>>", "mode": "NULLABLE", "description": "An array of pipeline information entries that have the current table as one of their source tables. Each struct has information about the pipelineId, sourceTables, destinationTable, frequency, pipelineType, and schedule"}
]
```
</ul>

### Routines Creation

There are several JavaScript UDFs created in BigQuery upon usage of the tool. These function files are not to be run independently in a JS environment; their contents are loaded by the Python package, `src/`, and constructed into function creation queries to BigQuery.

<ul>
<li> getPipelineTypeAndSchedule

This function takes an array of timestamps and returns a struct of the pipeline type and schedule according to the history. There are 3 possible values for the pipeline type: live/dead/ad hoc, and there are 4 possible values for the schedule: non deterministic/hourly/daily/monthly.
The routine file content is located in `bq_routines/getPipelineTypeAndSchedule.js`.

<li> getTablesInvolvedInPipelineOfTable

This function returns a list of tables that are involved in the pipeline of the input table. The routine file content is located in `bq_routines/getTablesInvolvedInPipelineOfTable.js`.
</ul>
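As an example of analysing the intermediate tables directly, the sketch below lists the dead pipelines recorded in the pipeline_info table using the `bq` CLI. It is only a sketch: substitute your own OUTPUT_PROJECT_ID, OUTPUT_DATASET_ID and OUTPUT_TABLE_SUFFIX values, and check the exact table name pattern and the casing of the `pipelineType` values against what the tool actually created in your dataset.

```
bq query --use_legacy_sql=false '
SELECT
  pipelineId,
  schedule,
  destinationTable,
  ARRAY_LENGTH(timestamps) AS run_count
FROM `<OUTPUT_PROJECT_ID>.<OUTPUT_DATASET_ID>.pipeline_info<OUTPUT_TABLE_SUFFIX>`
WHERE pipelineType = "dead"
ORDER BY run_count DESC'
```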
# [Neo4j](https://neo4j.com/developer/graph-database/) Backup & Restore via [GKE Cronjob](https://cloud.google.com/kubernetes-engine/docs/how-to/cronjobs) and [GCS](https://cloud.google.com/storage) Example

## Project Structure

```
.
└── neo4j_backup_restore_via_gke_gcs_example
    └── backup
        └── deployment
            β”œβ”€β”€ backup-cronjob.yaml       #(Cronjob configuration)
            └── deploy-exec.sh            #(Executable for backup deployment)
        └── docker
            β”œβ”€β”€ Dockerfile                #(Backup pod docker image)
            β”œβ”€β”€ backup-via-admin.sh       #(Helper used by docker image)
            └── pod-image-exec.sh         #(Executable for build & push docker image)
        β”œβ”€β”€ neo4j-backup-architecture.png
        └── backup.env                    #(Update gcloud configuration)
    └── restore
        β”œβ”€β”€ restore.env                   #(Update gcloud configuration)
        β”œβ”€β”€ download-backup.sh            #(Helper to copy backup from GCS)
        β”œβ”€β”€ restore-via-admin.sh          #(Helper to run restore admin commands)
        β”œβ”€β”€ restore-exec.sh               #(Executable for Restore)
        └── cleanup.sh                    #(Helper to remove local backup copy on pod)
    └── README.md
```

## Backup

### Backup Architecture

![image info](./backup/neo4j-backup-architecture.png)

### Build and push backup pod image

* Make sure the environment variables are set correctly in the ```backup/backup.env``` file.
  - This file should point to the Google Cloud Storage `REMOTE_BACKUPSET` used to back up the graphs, the GCR bucket used to point to the `BACKUP_IMAGE` used by the backup pod, the `GKE_NAMESPACE` to be used to back up from the correct Neo4j cluster, and the `NEO4J_ADMIN_SERVER` IPs to back up the servers from.
* Simply run the ```pod-image-exec.sh``` file to build and push the backup pod image.

```bash
# Have execute-access to the script
$ chmod u+x backup/docker/pod-image-exec.sh

# Run the back-up pod image script
$ ./backup/docker/pod-image-exec.sh
```

### Deploy backup kubernetes cronjob

* Navigate to ```backup/deployment/backup-cronjob.yaml``` and edit the schedule or anything else you'd like to customize in the cronjob.
* Run ```backup/deployment/deploy-exec.sh``` to deploy the cronjob on your neo4j cluster. A sketch of `kubectl` commands you can use to verify the deployed cronjob is included at the end of this README.

```bash
# Have execute-access to the script
$ chmod u+x backup/deployment/deploy-exec.sh

# Run the back-up cronjob deployment script
$ ./backup/deployment/deploy-exec.sh
```

### Update backup pod image

* Configuration for the image used by the backup pod can be found in the file `backup/docker/Dockerfile`.
* If any changes need to be made to the backup configuration used by the backup pod, please modify and save your changes in the following shell file: `backup-via-admin.sh`
  - Once any of these files are changed, an updated container image needs to be built and pushed to the container registry.

```bash
# Script to build and push the image
$ ./pod-image-exec.sh
```

### Delete backup cronjob

* Run the following command to delete the backup cronjob. Replace <CRONJOB_NAME> with the currently assigned cronjob name.

```bash
# Delete cronjob
$ kubectl delete cronjob <CRONJOB_NAME>
```

### Re-deploy Backup Cronjob

* Apply the following YAML file to re-deploy the Kubernetes Cronjob which schedules the backups.

```bash
# Re-deploy cronjob
$ kubectl apply -f backup-cronjob.yaml
```

## Restore

This procedure assumes that you either have a sidecar container on your neo4j instance running the google cloud-sdk, or that your neo4j instance servers have the google cloud-sdk pre-installed.

### Download and restore from Google Cloud Storage Bucket

Simply run ```/restore/restore-exec.sh```, which will call the helper shell scripts and complete the restore process one server at a time.
```bash
# Have execute-access to the script
$ chmod u+x restore/restore-exec.sh

# Execute restore procedure script
$ ./restore/restore-exec.sh
```

## References

This software uses the following open source packages:

- [GKE Cronjobs](https://cloud.google.com/kubernetes-engine/docs/how-to/cronjobs)
- [Artifact Registry](https://cloud.google.com/artifact-registry)
- [Dockerfile](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/)
- [Neo4j Enterprise](https://neo4j.com/licensing/)
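As referenced in the backup cronjob deployment section above, the following is a sketch of standard `kubectl` commands you can use to verify the deployed cronjob and inspect its runs. Replace the namespace and pod name placeholders with the values from your `backup-cronjob.yaml` and `backup.env`.

```bash
# List cronjobs in the namespace used by the backup deployment
$ kubectl get cronjobs -n <GKE_NAMESPACE>

# List the jobs and pods created by recent cronjob runs
$ kubectl get jobs -n <GKE_NAMESPACE>
$ kubectl get pods -n <GKE_NAMESPACE>

# Inspect the logs of a finished backup pod
$ kubectl logs <BACKUP_POD_NAME> -n <GKE_NAMESPACE>
```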
## Dataflow Python Flex Template

This example contains a sample Dataflow job which reads an XML file and inserts the records into a BQ table. It explains how to create a Flex Template and run it in a restricted environment where there is no internet connectivity to the Dataflow launcher or worker nodes. It also runs the Dataflow template on a shared VPC. The example also contains a DAG which can be used to trigger the Dataflow job from Composer. It also demonstrates how we can use Cloud Build to implement CI/CD for this Dataflow job.

### Resources structure

The tree below explains the purpose of each file in the folder.

```
dataflow-flex-python/
β”œβ”€β”€ cloudbuild_base.yaml             --> Cloudbuild config to build SDK image
β”œβ”€β”€ cloudbuild_df_job.yaml           --> Cloudbuild config to build Launcher image and Flex template
β”œβ”€β”€ composer_variables.template      --> Definition of all Composer variables used by DAG
β”œβ”€β”€ dag
β”‚   └── xml-to-bq-dag.py             --> DAG code to launch Dataflow job
β”œβ”€β”€ df-package                       --> Dataflow template package
β”‚   β”œβ”€β”€ corder
β”‚   β”‚   β”œβ”€β”€ bq_schema.py             --> BQ table schemas
β”‚   β”‚   β”œβ”€β”€ models.py                --> Data model for input data, generated by xsdata and pydantic plugin
β”‚   β”‚   β”œβ”€β”€ customer_orders.py       --> Dataflow pipeline implementation
β”‚   β”‚   β”œβ”€β”€ customer_orders_test.py  --> pytest for Dataflow pipeline code
β”‚   β”‚   └── __init__.py
β”‚   β”œβ”€β”€ main.py                      --> Used by launcher to launch the pipeline
β”‚   └── setup.py                     --> Used to install the package
β”œβ”€β”€ Dockerfile_Launcher              --> Dockerfile to create Launcher image
β”œβ”€β”€ Dockerfile_SDK                   --> Dockerfile to create SDK image
β”œβ”€β”€ metadata.json                    --> Metadata file used during building the flex template
β”œβ”€β”€ README.md
β”œβ”€β”€ requirements-test.txt            --> Python requirements for running tests
β”œβ”€β”€ requirements.txt                 --> Python requirements for dataflow job
└── sample-data                      --> Directory holding some sample data for test
```

### Prerequisites

This example assumes the Project, Network, DNS and Firewalls have already been set up.
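If the required services are not yet enabled in the project, the sketch below enables the APIs this walkthrough relies on (Compute Engine, Dataflow, BigQuery, Artifact Registry and Cloud Build). This is an assumption about your environment; adjust the list to match your organisation's policies.

```
gcloud services enable \
    compute.googleapis.com \
    dataflow.googleapis.com \
    bigquery.googleapis.com \
    artifactregistry.googleapis.com \
    cloudbuild.googleapis.com \
    --project <project_id>
```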
#### Export Variables

```
export PROJECT_ID=<project_id>
export PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")
export HOST_PROJECT_ID=<HOST_PROJECT_ID>
export INPUT_BUCKET_NAME=pw-df-input-bkt
export STAGING_BUCKET_NAME=pw-df-temp-bkt
export LOCATION=us-central1
export BQ_DATASET=bqtoxmldataset
export NETWORK=shared-vpc
export SUBNET=bh-subnet-usc1
export REPO=dataflowf-image-repo
export DF_WORKER_SA=dataflow-worker-sa@$PROJECT_ID.iam.gserviceaccount.com
```

#### Setup IAM

```
# Create service account for dataflow workers and launchers
gcloud iam service-accounts create dataflow-worker-sa --project=$PROJECT_ID

# Assign Dataflow Worker permissions
gcloud projects add-iam-policy-binding $PROJECT_ID --member "serviceAccount:dataflow-worker-sa@$PROJECT_ID.iam.gserviceaccount.com" --role roles/dataflow.worker

# Assign Object Viewer permissions in order to read the data from Cloud Storage
gcloud projects add-iam-policy-binding $PROJECT_ID --member "serviceAccount:dataflow-worker-sa@$PROJECT_ID.iam.gserviceaccount.com" --role roles/storage.objectViewer

# Assign Object Creator permissions in order to create temp files in Cloud Storage
gcloud projects add-iam-policy-binding $PROJECT_ID --member "serviceAccount:dataflow-worker-sa@$PROJECT_ID.iam.gserviceaccount.com" --role roles/storage.objectCreator

# Assign Service Account User permissions
gcloud projects add-iam-policy-binding $PROJECT_ID --member "serviceAccount:dataflow-worker-sa@$PROJECT_ID.iam.gserviceaccount.com" --role roles/iam.serviceAccountUser

# Assign BigQuery Job User permissions
gcloud projects add-iam-policy-binding $PROJECT_ID --member "serviceAccount:dataflow-worker-sa@$PROJECT_ID.iam.gserviceaccount.com" --role roles/bigquery.jobUser

# Assign BigQuery Data Editor permissions
gcloud projects add-iam-policy-binding $PROJECT_ID --member "serviceAccount:dataflow-worker-sa@$PROJECT_ID.iam.gserviceaccount.com" --role roles/bigquery.dataEditor

# Assign Artifact Registry Reader permissions
gcloud projects add-iam-policy-binding $PROJECT_ID --member "serviceAccount:dataflow-worker-sa@$PROJECT_ID.iam.gserviceaccount.com" --role roles/artifactregistry.reader

# Assign Network User permissions on the host project; this is needed only if Dataflow workers will be using a shared VPC
gcloud projects add-iam-policy-binding $HOST_PROJECT_ID --member "serviceAccount:service-$PROJECT_NUMBER@dataflow-service-producer-prod.iam.gserviceaccount.com" --role roles/compute.networkUser
```

#### Setup Cloud Storage

```
# Create Cloud Storage bucket for input data
gcloud storage buckets create gs://$INPUT_BUCKET_NAME --location $LOCATION --project $PROJECT_ID

# Create a bucket for Dataflow staging and temp locations
gcloud storage buckets create gs://$STAGING_BUCKET_NAME --location $LOCATION --project $PROJECT_ID
gsutil iam ch serviceAccount:dataflow-worker-sa@$PROJECT_ID.iam.gserviceaccount.com:roles/storage.legacyBucketWriter gs://$STAGING_BUCKET_NAME

# Assign Legacy Bucket Writer role on the input bucket in order to move the objects
gsutil iam ch serviceAccount:dataflow-worker-sa@$PROJECT_ID.iam.gserviceaccount.com:roles/storage.legacyBucketWriter gs://$INPUT_BUCKET_NAME
```

#### Create BQ Dataset

```
bq --location=$LOCATION mk --dataset $PROJECT_ID:$BQ_DATASET
```

#### Create Artifact Registry repository

```
gcloud artifacts repositories create $REPO --location $LOCATION --repository-format docker --project $PROJECT_ID
```

### Build Templates

#### Build and Push Docker Images for the template

```
# Build the Base Image; all packages will be used from this image when the Dataflow job runs
docker build -t $LOCATION-docker.pkg.dev/$PROJECT_ID/$REPO/dataflow-2.40-base:dev -f Dockerfile_SDK .

# Build the image used by the launcher to launch the Dataflow job
docker build -t $LOCATION-docker.pkg.dev/$PROJECT_ID/$REPO/df-xml-to-bq:dev -f Dockerfile_Launcher .

# Push both images to the repo
docker push $LOCATION-docker.pkg.dev/$PROJECT_ID/$REPO/dataflow-2.40-base:dev
docker push $LOCATION-docker.pkg.dev/$PROJECT_ID/$REPO/df-xml-to-bq:dev
```

#### Build Dataflow Flex Template

```
gcloud dataflow flex-template build gs://$INPUT_BUCKET_NAME/dataflow-templates/xml-to-bq.json \
  --image "$LOCATION-docker.pkg.dev/$PROJECT_ID/$REPO/df-xml-to-bq:dev" \
  --sdk-language "PYTHON" \
  --metadata-file metadata.json \
  --project $PROJECT_ID
```

### Demo

#### Upload Sample Data

```
gcloud storage cp ./sample-data/*.xml gs://$INPUT_BUCKET_NAME/data/
```

#### Run job using DirectRunner locally

```
# Install requirements
pip3 install -r requirements.txt -r requirements-test.txt
cd df-package

# Run tests
python3 -m pytest

# Run Job
python3 main.py --input=../sample-data/customer-orders.xml \
  --temp_location=gs://$STAGING_BUCKET_NAME/tmp \
  --staging_location=gs://$STAGING_BUCKET_NAME/staging \
  --output=$PROJECT_ID:$BQ_DATASET \
  --dead_letter_dir=../dead/ \
  --runner=DirectRunner
cd ../
```

#### Run Job using gcloud Command

```
gcloud dataflow flex-template run xml-to-bq-sample-pipeline-$(date '+%Y-%m-%d-%H-%M-%S') \
  --template-file-gcs-location gs://$INPUT_BUCKET_NAME/dataflow-templates/xml-to-bq.json \
  --additional-experiments use_runner_v2 \
  --additional-experiments=use_network_tags_for_flex_templates="dataflow-worker;allow-iap-ssh" \
  --additional-experiments=use_network_tags="dataflow-worker;allow-iap-ssh" \
  --additional-experiments=use_unsupported_python_version \
  --disable-public-ips \
  --network projects/$HOST_PROJECT_ID/global/networks/$NETWORK \
  --subnetwork https://www.googleapis.com/compute/v1/projects/$HOST_PROJECT_ID/regions/$LOCATION/subnetworks/$SUBNET \
  --service-account-email dataflow-worker-sa@$PROJECT_ID.iam.gserviceaccount.com \
  --staging-location gs://$STAGING_BUCKET_NAME/staging \
  --temp-location gs://$STAGING_BUCKET_NAME/tmp \
  --region $LOCATION --worker-region=$LOCATION \
  --parameters output=$PROJECT_ID:$BQ_DATASET \
  --parameters input=gs://$INPUT_BUCKET_NAME/data/* \
  --parameters dead_letter_dir=gs://$INPUT_BUCKET_NAME/invalid_files \
  --parameters sdk_location=container \
  --parameters sdk_container_image=$LOCATION-docker.pkg.dev/$PROJECT_ID/$REPO/dataflow-2.40-base:dev \
  --project $PROJECT_ID
```

### CI/CD using Cloud Build

#### Build Docker Image and Template with Cloud Build

The section below uses gcloud commands. In a real-world scenario, Cloud Build triggers can be created which run this build job whenever there is a change in the code.

```
# Build and push Base Image
gcloud builds submit --config cloudbuild_base.yaml . --project $PROJECT_ID --substitutions _LOCATION=$LOCATION,_PROJECT_ID=$PROJECT_ID,_REPOSITORY=$REPO

# Build and push launcher image and create flex template
gcloud builds submit --config cloudbuild_df_job.yaml . --project $PROJECT_ID --substitutions _LOCATION=$LOCATION,_PROJECT_ID=$PROJECT_ID,_REPOSITORY=$REPO,_TEMPLATE_PATH=gs://$INPUT_BUCKET_NAME/dataflow-templates
```

### Run Dataflow Flex Template job from Composer DAGs

#### Set Environment Variables for Composer

```
export COMPOSER_ENV_NAME=<composer-env-name>
export COMPOSER_REGION=$LOCATION
COMPOSER_VAR_FILE=composer_variables.json
if [ ! -f "${COMPOSER_VAR_FILE}" ]; then
  envsubst < composer_variables.template > ${COMPOSER_VAR_FILE}
fi

gcloud composer environments storage data import \
  --environment ${COMPOSER_ENV_NAME} \
  --location ${COMPOSER_REGION} \
  --source ${COMPOSER_VAR_FILE}

gcloud composer environments run \
  ${COMPOSER_ENV_NAME} \
  --location ${COMPOSER_REGION} \
  variables import -- /home/airflow/gcs/data/${COMPOSER_VAR_FILE}
```

#### Assign permissions to the Composer Worker SA

```
COMPOSER_SA=$(gcloud composer environments describe $COMPOSER_ENV_NAME --location $COMPOSER_REGION --project $PROJECT_ID --format json | jq -r '.config.nodeConfig.serviceAccount')

# Assign Service Account User permissions
gcloud projects add-iam-policy-binding $PROJECT_ID --member "serviceAccount:$COMPOSER_SA" --role roles/iam.serviceAccountUser
gcloud projects add-iam-policy-binding $PROJECT_ID --member "serviceAccount:$COMPOSER_SA" --role roles/dataflow.admin
```

#### Upload the DAG to Composer's DAG bucket

```
DAG_PATH=$(gcloud composer environments describe $COMPOSER_ENV_NAME --location $COMPOSER_REGION --project $PROJECT_ID --format json | jq -r '.config.dagGcsPrefix')
gcloud storage cp dag/xml-to-bq-dag.py $DAG_PATH
```

### Limitations

Currently this pipeline loads the whole XML file into memory for the conversion to a dict via xmltodict. This approach works for small files but is not parallelizable for very large XML files, as they are not read in chunks but in one go. This risks having a single worker deal with very large files (slow) and potentially run out of memory. In our experience, any XML file above ~300 MB starts slowing down the pipeline considerably, and memory failures can start showing up at ~500 MB. This is if you go with the default worker.

**Contributors:** @singhpradeepk, @kkulczak, @akolkiewicz

**Credit:** Sample data has been borrowed from https://learn.microsoft.com/en-in/dotnet/standard/linq/sample-xml-file-customers-orders-namespace#customersordersinnamespacexml

Data Model has been borrowed from https://learn.microsoft.com/en-in/dotnet/standard/linq/sample-xsd-file-customers-orders#customersordersxsd
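The pipeline code itself lives in `df-package/corder/customer_orders.py` and is not reproduced in this README. To make the limitation described above concrete, here is a rough, hypothetical sketch of an xmltodict-based read path; the element names, field mappings and options below are assumptions, not the actual code.

```python
# Hypothetical sketch only -- not the actual customer_orders.py.
import apache_beam as beam
import xmltodict
from apache_beam.io.filesystems import FileSystems
from apache_beam.options.pipeline_options import PipelineOptions


class ParseOrdersFn(beam.DoFn):
    """Parses one whole XML file and yields one dict per order (element names are assumed)."""

    def process(self, file_path):
        with FileSystems.open(file_path) as f:
            doc = xmltodict.parse(f.read())  # the entire file is held in memory on one worker
        orders = doc["Root"]["Orders"]["Order"]
        for order in (orders if isinstance(orders, list) else [orders]):
            yield {
                "customer_id": order.get("CustomerID"),
                "employee_id": order.get("EmployeeID"),
                "order_date": order.get("OrderDate"),
            }


def run(input_path, output_table, beam_args=None):
    options = PipelineOptions(beam_args, save_main_session=True)
    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "FilePath" >> beam.Create([input_path])
            | "ParseXml" >> beam.ParDo(ParseOrdersFn())
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                output_table,
                # Table is assumed to exist already (see bq_schema.py in the package).
                create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )
```

Because the whole file is parsed in a single `process` call, the memory and speed limits described above apply.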
```
Copyright 2023 Google. This software is provided as-is, without warranty or
representation for any use or purpose. Your use of it is subject to your
agreement with Google.
```

## Technology Stack

- Google Cloud Run
- Google Artifact Registry
- Google Cloud Storage
- Google Speech to Text
- Vertex AI Conversation
- Dialogflow CX
- Dialogflow CX Agent
- Google Data Store
- Google Secret Manager
- Gradio

## GCP Project Setup

### Creating a Project in the Google Cloud Platform Console

If you haven't already created a project, create one now. Projects enable you to manage all Google Cloud Platform resources for your app, including deployment, access control, billing, and services.

1. Open the [Cloud Platform Console][cloud-console].
2. In the drop-down menu at the top, select **NEW PROJECT**.
3. Give your project a name.
4. Make a note of the project ID, which might be different from the project name. The project ID is used in commands and in configurations.

[cloud-console]: https://console.cloud.google.com/

### Enabling billing for your project

If you haven't already enabled billing for your project, [enable billing][enable-billing] now. Enabling billing is required to use Cloud Bigtable and to create VM instances.

[enable-billing]: https://console.cloud.google.com/project/_/settings

### Install the Google Cloud SDK

If you haven't already installed the Google Cloud SDK, [install the Google Cloud SDK][cloud-sdk] now. The SDK contains tools and libraries that enable you to create and manage resources on Google Cloud Platform.

[cloud-sdk]: https://cloud.google.com/sdk/

### Setting Google Application Default Credentials

Set your [Google Application Default Credentials][application-default-credentials] by [initializing the Google Cloud SDK][cloud-sdk-init] with the command:

```
gcloud init
```

Generate a credentials file by running the [application-default login](https://cloud.google.com/sdk/gcloud/reference/auth/application-default/login) command:

```
gcloud auth application-default login
```

[cloud-sdk-init]: https://cloud.google.com/sdk/docs/initializing
[application-default-credentials]: https://developers.google.com/identity/protocols/application-default-credentials

## Upload your data to a Cloud Storage bucket

Follow these [instructions][instructions] to upload the PDF documents or PDF manuals to be used in this example.

[instructions]: https://cloud.google.com/storage/docs/uploading-objects

## Create a Generative AI Agent

Follow the instructions at this [link][link] and perform the following:

1. Create Data Stores: Select the information that you would like Vertex AI Search and Conversation to query
2. Create an Agent: Create the Dialogflow CX agent that queries the Data Store
3. Test the agent in the simulator
4. Take note of your agent link by going to the [Dialogflow CX Console][Dialogflow CX Console] and reviewing the information about the agent you created

[link]: https://cloud.google.com/generative-ai-app-builder/docs/a
[Dialogflow CX Console]: https://cloud.google.com/dialogflow/cx/docs/concept/console#agent

### Dialogflow CX Agent Data Stores

Data Stores are used to find answers to end-users' questions. Data Stores are a collection of documents, each of which references your data. For this particular example, the data store has the following characteristics:

1. Your organizational documents or manuals.
2. The data store type is unstructured data in PDF format.
3. The data is uploaded without metadata for simplicity.
You only need to point the import at the GCS bucket folder where the PDF files are; their extension determines their type.

When an end-user asks the agent a question, the agent searches the given source content for an answer and summarizes the findings into a coherent agent response. It also provides supporting links to the sources of the response so the end-user can learn more.
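This README configures everything through the console and does not show the application code. Purely as an illustration of how a front end (for example the Gradio app listed in the technology stack) might send an end-user question to the agent created above, here is a minimal, hypothetical sketch using the Dialogflow CX Python client; the project, location and agent IDs are placeholders.

```python
# Hypothetical sketch: send one end-user question to a Dialogflow CX agent and collect the
# text responses. Not part of this repository's code.
import uuid

from google.cloud import dialogflowcx_v3 as cx


def ask_agent(project_id, location, agent_id, question, language_code="en"):
    # Regional agents require a regional endpoint, e.g. us-central1-dialogflow.googleapis.com.
    client_options = None
    if location != "global":
        client_options = {"api_endpoint": f"{location}-dialogflow.googleapis.com"}
    client = cx.SessionsClient(client_options=client_options)

    # A session id groups the turns of one conversation; a random id starts a fresh session.
    session = client.session_path(project_id, location, agent_id, uuid.uuid4().hex)
    query_input = cx.QueryInput(
        text=cx.TextInput(text=question),
        language_code=language_code,
    )
    response = client.detect_intent(
        request=cx.DetectIntentRequest(session=session, query_input=query_input)
    )
    # Each response message may carry one or more text fragments.
    return [
        " ".join(msg.text.text)
        for msg in response.query_result.response_messages
        if msg.text.text
    ]
```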
# Near realtime (NRT) Feature Producer

## Hypothetical Scenario

We want to build and use near real time (NRT) features in a hypothetical scoring system. Scoring is not part of this example. There are multiple sources that produce NRT features. Features are ideally defined in the feature store system and are exposed in the online store.

### Features

Features are stored in BigQuery and synced to the online feature store (Vertex AI). Below you can see the definitions.

| Feature type | Feature name | Feature source | Window (assuming sliding)/Period | Method (Beam SQL) | Destination |
|--------------|---------------------|------------------|-----------------------|----------------------|----------------------|
| NRT | Total_number_of_clicks_last_90sec per user_id | Ga4 topic | 90sec/30s | count(*) | BQ table |
| NRT | Total_number_of_logins_last_5min per user_id | Authn topic | 300sec/30s | count(*) | BQ table |
| NRT | Total_number_of_transactions_last5min per user_id | Transactions topic | 300sec/30s | count(*) | BQ table |

### Scoring pipeline (not part of the example)

The pipeline input is a transaction topic; for each message, it takes the entity id and uses it for enrichment - it reads total_number_of_clicks_last_90sec, total_number_of_logins_last_5min, total_number_of_transactions_last5min and many other historical features that are needed for scoring. The score is emitted downstream along with the transaction details.

### Near real time feature engineering pipeline

The pipeline takes events from the source topic, splits them into multiple branches based on windowing strategy (duration, period) and does aggregations. The branches are joined back (which is tricky!) and stored into the destination table.

### Visualization

To simplify the visualization, here are 2 features (f1, f2) - 90s and 60s. As sliding windows are used, each row has its overlapping windows visualized. Events happen within the (window start, window end) boundary, but the output of an aggregation is triggered at the end of the window with a timestamp of the end boundary minus 1ms, e.g. 29.999.

![viz](viz.png)

Notice that the windows emitted by the end of the first and second periods already contain aggregations for the 60s and 90s windows. Notice also that this pipeline should also emit 0 (the default for some aggregations) even if no window is triggered because there is no data for the key.

### Resetting feature value

There is a need to reset total_number_of_clicks_last_90sec if there are no more events for a specific user_id. The solution is to implement a stateful processing step after each feature calculation that resets a timer or expires the value (producing default/0/null). Additional windowing is needed to make this possible.

### Merging branches

Total_number_of_clicks_last_90sec and total_number_of_locations_last_5min are features that are calculated based on the same data product or a similar period, and should ideally be stored in the same destination. The pipeline takes events from the ga4 topic, splits them into two branches, does windowing (90s and 300s) and aggregations (count and count distinct). As the windows are different, the result of window & aggregation can't be instantly co-grouped and stored as one row (entity_id, Total_number_of_clicks_last_90sec, total_number_of_locations_last_5min, timestamp). The solution here is that the windowing period should match (events are produced at the same rate) and the branches should be re-windowed to a fixed window and co-grouped.
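The repository implements these steps as Java Beam transforms (the NRTFeature transform with Beam SQL aggregations). Purely to illustrate the windowing and merging idea described above, here is a rough, hypothetical sketch of the same logic in the Beam Python SDK; the function and label names are made up.

```python
# Illustrative sketch only -- the actual implementation in this repository is Java.
import apache_beam as beam
from apache_beam import window


def clicks_last_90s(events):
    """events: keyed PCollection of (user_id, event) with event-time timestamps."""
    return (
        events
        # 90s sliding window emitting every 30s, matching the feature table above.
        | "Window90s" >> beam.WindowInto(window.SlidingWindows(size=90, period=30))
        | "CountClicks" >> beam.combiners.Count.PerKey()
    )


def logins_last_5min(events):
    return (
        events
        | "Window300s" >> beam.WindowInto(window.SlidingWindows(size=300, period=30))
        | "CountLogins" >> beam.combiners.Count.PerKey()
    )


def merge_features(clicks, logins):
    """Re-window both branches to the shared 30s period, then co-group per user_id."""
    clicks_fixed = clicks | "RewindowClicks" >> beam.WindowInto(window.FixedWindows(30))
    logins_fixed = logins | "RewindowLogins" >> beam.WindowInto(window.FixedWindows(30))
    return (
        {"clicks_last_90s": clicks_fixed, "logins_last_5min": logins_fixed}
        | "CoGroupFeatures" >> beam.CoGroupByKey()
    )
```

Because both sliding windows share the same 30s period, the re-windowed outputs line up on the same fixed-window boundaries and can be stored as one row per entity and period, as described in "Merging branches".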
## Sources

The repository was created based on the [quickstart](https://cloud.google.com/dataflow/docs/quickstarts/create-pipeline-java), but most of the files have been removed. It contains the NRTFeature transform and other building blocks to showcase how to implement the above requirements.

## Architecture of demo pipeline

There is a demo pipeline implemented with the taxi data source, producing two features and storing them in a BQ-backed feature store.

```
        ┌──────────────┐
        │   PubSubIO   │  Topic: taxirides-realtime
        │ (Read/Source)│
        └──────┬───────┘
               │ PCollection<String>
               v
        ┌──────────────┐
        │  JsonToRow   │
        └───┬──────┬───┘
            │      │ PCollection<Row>
            v      v
 ┌────────────────┐  ┌───────────────────┐
 │   NRTFeature   │  │ NRT Feature (pax) │  max(passenger_count) group by ride_id
 │    (meter)     │  └─────────┬─────────┘
 └────────┬───────┘            │
          │ PCollection<KV<String,Row>>
          └─────────┬──────────┘
                    v
           ┌───────────────┐
           │ CoGroupByKey  │
           └───────┬───────┘
                   │ PCollection<KV<String, CoGbkResult>>
                   v
           ┌───────────────┐
           │ CoGroupByKey  │
           └───────┬───────┘
                   │ PCollection<TableRow>
                   v
           ┌──────────────────────┐
           │ BigQueryIO (features)│
           └──────────────────────┘
```

## Run

Although pom.xml supports multiple profiles, this was tested with the local and Dataflow runners only.

### Dataflow

```
mvn -Pdataflow-runner compile exec:java \
    -Dexec.mainClass=com.google.dataflow.feature.pipeline.TaxiNRTPipeline \
    -Dexec.args="--project=PROJECT_ID \
    --gcpTempLocation=gs://BUCKET_NAME/temp/ \
    --output=gs://BUCKET_NAME/output \
    --runner=DataflowRunner \
    --projectId=FEATURE_PROJECT_ID \
    --datasetName=FEATURE_DATASET_NAME \
    --tableName=FEATURE_TABLE_NAME \
    --region=REGION"
```
## Indexing documents into Elasticsearch using Cloud Dataflow

This example Cloud Dataflow pipeline demonstrates the process of reading JSON documents from Cloud Pub/Sub, enhancing the documents using metadata stored in Cloud Bigtable, and indexing those documents into [Elasticsearch](https://www.elastic.co/). The pipeline also validates the documents for correctness and availability of metadata, and publishes any documents that fail validation into another Cloud Pub/Sub topic for debugging and eventual reprocessing.

### Workflow Overview
***

<img src="img/dataflow_elastic_workflow.png" alt="Workflow Overview" height="400" width="800"/>

At a high level, the Cloud Dataflow pipeline performs the following steps:

1. Reads JSON documents from Cloud Pub/Sub and validates that the documents are well-formed and contain a user-provided unique id field (e.g. **SKU**).
2. Enhances the document using external metadata stored in a Cloud Bigtable table. The pipeline looks up the metadata from Cloud Bigtable using the unique id field (e.g. **SKU**) extracted from the document.
3. Indexes the enhanced document into an existing Elasticsearch index.
4. Publishes, to a Cloud Pub/Sub topic, any documents that either fail validation (i.e. are not well-formed JSON documents) or do not have a metadata record in Cloud Bigtable.
5. Optionally corrects and republishes the failed documents back into Cloud Pub/Sub. *Note: This workflow is not part of the sample code provided in this repo*.

#### Sample Data

For the purpose of demonstrating this pipeline, we will use the [products](https://github.com/BestBuyAPIs/open-data-set/blob/master/products.json) data provided [here](https://github.com/BestBuyAPIs/open-data-set). The products data provides JSON documents with various attributes associated with a product:

```json
{
  "image": "http://img.bbystatic.com/BestBuy_US/images/products/4853/48530_sa.jpg",
  "shipping": 5.49,
  "price": 5.49,
  "name": "Duracell - AA 1.5V CopperTop Batteries (4-Pack)",
  "upc": "041333415017",
  "description": "Long-lasting energy; DURALOCK Power Preserve technology; for toys, clocks, radios, games, remotes, PDAs and more",
  "model": "MN1500B4Z",
  "sku": 48530,
  "type": "HardGood",
  "category": [
    { "name": "Connected Home & Housewares", "id": "pcmcat312300050015" },
    { "name": "Housewares", "id": "pcmcat248700050021" },
    { "name": "Household Batteries", "id": "pcmcat303600050001" },
    { "name": "Alkaline Batteries", "id": "abcat0208002" }
  ],
  "url": "http://www.bestbuy.com/site/duracell-aa-1-5v-coppertop-batteries-4-pack/48530.p?id=1099385268988&skuId=48530&cmp=RMXCC",
  "manufacturer": "Duracell"
}
```

#### Sample metadata

In order to demonstrate how the documents are enhanced using external metadata stored in Cloud Bigtable, we will create a Cloud Bigtable table (e.g. *products_metadata*) with a single column family (e.g. *cf*). A randomly generated *boolean* value is then stored for a field called **in_stock**, associated with the **SKU** that is used as the *rowkey*:

| rowkey | in_stock |
|:-------|:---------|
| 1234   | true     |
| 5678   | false    |
| ....   | ....     |

#### Generating sample data and metadata

In order to assist with publishing the products data into Cloud Pub/Sub and populating the metadata table in Cloud Bigtable, we provide a helper pipeline, [Publish Products](ElasticIndexer/src/main/java/com/google/cloud/pso/utils/PublishProducts.java).
The sample pipeline can be executed from the folder containing the [pom.xml](ElasticIndexer/pom.xml) file:

```bash
mvn compile exec:java -Dexec.mainClass=com.google.cloud.pso.utils.PublishProducts -Dexec.args=" \
 --runner=DataflowRunner \
 --project=[GCP_PROJECT_ID] \
 --stagingLocation=[GCS_STAGING_BUCKET] \
 --input=[GCS_BUCKET_CONTAINING_PRODUCTS_FILE]/products.json.gz \
 --topic=[INPUT_Pub/Sub_TOPIC] \
 --idField=/sku \
 --instanceId=[BIGTABLE_INSTANCE_ID] \
 --tableName=[BIGTABLE_TABLE_NAME] \
 --columnFamily=[BIGTABLE_COLUMN_FAMILY] \
 --columnQualifier=[BIGTABLE_COLUMN_QUALIFIER]"
```

<img src="img/sample_data_gen_pipeline.png" alt="Sample data generation workflow" height="864" width="800"/>

***

#### Setup and Pre-requisites

The sample pipeline is written in Java and requires Java 8 and [Apache Maven](https://maven.apache.org/).

The following high-level steps describe the setup needed to run this example:

1. Create a Cloud Pub/Sub topic and subscription for consuming the documents to be indexed.
2. Create a Cloud Pub/Sub topic and subscription for publishing the invalid documents.
3. Create a Cloud Bigtable table to store the metadata. The metadata can be stored in a single column family (e.g. *cf*).
4. Identify the following relevant fields for the existing Elasticsearch index where the documents will be published.

   | Field     | Value                          | Example             |
   |:----------|:-------------------------------|:--------------------|
   | addresses | *comma-separated-es-addresses* | http://x.x.x.x:9200 |
   | index     | *es-index-name*                | prod_index          |
   | type      | *es-index-type*                | prod                |

5. Generate sample data and metadata using the helper pipeline as described earlier.

##### Build and Execute

The sample pipeline can be executed from the folder containing the [pom.xml](ElasticIndexer/pom.xml) file:

```bash
mvn compile exec:java -Dexec.mainClass=com.google.cloud.pso.IndexerMain -Dexec.args=" \
 --runner=DataflowRunner \
 --project=[GCP_PROJECT_ID] \
 --stagingLocation=[GCS_STAGING_BUCKET] \
 --inputSubscription=[INPUT_Pub/Sub_SUBSCRIPTION] \
 --idField=[DOC_ID_FIELD] \
 --addresses=[ES_ADDRESSES] \
 --index=[ES_INDEX_NAME] \
 --type=[ES_INDEX_TYPE] \
 --rejectionTopic=[Pub/Sub_REJECTED_DOCS_TOPIC] \
 --instanceId=[BIGTABLE_INSTANCE_ID] \
 --tableName=[BIGTABLE_TABLE_NAME] \
 --columnFamily=[BIGTABLE_COLUMN_FAMILY] \
 --columnQualifier=[BIGTABLE_COLUMN_QUALIFIER]"
```

***

##### Full code examples

Ready to dive deeper? Check out the complete code [here](ElasticIndexer/src/main/java/com/google/cloud/pso/IndexerMain.java).
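The validation and routing described in steps 1 and 4 are implemented in the Java pipeline ([IndexerMain](ElasticIndexer/src/main/java/com/google/cloud/pso/IndexerMain.java)). Purely as a language-agnostic illustration of that idea, here is a small, hypothetical sketch using tagged outputs in the Beam Python SDK; the class, tag and field names are made up.

```python
# Illustrative sketch only -- the actual pipeline in this repository is Java.
import json

import apache_beam as beam
from apache_beam import pvalue

VALID = "valid"
REJECTED = "rejected"


class ValidateDocFn(beam.DoFn):
    """Routes well-formed documents that carry the unique id field to the main output,
    and everything else to a 'rejected' output destined for the rejection Pub/Sub topic."""

    def __init__(self, id_field="sku"):
        self.id_field = id_field

    def process(self, raw_message):
        try:
            doc = json.loads(raw_message)
        except ValueError:
            yield pvalue.TaggedOutput(REJECTED, raw_message)  # not well-formed JSON
            return
        if self.id_field not in doc:
            yield pvalue.TaggedOutput(REJECTED, raw_message)  # missing the unique id field
            return
        yield doc


# Usage inside a pipeline:
#   results = messages | beam.ParDo(ValidateDocFn("sku")).with_outputs(REJECTED, main=VALID)
#   valid_docs, rejected = results[VALID], results[REJECTED]
```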
# BigQuery Remote Function Sample Code

A [BigQuery remote function](https://cloud.google.com/bigquery/docs/reference/standard-sql/remote-functions) allows users to deploy custom services or libraries written in any language other than SQL and JavaScript that are not available as BigQuery user-defined functions. BQ remote functions provide direct integration with Cloud Functions or Cloud Run.

This repository has string-formatting Java code which can be deployed on Cloud Run or Cloud Functions and invoked using SQL queries from BigQuery. BigQuery sends an HTTP POST request to Cloud Run in the [input JSON format](https://cloud.google.com/bigquery/docs/reference/standard-sql/remote-functions#input_format) and expects the endpoint to return results in the [output JSON format](https://cloud.google.com/bigquery/docs/reference/standard-sql/remote-functions#output_format); in case of failure, it sends back error messages.

### Deployment Steps on Cloud Run:

1. Set environment variables:
   ```
   PROJECT_NAME=$(gcloud config get-value project)
   INSTANCE_NAME=string-format
   REGION=us-central1
   JAVA_VERSION=java11
   SERVICE_ENTRY_POINT=com.google.cloud.pso.bqremotefunc.StringFormat
   ```
2. Clone this git repo in your GCP project and go to the directory:
   ```
   cd examples/bq-remote-function/string_formatter
   ```
3. Deploy the code (as a gen2 Cloud Function, which runs on Cloud Run) using the commands below:
   ```
   gcloud functions deploy $INSTANCE_NAME \
   --project=$PROJECT_NAME \
   --gen2 \
   --region=$REGION \
   --runtime=$JAVA_VERSION \
   --entry-point=$SERVICE_ENTRY_POINT \
   --trigger-http
   ```
4. Copy the HTTPS URL from the Cloud Run UI.
5. Create a remote function in BigQuery.
   1. Create a connection of type **CLOUD_RESOURCE**; replace the connection name in the command below and run it in Cloud Shell.
      ```
      bq mk --connection \
      --display_name=<connection-name> \
      --connection_type=CLOUD_RESOURCE \
      --project_id=$PROJECT_NAME \
      --location=$REGION <connection-name>
      ```
   2. Create a remote function in the BigQuery editor with the query below (replace the variables based on your environment).
      ```
      CREATE OR REPLACE FUNCTION `<project-id>.<dataset>.<function-name>` (text STRING) RETURNS STRING
      REMOTE WITH CONNECTION `<BQ connection name>`
      OPTIONS (endpoint = '<HTTP end point of the cloud run service>');
      ```
6. Use the remote function in a query just like any other user-defined function:
   ```
   SELECT `<project-id>.<dataset>.<function-name>`(col_name)
   from (select * from unnest(['text1','text2','text3']) as col_name );
   ```
7. Expected output:
   ```
   text1_test
   text2_test
   text3_test
   ```

### Logging and Monitoring the Cloud Run service:

Go to Cloud Run in the GCP console, click the instance you created and select LOGS in the action bar. When the instance is invoked from BigQuery, you will see the logs printed. In parallel, in the METRICS section you can check the request count, container utilisation and billable time.

### Cost

The cost can be calculated using the [pricing calculator](https://cloud.google.com/products/calculator) for both Cloud Run and BigQuery utilization by entering CPU, memory and concurrent request counts.

### Clean up

To clean up, delete the Cloud Run instance and the BQ remote function.

### Limitations:

BQ remote functions do not support [payloads larger than 10 MB](https://cloud.google.com/bigquery/quotas#query_jobs:~:text=Maximum%20request%20size,like%20query%20parameters) and accept only certain [data types](https://cloud.google.com/bigquery/docs/reference/standard-sql/remote-functions#limitations).

### Next steps:

For more Cloud Run samples beyond Java, see the main list in the [Cloud Run Samples repository](https://github.com/GoogleCloudPlatform/cloud-run-samples).
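The service in this repository is Java (`StringFormat`). Purely to illustrate the input/output contract linked above, here is a minimal, hypothetical handler written with the Python Functions Framework; the `_test` suffix mirrors the expected output shown earlier.

```python
# Hypothetical illustration of the BigQuery remote function request/response contract.
# The actual code in this repository is Java.
import functions_framework


@functions_framework.http
def string_format(request):
    body = request.get_json(silent=True) or {}
    try:
        # BigQuery batches rows into {"calls": [["text1"], ["text2"], ...]}.
        replies = [f"{row[0]}_test" for row in body.get("calls", [])]
        # The endpoint must reply with {"replies": [...]} in the same order.
        return {"replies": replies}
    except Exception as exc:
        # On failure, return an errorMessage that BigQuery surfaces to the query.
        return {"errorMessage": str(exc)}, 400
```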
# Uploading files directly to Google Cloud Storage by using Signed URL

This is an architecture for uploading files directly to Google Cloud Storage by using Signed URL.

## Overview

This code implements the following architecture:

![architecture diagram](./architecture.png)

The key characteristic of this architecture is that serverless components handle the entire flow, from file upload to delivery. The processing proceeds in the following order:

1. The application generates a Signed URL that allows a PUT request only for a specific bucket and object, for a user authenticated by the application's domain logic.
2. The user uploads a file to that bucket and object by using the given Signed URL.
3. When the upload to GCS completes, Google Cloud Functions (GCF) is triggered by the finalize event and validates the uploaded file.
4. After step 3 confirms that the file is in an image format and of appropriate size, the function annotates the image by calling the Cloud Vision API to filter inappropriate content.
5. Once both validations (steps 3 and 4) pass, the function copies the image file from the upload bucket to the distribution bucket.
6. The copied image file is now available to the public.

## Usage

First off, you should check the requirements for realizing this system.

## Requirements

### API

In order to realize this system, you need to enable the following APIs:

- Cloud Storage API
- Cloud Functions API
- Identity and Access Management (IAM) API
  - If you are going to use your own service account and its private key instead of the `signBlob` API, you don't need to enable this API.
- Cloud Vision API

### Service Account

In order to generate the Signed URL on App Engine Standard, you need to prepare a service account for signing. The service account must have the following permissions:

- `storage.buckets.get`
- `storage.objects.create`
- `storage.objects.delete`

You also need to grant your service account the `Service Account Token Creator` role.

### Step.1 Create uploadable and distribution buckets

Before deploying applications, you should create two buckets for use in this system.

```sh
REGION="<REGION>"
PROJECT_ID="<PROJECT ID>"
UPLOADABLE_BUCKET="<UPLOADABLE BUCKET NAME>"
DISTRIBUTION_BUCKET="<DISTRIBUTION BUCKET NAME>"
LIFECYCLE_POLICY_FILE="./lifecycle.json"

# Creates the uploadable bucket
gsutil mb -p $PROJECT_ID -l $REGION --retention 900s gs://$UPLOADABLE_BUCKET
# Creates the bucket for distribution
gsutil mb -p $PROJECT_ID -l $REGION gs://$DISTRIBUTION_BUCKET
# Set lifecycle for the uploadable bucket
gsutil lifecycle set $LIFECYCLE_POLICY_FILE gs://$UPLOADABLE_BUCKET
# Publish all objects to all users
gsutil iam ch allUsers:objectViewer gs://$DISTRIBUTION_BUCKET
```

### Step.2 Deploy to App Engine Standard

To generate the Signed URL, you need to deploy the code placed in `appengine`. Make sure environment variables have appropriate values in `app.yaml` before deploying.

```sh
cd appengine
# Make sure environment variables have appropriate values in app.yaml
gcloud app deploy
```

### Step.3 Deploy to Google Cloud Functions

To validate files and copy them to the distribution bucket, you need to deploy the code placed in `function`. Make sure constant variables have appropriate values in `function/main.go`.

```sh
UPLOADABLE_BUCKET="<UPLOADABLE_BUCKET>"
cd function
# Make sure constant variables have appropriate values in `function/main.go`.
gcloud functions deploy UploadImage --runtime go111 --trigger-resource $UPLOADABLE_BUCKET --trigger-event google.storage.object.finalize --retry
```

### Step.4 Try to upload your image!
By executing the following code, you can try to upload a sample image by using the Signed URL.

```go
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
	"net/url"
	"strings"
)

const signerUrl = "<APPENGINE_URL>"

func getSignedURL(target string, values url.Values) (string, error) {
	resp, err := http.PostForm(target, values)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	// Get signed url by requesting API server hosted on App Engine.
	u, err := getSignedURL(signerUrl, url.Values{"content_type": {"image/png"}, "ext": {"png"}})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Signed URL here: %q\n", u)

	// Read the local sample image.
	b, err := ioutil.ReadFile("./sample.png")
	if err != nil {
		log.Fatal(err)
	}

	// PUT the image to GCS using the Signed URL.
	req, err := http.NewRequest("PUT", u, bytes.NewReader(b))
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Add("Content-Type", "image/png")

	client := new(http.Client)
	resp, err := client.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp)
}
```

And then you can confirm that the sample image file is now published by accessing `https://console.cloud.google.com/storage/browser/$DISTRIBUTION_BUCKET?project=$PROJECT_ID`.
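For comparison, here is a hedged Python sketch of the server-side step performed by the App Engine service: issuing a V4 Signed URL that permits a single `PUT` to one object. This is not the repository's own implementation in `appengine/`; the bucket and object names are placeholders, and it assumes the runtime has credentials that are able to sign (for example a service account key, or the `signBlob` API via impersonated credentials).

```python
# Illustrative sketch only -- the repository's own signer lives in appengine/.
from datetime import timedelta

from google.cloud import storage


def make_upload_url(bucket_name: str, object_name: str, content_type: str) -> str:
    """Return a V4 Signed URL allowing one PUT of `content_type` to the object."""
    client = storage.Client()
    blob = client.bucket(bucket_name).blob(object_name)
    return blob.generate_signed_url(
        version="v4",
        expiration=timedelta(minutes=15),  # keep the upload window short
        method="PUT",
        content_type=content_type,  # the client must send the same Content-Type
    )


if __name__ == "__main__":
    # Placeholder names; the real bucket is the uploadable bucket created in Step.1.
    print(make_upload_url("uploadable-bucket", "images/sample.png", "image/png"))
```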
## De-id Pipeline Design Document ### Context #### Objective The DLP De-identification Pipeline aims to identify and anonymize sensitive data stored in BigQuery or Google Cloud Storage (GCS). The pipeline reads data from a source, de-identifies sensitive information using Cloud DLP, and writes the de-identified data to a corresponding location in the specified destination. This enables the secure migration of data to lower environments, such as development and testing, where developers or other users require data access, but sensitive information needs to be removed to mitigate privacy and security risks. #### Background Production environments often contain sensitive information or Personally Identifiable Information (PII), but lower environments require de-identified data to prevent unauthorized access and potential breaches. Therefore, migrating de-identified data to these environments is crucial for purposes including testing, development, and analysis. Ideally, de-identified data should closely resemble the source data to facilitate the accurate replication of processes and scenarios found in production. This allows users in lower environments to work with realistic data without compromising sensitive information. Google Cloud's Sensitive Data Protection (also known as Data Loss Prevention or DLP) service offers built-in features for identifying and de-identifying sensitive data in Cloud Storage and integrates with services like BigQuery. However, it has limitations regarding file types and sizes and lacks a unified solution that seamlessly handles both BigQuery and Cloud Storage data de-identification. This pipeline addresses these limitations by providing a comprehensive and scalable solution for de-identifying data across both BigQuery and GCS. ### Design #### Overview The DLP De-identification Pipeline is a Dataflow pipeline that anonymizes sensitive data residing in BigQuery or Google Cloud Storage (GCS). It offers a comprehensive solution for migrating data to lower environments while ensuring privacy and security. ![GCS mode diagram](diagrams/design_diagram_gcs.png) ![BQ mode diagram](diagrams/design_diagram_gcs.png) The De-id pipeline works as follows: 1. **Data Ingestion**: The pipeline reads data from either BigQuery tables or various file formats stored in GCS. 2. **De-identification**: Leveraging Cloud DLP’s powerful de-identification capabilities, the pipeline anonymizes sensitive data within the ingested data. This includes techniques like: * **Format-Preserving Encryption:** This technique encrypts sensitive data while maintaining its original format and referential integrity. This is crucial for preserving data utility in lower environments. * **Other De-identification Techniques:** Cloud DLP offers a range of other de-identification techniques, such as masking, redaction, tokenization, and pseudonymization, which can be configured based on specific needs and privacy requirements. 3. **Output**: The pipeline writes the de-identified data to the specified destination, mirroring the source structure and format. This ensures consistency and facilitates seamless integration with downstream processes in lower environments. The De-id pipeline offers several **key benefits**: * **Comprehensive Solution:** Handles both structured data from BigQuery and unstructured/semi-structured data from GCS. * **Scalability and Reliability:** Built on Dataflow, the pipeline provides scalability and reliability for handling large datasets and heavy de-identification tasks. 
* **Data Utility:** Format-preserving encryption and other de-identification techniques ensure that the anonymized data remains useful for testing, development, and analysis in lower environments.
* **Security and Privacy:** By de-identifying sensitive data, the pipeline helps protect sensitive information and comply with privacy regulations.

The De-id pipeline offers a robust and efficient way to create secure and usable data copies for lower environments, enabling various data-driven activities without compromising sensitive information.

#### Detailed Design

##### DLP Templates

This solution employs templates to streamline the de-identification process for sensitive data. By configuring de-identification settings within a template, a reusable blueprint is established. This eliminates the need for repetitive configuration, allowing de-identification jobs to be executed multiple times with ease.

To ensure referential integrity while masking sensitive information, a combination of format-preserving encryption (FPE) and regular expressions (regex) is utilized. This approach enables the original data pattern to be maintained even after de-identification.

**Illustrative Example: Customer ID**

Consider a scenario where a Customer ID follows the format "A123456" (i.e., "A" followed by a 6-digit number) and is classified as PII. A custom PII info type named "CUSTOMER\_ID" can be configured within the inspection template, utilizing the following regex:

```json
{
    "info_type": {
        "name": "CUSTOMER_ID"
    },
    "regex": {
        "group_indexes": [2],
        "pattern": "(A)(\\d{6})"
    }
}
```

In this regex, two group indexes are defined, but only the second group index (the 6-digit number) is designated as sensitive. This ensures that during de-identification, only the numerical portion undergoes transformation. FPE guarantees that the output remains a 6-digit number, and by preserving the prefix "A," the overall pattern of the Customer ID is retained.

**FPE Configuration**

Here’s an example of how FPE can be configured within the de-identification template for this Customer ID:

```json
{
    "primitive_transformation": {
        "crypto_replace_ffx_fpe_config": {
            "crypto_key": <CRYPTO_KEY>,
            "common_alphabet": "NUMERIC"
        }
    },
    "info_types": [
        {
            "name": "CUSTOMER_ID"
        }
    ]
}
```

**Template Configuration for this Example**

The table below shows the PII types configured and how they are de-identified by this solution. The inspection and de-identification templates can be customized to suit your specific needs and integrated into your data processing pipeline.

| **PII Info Type** | **Original** | **De-identified** |
| :----------------- | :----------- | :--------------- |
| Customer ID | A935492 | A678512 |
| Email Address | email@example.net | 9jRsv@example.net |
| Credit Card Number | 3524882434259679 | 1839406548854298 |
| SSN | 298-34-4337 | 515-57-9132 |
| Date | 1979-10-29 | 1982-08-24 |
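To make the template usage concrete, here is a minimal Python sketch (not the pipeline's actual Dataflow code) that pushes a small batch of values through Cloud DLP using pre-created inspect and de-identify templates. The project ID, template names, and column name are placeholder assumptions.

```python
# Illustrative sketch only; the real pipeline batches rows inside Dataflow workers.
from google.cloud import dlp_v2


def deidentify_values(project, values, inspect_template, deidentify_template):
    """De-identify a small batch of strings with pre-created DLP templates."""
    client = dlp_v2.DlpServiceClient()
    # DLP accepts a small table per request; one column is enough for this sketch.
    item = {
        "table": {
            "headers": [{"name": "value"}],
            "rows": [{"values": [{"string_value": v}]} for v in values],
        }
    }
    response = client.deidentify_content(
        request={
            "parent": f"projects/{project}/locations/global",
            "inspect_template_name": inspect_template,
            "deidentify_template_name": deidentify_template,
            "item": item,
        }
    )
    return [row.values[0].string_value for row in response.item.table.rows]


# Example call with placeholder names:
# deidentify_values(
#     "project-id",
#     ["A935492", "298-34-4337"],
#     "projects/project-id/locations/global/inspectTemplates/inspect_template",
#     "projects/project-id/locations/global/deidentifyTemplates/deidentify_template",
# )
```

The Dataflow pipeline described above performs this same kind of call at scale inside its workers, which is what preserves the referential integrity shown in the table.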
# DLP De-identification Pipeline

This Beam pipeline reads data from either Google Cloud Storage (GCS) or BigQuery (BQ), de-identifies sensitive data using DLP, and writes the de-identified data to the corresponding destination in GCS or BQ. The pipeline supports two modes:

* **GCS mode:** For processing files stored in GCS in Avro, CSV, TXT, DAT, and JSON formats.
* **BigQuery mode:** For processing data stored in BigQuery tables.

To learn more, read [DOC.md](DOC.md).

## Setup

Before running the pipeline, ensure the following prerequisites are met:

* **DLP Inspect Template:** A DLP inspect template that defines the types of sensitive data to be identified. See [steps](src/dlp/templates/README.md#setup-and-deploy-the-templates) for creating a DLP template.
* **DLP De-identify Template:** A DLP de-identify template that defines how to transform the sensitive data. See [steps](src/dlp/templates/README.md#setup-and-deploy-the-templates) for creating a DLP template.
* **Service Account:** A service account with the necessary permissions to access resources in both the source and destination projects. This service account should have the following roles:
  * **Source Project:** Dataflow Admin, Dataflow Worker, Storage Object Admin, DLP Administrator, BigQuery Data Editor, BigQuery Job User.
  * **Destination Project:** Storage Object Viewer, BigQuery Data Editor, BigQuery Job User.
* **BigQuery Schema (BigQuery mode only):** The dataset and tables, including their schema, should already exist in the destination project.

## Pipeline Options

| Pipeline Option | Description |
|---|---|
| `project` | The Google Cloud project ID. |
| `region` | The Google Cloud region where the Dataflow job will run. |
| `job_name` | The name of the Dataflow job. |
| `service_account` | The service account used to run the Dataflow job. |
| `machine_type` | The machine type for Dataflow workers. |
| `max_num_workers` | The maximum number of Dataflow workers. |
| `job_dir` | The GCS location for staging Dataflow job files. |
| `prod` | A boolean flag indicating whether the pipeline is running in production mode. 'True' runs the pipeline with the Dataflow runner while 'False' runs the pipeline locally. |
| `inspect_template` | The name of the DLP inspect template. |
| `deidentify_template` | The name of the DLP de-identify template. |
| `dlp_batch_size` | The batch size for processing data with DLP. The default is 100. |
| `mode` | The mode of operation: "gcs" for Google Cloud Storage or "bq" for BigQuery. |
| `input_dir` | (GCS mode) The GCS location of the input data. |
| `output_dir` | (GCS mode) The GCS location for the output data. It can be in a different project from the input. |
| `input_projects` | (BigQuery mode) A list of Google Cloud project IDs containing the input BigQuery tables. |
| `output_projects` | (BigQuery mode) A list of Google Cloud project IDs where the output BigQuery tables will be written. |
| `config_file` | YAML config file with all the pipeline options set, to avoid passing a lot of options in a command. |

## Run

1. To avoid passing many flags in the run command, fill [config.yaml](config.yaml) with the parameters.
   - Example `config.yaml` to de-identify data in GCS

   ```yaml
   project: project-id
   region: us-central1
   job_name: dlp-deid-pipeline
   service_account: dlp-deid-pipeline-sa@project-id.iam.gserviceaccount.com
   machine_type: n1-standard-2
   max_num_workers: 30
   job_dir: gs://staging-bucket
   prod: False

   # DLP Params
   inspect_template: projects/project-id/locations/global/inspectTemplates/inspect_template
   deidentify_template: projects/project-id/locations/global/deidentifyTemplates/deidentify_template
   dlp_batch_size: 100

   # Either "gcs" or "bq"
   mode: gcs

   # GCS mode required params
   input_dir: gs://input-bucket/dir
   output_dir: gs://output-bucket/dir
   ```

   - Example `config.yaml` to de-identify data in BigQuery

   ```yaml
   project: project-id
   region: us-central1
   job_name: dlp-deid-pipeline
   service_account: dlp-deid-pipeline-sa@project-id.iam.gserviceaccount.com
   machine_type: n1-standard-2
   max_num_workers: 30
   job_dir: gs://staging-bucket

   # DLP Params
   inspect_template: projects/project-id/locations/global/inspectTemplates/inspect_template
   deidentify_template: projects/project-id/locations/global/deidentifyTemplates/deidentify_template
   dlp_batch_size: 100

   # Either "gcs" or "bq"
   mode: bq

   # BigQuery mode required params
   input_projects: input_projects1,input_projects2
   output_projects: output_projects1,output_projects2
   ```

2. Create a virtual environment

   ```bash
   python3 -m venv .venv
   source .venv/bin/activate
   ```

3. Run

   - Run locally

   ```
   python3 src.run --config_file config.yaml
   ```

   - Run on Dataflow

   ```
   python3 src.run --config_file config.yaml --prod true
   ```
# DLP Templates

[Templates](https://cloud.google.com/sensitive-data-protection/docs/concepts-templates) are reusable configurations that tell DLP how to inspect, de-identify, or re-identify your data.

This solution considers the following as sensitive data and provides the expected outcome:

| PII Info Type | Original | De-identified |
|-----------------|-------------------|-------------------|
| Customer ID | A935492 | A678512 |
| Email Address | email@example.net | 9jRsv@example.net |
| Credit Card Number | 3524882434259679 | 1839406548854298 |
| SSN | 298-34-4337 | 515-57-9132 |
| Date | 1979-10-29 | 1982-08-24 |

In this solution, the templates are created using [Cloud Functions](https://cloud.google.com/functions/1stgendocs/concepts/overview).

## Setup and Deploy the Templates

- Set the project ID, region, and project number

```
PROJECT_ID=<project_id>
REGION=<region>
PROJECT_NUMBER=<project_number>
```

- Enable required APIs

```
gcloud services enable \
  cloudfunctions.googleapis.com \
  secretmanager.googleapis.com \
  dlp.googleapis.com \
  cloudkms.googleapis.com
```

- Ensure the service account has the required permissions (since a 1st gen function is used, the default service account is the App Engine service account). Note that `gcloud projects add-iam-policy-binding` accepts a single `--role` per invocation, so the roles are granted one at a time:

```
for ROLE in roles/secretmanager.secretAccessor roles/dlp.user roles/cloudkms.cryptoKeyEncrypterDecrypter; do
  gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member="serviceAccount:${PROJECT_ID}@appspot.gserviceaccount.com" \
    --role="$ROLE"
done
```

- Create a KMS key ring

```
gcloud kms keyrings create "dlp-keyring" \
  --location "global"
```

- Create a key

```
gcloud kms keys create "dlp-key" \
  --location "global" \
  --keyring "dlp-keyring" \
  --purpose "encryption"
```

- Create a 256-bit AES key using openssl:

```
openssl rand -out "./aes_key.bin" 32
```

- Encode the key as a base64 string and wrap it using the Cloud KMS key

```
curl "https://cloudkms.googleapis.com/v1/projects/$PROJECT_ID/locations/global/keyRings/dlp-keyring/cryptoKeys/dlp-key:encrypt" \
  --request "POST" \
  --header "Authorization:Bearer $(gcloud auth application-default print-access-token)" \
  --header "content-type: application/json" \
  --data "{\"plaintext\": \"$(base64 -i ./aes_key.bin)\"}"
```

- Store the wrapped key in Secret Manager

```
echo -n "<ciphertext from previous result>" | gcloud secrets create dlp-wrapped-key \
  --replication-policy="automatic" \
  --data-file=-
```

- Set the key name and the wrapped key secret name

```
KMS_KEY_NAME=projects/$PROJECT_ID/locations/global/keyRings/dlp-keyring/cryptoKeys/dlp-key
SECRET_NAME=projects/$PROJECT_NUMBER/secrets/dlp-wrapped-key
```

- Deploy the function to create an inspect template

```
gcloud functions deploy create-inspect-template \
  --runtime python311 \
  --trigger-http \
  --source src/dlp/templates/inspect \
  --entry-point main \
  --region $REGION
```

- Deploy the function to create a de-identify template

```
gcloud functions deploy create-deidentify-template \
  --runtime python311 \
  --trigger-http \
  --source src/dlp/templates/deidentify \
  --entry-point main \
  --region $REGION
```

- Create the inspect template

```
gcloud functions call create-inspect-template \
  --data "{\"project\": \"$PROJECT_ID\"}"
```

- Create the de-identify template

```
gcloud functions call create-deidentify-template \
  --data "{\"project\": \"$PROJECT_ID\", \"kms_key_name\": \"$KMS_KEY_NAME\", \"secret_name\": \"$SECRET_NAME\"}"
```

## Templates

If you followed the steps correctly, you should now have two DLP templates in your project.
These template names should look like the following:

```
projects/<project_id>/locations/global/inspectTemplates/inspect_template
projects/<project_id>/locations/global/deidentifyTemplates/deidentify_template
```
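For illustration, here is a hedged Python sketch of roughly what the `create-inspect-template` function does for the custom `CUSTOMER_ID` infoType described in the design document. The display name, template ID, and the list of built-in infoTypes are assumptions for this sketch; the repository's actual Cloud Function may differ.

```python
# Illustrative sketch only; the deployed Cloud Function may differ in detail.
from google.cloud import dlp_v2


def create_customer_id_inspect_template(project: str) -> str:
    """Create an inspect template containing the custom CUSTOMER_ID infoType."""
    client = dlp_v2.DlpServiceClient()
    inspect_template = {
        "display_name": "inspect_template",  # assumed name
        "inspect_config": {
            "custom_info_types": [
                {
                    "info_type": {"name": "CUSTOMER_ID"},
                    # Only the second capture group (the 6 digits) is sensitive.
                    "regex": {"pattern": r"(A)(\d{6})", "group_indexes": [2]},
                }
            ],
            # Built-in infoTypes matching the table above (assumed selection).
            "info_types": [
                {"name": "EMAIL_ADDRESS"},
                {"name": "CREDIT_CARD_NUMBER"},
                {"name": "US_SOCIAL_SECURITY_NUMBER"},
                {"name": "DATE"},
            ],
        },
    }
    response = client.create_inspect_template(
        request={
            "parent": f"projects/{project}/locations/global",
            "inspect_template": inspect_template,
            "template_id": "inspect_template",  # assumed ID
        }
    )
    return response.name
```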
# Sentiment analysis using TensorFlow RNNEstimator on Google Cloud Platform.

### Overview.

This code aims at providing a simple example of how to train an RNN model using TensorFlow [RNNEstimator](https://www.tensorflow.org/api_docs/python/tf/contrib/estimator/RNNEstimator) on Google Cloud Platform. The model is designed to handle raw text files as input, with no preprocessing needed. A more detailed guide can be found [here](https://docs.google.com/document/d/1CKYdv_LyTcpQw07UH_4iCsxL6IGs6hmsFWwUMv5bwug/edit#).

### Problem and data.

The problem is a text classification example where we categorize movie reviews into positive or negative sentiment. We base this example on the IMDb dataset provided at this website: http://ai.stanford.edu/~amaas/data/sentiment/

### Set-up environment.

```sh
PROJECT_NAME=sentiment_analysis
git clone https://github.com/GoogleCloudPlatform/professional-services.git
cd professional-services/examples/cloudml-sentiment-analysis
python -m virtualenv env
source env/bin/activate
python -m pip install -U pip
python -m pip install -r requirements.txt
```

### Download data.

```sh
DATA_PATH=data
INPUT_DATA=${DATA_PATH}/aclImdb/train
TRAINING_INPUT_DATA=${DATA_PATH}/training_data
wget http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz -P $DATA_PATH
tar -xzf ${DATA_PATH}/aclImdb_v1.tar.gz -C $DATA_PATH
```

### Configure GCP.

```sh
PROJECT_ID=<...>
BUCKET_PATH=<...>
gcloud config set project $PROJECT_ID
```

### Move data to GCP.

```sh
gsutil -m cp -r $DATA_PATH/aclImdb $BUCKET_PATH
GCP_INPUT_DATA=$BUCKET_PATH/aclImdb/train
```

### Preprocess data.

```sh
JOB_NAME=training-$(date +"%Y%m%d-%H%M%S")
PROCESSED_DATA=$BUCKET_PATH/processed_data/$JOB_NAME
python run_preprocessing.py \
  --input_dir=$GCP_INPUT_DATA \
  --output_dir=$PROCESSED_DATA \
  --gcp=True \
  --project_id=$PROJECT_ID \
  --job_name=$JOB_NAME \
  --num_workers=8 \
  --worker_machine_type=n1-highcpu-4 \
  --region=us-central1
```

### Train model locally.

```sh
MODEL_NAME=${PROJECT_NAME}_$(date +"%Y%m%d_%H%M%S")
TRAINING_OUTPUT_DIR=models/$MODEL_NAME
python -m trainer.task \
  --input_dir=$PROCESSED_DATA \
  --model_dir=$TRAINING_OUTPUT_DIR
```

### Train model on GCP.

```sh
MODEL_NAME=${PROJECT_NAME}_$(date +"%Y%m%d_%H%M%S")
TRAINING_OUTPUT_DIR=${BUCKET_PATH}/$MODEL_NAME
gcloud ml-engine jobs submit training $MODEL_NAME \
  --module-name trainer.task \
  --staging-bucket $BUCKET_PATH \
  --package-path $PWD/trainer \
  --region=us-central1 \
  --runtime-version 1.12 \
  --config=config_hp_tuning.yaml \
  --stream-logs \
  -- \
  --input_dir $PROCESSED_DATA \
  --model_dir $TRAINING_OUTPUT_DIR
```

### Train model locally with gcloud.

```sh
MODEL_NAME=${PROJECT_NAME}_$(date +"%Y%m%d_%H%M%S")
TRAINING_OUTPUT_DIR=models/$MODEL_NAME
gcloud ml-engine local train \
  --module-name=trainer.task \
  --package-path=$PWD/trainer \
  -- \
  --input_dir=$PROCESSED_DATA \
  --model_dir=$TRAINING_OUTPUT_DIR
```

### Monitor with tensorboard.

```sh
tensorboard --logdir=$TRAINING_OUTPUT_DIR
```

### Save model in GCP.

**With HP tuning:**

```sh
TRIAL_NUMBER=''
MODEL_SAVED_NAME=$(gsutil ls ${TRAINING_OUTPUT_DIR}/${TRIAL_NUMBER}/export/exporter/ | tail -1)
```

**Without HP tuning:**

```sh
MODEL_SAVED_NAME=$(gsutil ls ${TRAINING_OUTPUT_DIR}/export/exporter/ | tail -1)
```

```sh
gcloud ml-engine models create $PROJECT_NAME \
  --regions us-central1
gcloud ml-engine versions create $MODEL_NAME \
  --model $PROJECT_NAME \
  --origin $MODEL_SAVED_NAME \
  --runtime-version 1.12
```

### Make local online predictions.
```sh
gcloud ml-engine local predict \
  --model-dir=${TRAINING_OUTPUT_DIR}/export/exporter/$(ls ${TRAINING_OUTPUT_DIR}/export/exporter/ | tail -1) \
  --text-instances=${DATA_PATH}/aclImdb/test/*/*.txt
```

### Make online predictions with GCP.

```sh
gcloud ml-engine predict \
  --model=$PROJECT_NAME \
  --version=$MODEL_NAME \
  --text-instances=$DATA_PATH/aclImdb/test/neg/0_2.txt
```

### Move out-of-sample data to GCS.

```sh
PREDICTION_DATA_PATH=${BUCKET_PATH}/prediction_data
gsutil -m cp -r ${DATA_PATH}/aclImdb/test/ $PREDICTION_DATA_PATH
```

### Make batch predictions with GCP.

```sh
JOB_NAME=${PROJECT_NAME}_predict_$(date +"%Y%m%d_%H%M%S")
PREDICTIONS_OUTPUT_PATH=${BUCKET_PATH}/predictions/$JOB_NAME
gcloud ml-engine jobs submit prediction $JOB_NAME \
  --model $PROJECT_NAME \
  --input-paths $PREDICTION_DATA_PATH/neg/* \
  --output-path $PREDICTIONS_OUTPUT_PATH \
  --region us-central1 \
  --data-format TEXT \
  --version $MODEL_NAME
```

### Scoring.

```sh
python scoring.py \
  --project_name=$PROJECT_ID \
  --model_name=$PROJECT_NAME \
  --input_path=$DATA_PATH/aclImdb/test \
  --size=1000 \
  --batch_size=20
```
---
aliases:
  - getting-started-influxdb/
description: Learn how to build your first InfluxDB dashboard in Grafana.
labels:
  products:
    - enterprise
    - oss
title: Get started with Grafana and InfluxDB
weight: 400
---

# Get started with Grafana and InfluxDB

#### Get InfluxDB

You can [download InfluxDB](https://portal.influxdata.com/downloads/) and install it locally or you can sign up for [InfluxDB Cloud](https://www.influxdata.com/products/influxdb-cloud/). Windows installers are not available for some versions of InfluxDB.

#### Install other InfluxDB software

[Install Telegraf](https://docs.influxdata.com/telegraf/v1.18/introduction/installation/). This tool is an agent that helps you get metrics into InfluxDB. For more information, refer to [Telegraf documentation](https://docs.influxdata.com/telegraf/v1.18/).

If you chose to use InfluxDB Cloud, then you should [download and install the InfluxDB Cloud CLI](https://portal.influxdata.com/downloads/). This tool allows you to send command line instructions to your cloud account. For more information, refer to [Influx CLI documentation](https://docs.influxdata.com/influxdb/cloud/write-data/developer-tools/influx-cli/).

#### Get data into InfluxDB

If you downloaded and installed InfluxDB on your local machine, then use the [Quick Start](https://docs.influxdata.com/influxdb/v2.0/write-data/#quick-start-for-influxdb-oss) feature to visualize InfluxDB metrics.

If you are using the cloud account, then the wizards will guide you through the initial process. For more information, refer to [Configure Telegraf](https://docs.influxdata.com/influxdb/cloud/write-data/no-code/use-telegraf/#configure-telegraf).

##### Note for Windows users:

Windows users might need to make additional adjustments. Look for special instructions in the InfluxData documentation and [Using Telegraf on Windows](https://www.influxdata.com/blog/using-telegraf-on-windows/) blog post. The regular system monitoring template in InfluxDB Cloud is not compatible with Windows. Windows users who use InfluxDB Cloud to monitor their system will need to use the [Windows System Monitoring Template](https://github.com/influxdata/community-templates/tree/master/windows_system).

#### Add your InfluxDB data source to Grafana

You can have more than one InfluxDB data source defined in Grafana.

1. Follow the general instructions to [add a data source]().
1. Decide if you will use InfluxQL or Flux as your query language.
   - [Configure the data source]() for your chosen query language. Each query language has its own unique data source settings.
   - For querying features specific to each language, see the data source's [query editor documentation]().

##### InfluxDB guides

InfluxDB publishes guidance for connecting different versions of their product to Grafana.

- **InfluxDB OSS or Enterprise 1.8+.** To turn on Flux, refer to [Configure InfluxDB](https://docs.influxdata.com/influxdb/v1.8/administration/config/#flux-enabled-false). Select your InfluxDB version in the upper right corner.
- **InfluxDB OSS or Enterprise 2.x.** Refer to [Use Grafana with InfluxDB](https://docs.influxdata.com/influxdb/v2.0/tools/grafana/). Select your InfluxDB version in the upper right corner.
- **InfluxDB Cloud.** Refer to [Use Grafana with InfluxDB Cloud](https://docs.influxdata.com/influxdb/cloud/tools/grafana/).

##### Important tips

- Make sure your Grafana token has read access. If it doesn't, then you'll get an authentication error and be unable to connect Grafana to InfluxDB.
- Avoid apostrophes and other non-standard characters in bucket and token names. - If the text name of the organization or bucket doesn't work, then try the ID number. - If you change your bucket name in InfluxDB, then you must also change it in Grafana and your Telegraf .conf file as well. #### Add a query This step varies depending on the query language that you selected when you set up your data source in Grafana. ##### InfluxQL query language In the query editor, click **select measurement**. ![InfluxQL query](/static/img/docs/influxdb/influxql-query-7-5.png) Grafana displays a list of possible series. Click one to select it, and Grafana graphs any available data. If there is no data to display, then try another selection or check your data source. ##### Flux query language Create a simple Flux query. 1. [Add a panel](). 1. In the query editor, select your InfluxDB-Flux data source. For more information, refer to [Queries](). 1. Select the **Table** visualization. 1. In the query editor text field, enter `buckets()` and then click outside of the query editor. This generic query returns a list of buckets. ![Flux query](/static/img/docs/influxdb/flux-query-7-5.png) You can also create Flux queries in the InfluxDB Explore view. 1. In your browser, log in to the InfluxDB native UI (OSS is typically something like http://localhost:8086 or for InfluxDB Cloud use: https://cloud2.influxdata.com). 1. Click **Explore** to open the Data Explorer. 1. The InfluxDB Data Explorer provides two mechanisms for creating Flux queries: a graphical query editor and a script editor. Using the graphical query editor, [create a query](https://docs.influxdata.com/influxdb/cloud/query-data/execute-queries/data-explorer/). It will look something like this: ![InfluxDB Explore query](/static/img/docs/influxdb/influx-explore-query-7-5.png) 1. Click **Script Editor** to view the text of the query, and then copy all the lines of your Flux code, which will look something like this: ![InfluxDB Explore Script Editor](/static/img/docs/influxdb/explore-query-text-7-5.png) 1. In Grafana, [add a panel]() and then paste your Flux code into the query editor. 1. Click **Apply**. Your new panel should be visible with data from your Flux query. #### Check InfluxDB metrics in Grafana Explore In your Grafana instance, go to the [Explore]() view and build queries to experiment with the metrics you want to monitor. Here you can also debug issues related to collecting metrics. #### Start building dashboards There you go! Use Explore and Data Explorer to experiment with your data, and add the queries that you like to your dashboard as panels. Have fun! Here are some resources to learn more: - Grafana documentation: [InfluxDB data source]() - InfluxDB documentation: [Comparison of Flux vs InfluxQL](https://docs.influxdata.com/influxdb/v1.8/flux/flux-vs-influxql/)
grafana getting started
---
aliases:
  - getting-started-influxdb/
description: Learn how to build your first InfluxDB dashboard in Grafana.
labels:
  products:
    - enterprise
    - oss
title: Get started with Grafana and InfluxDB
weight: 400
---

# Get started with Grafana and InfluxDB

#### Get InfluxDB

You can [download InfluxDB](https://portal.influxdata.com/downloads/) and install it locally, or you can sign up for [InfluxDB Cloud](https://www.influxdata.com/products/influxdb-cloud/). Windows installers are not available for some versions of InfluxDB.

#### Install other InfluxDB software

[Install Telegraf](https://docs.influxdata.com/telegraf/v1.18/introduction/installation/). This tool is an agent that helps you get metrics into InfluxDB. For more information, refer to the [Telegraf documentation](https://docs.influxdata.com/telegraf/v1.18/).

If you chose to use InfluxDB Cloud, then you should download and install the [InfluxDB Cloud CLI](https://portal.influxdata.com/downloads/). This tool allows you to send command line instructions to your cloud account. For more information, refer to the [Influx CLI documentation](https://docs.influxdata.com/influxdb/cloud/write-data/developer-tools/influx-cli/).

#### Get data into InfluxDB

If you downloaded and installed InfluxDB on your local machine, then use the [Quick Start](https://docs.influxdata.com/influxdb/v2.0/write-data/#quick-start-for-influxdb-oss) feature to visualize InfluxDB metrics. If you are using the cloud account, then the wizards will guide you through the initial process. For more information, refer to [Configure Telegraf](https://docs.influxdata.com/influxdb/cloud/write-data/no-code/use-telegraf/#configure-telegraf).

**Note for Windows users:** Windows users might need to make additional adjustments. Look for special instructions in the InfluxData documentation and the [Using Telegraf on Windows](https://www.influxdata.com/blog/using-telegraf-on-windows/) blog post. The regular system monitoring template in InfluxDB Cloud is not compatible with Windows. Windows users who use InfluxDB Cloud to monitor their system will need to use the [Windows System Monitoring Template](https://github.com/influxdata/community-templates/tree/master/windows_system).

#### Add your InfluxDB data source to Grafana

You can have more than one InfluxDB data source defined in Grafana.

1. Follow the general instructions to add a data source.
1. Decide if you will use InfluxQL or Flux as your query language.
   - Configure the data source for your chosen query language. Each query language has its own unique data source settings.
   - For querying features specific to each language, see the data source's query editor documentation.

##### InfluxDB guides

InfluxDB publishes guidance for connecting different versions of their product to Grafana.

- **InfluxDB OSS or Enterprise 1.8**. To turn on Flux, refer to [Configure InfluxDB](https://docs.influxdata.com/influxdb/v1.8/administration/config/#flux-enabled-false). Select your InfluxDB version in the upper right corner.
- **InfluxDB OSS or Enterprise 2.x**. Refer to [Use Grafana with InfluxDB](https://docs.influxdata.com/influxdb/v2.0/tools/grafana/). Select your InfluxDB version in the upper right corner.
- **InfluxDB Cloud**. Refer to [Use Grafana with InfluxDB Cloud](https://docs.influxdata.com/influxdb/cloud/tools/grafana/).

##### Important tips

- Make sure your Grafana token has read access. If it doesn't, then you'll get an authentication error and be unable to connect Grafana to InfluxDB.
- Avoid apostrophes and other non-standard characters in bucket and token names.
- If the text name of the organization or bucket doesn't work, then try the ID number.
- If you change your bucket name in InfluxDB, then you must also change it in Grafana and in your Telegraf `.conf` file as well.

#### Add a query

This step varies depending on the query language that you selected when you set up your data source in Grafana.

##### InfluxQL query language

In the query editor, click **select measurement**.

![InfluxQL query](/static/img/docs/influxdb/influxql-query-7-5.png)

Grafana displays a list of possible series. Click one to select it, and Grafana graphs any available data. If there is no data to display, then try another selection or check your data source.

##### Flux query language

Create a simple Flux query:

1. Add a panel.
1. In the query editor, select your InfluxDB-Flux data source. For more information, refer to Queries.
1. Select the **Table** visualization.
1. In the query editor text field, enter `buckets()` and then click outside of the query editor. This generic query returns a list of buckets.

![Flux query](/static/img/docs/influxdb/flux-query-7-5.png)

You can also create Flux queries in the InfluxDB Explore view:

1. In your browser, log in to the InfluxDB native UI (OSS is typically something like http://localhost:8086, or for InfluxDB Cloud use https://cloud2.influxdata.com).
1. Click **Explore** to open the Data Explorer.
1. The InfluxDB Data Explorer provides two mechanisms for creating Flux queries: a graphical query editor and a script editor. Using the graphical query editor, [create a query](https://docs.influxdata.com/influxdb/cloud/query-data/execute-queries/data-explorer/). It will look something like this:

   ![InfluxDB Explore query](/static/img/docs/influxdb/influx-explore-query-7-5.png)

1. Click **Script Editor** to view the text of the query, and then copy all the lines of your Flux code, which will look something like this:

   ![InfluxDB Explore Script Editor](/static/img/docs/influxdb/explore-query-text-7-5.png)

1. In Grafana, add a panel and then paste your Flux code into the query editor.
1. Click **Apply**. Your new panel should be visible with data from your Flux query.

#### Check InfluxDB metrics in Grafana Explore

In your Grafana instance, go to the Explore view and build queries to experiment with the metrics you want to monitor. Here you can also debug issues related to collecting metrics.

#### Start building dashboards

There you go! Use Explore and the Data Explorer to experiment with your data, and add the queries that you like to your dashboard as panels. Have fun!

Here are some resources to learn more:

- Grafana documentation: InfluxDB data source
- InfluxDB documentation: [Comparison of Flux vs InfluxQL](https://docs.influxdata.com/influxdb/v1.8/flux/flux-vs-influxql/)
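If the `buckets()` query returns nothing, or the data source test fails, it can help to confirm the URL, organization, and token outside of Grafana first. The following is a minimal sketch using `curl` against the InfluxDB 2.x Flux query API; the host, organization, and token values are placeholders that you must replace with your own.

```bash
# Run the same `buckets()` Flux query a Grafana panel would run, directly against InfluxDB.
# The values below are placeholders -- substitute your own host, organization, and token.
INFLUX_HOST="http://localhost:8086"   # or your InfluxDB Cloud URL
INFLUX_ORG="my-org"
INFLUX_TOKEN="my-read-token"

curl --request POST "${INFLUX_HOST}/api/v2/query?org=${INFLUX_ORG}" \
  --header "Authorization: Token ${INFLUX_TOKEN}" \
  --header "Content-Type: application/vnd.flux" \
  --header "Accept: application/csv" \
  --data 'buckets()'
```

If this returns CSV rows listing your buckets, the same values should work when you configure the InfluxDB data source in Grafana. If it returns an authentication error, check the token's read access as described in the tips above.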
--- aliases: - ../guides/getting_started/ - ../guides/gettingstarted/ - getting-started-prometheus/ description: Learn how to build your first Prometheus dashboard in Grafana. labels: products: - enterprise - oss title: Get started with Grafana and Prometheus weight: 300 --- # Get started with Grafana and Prometheus Prometheus is an open source monitoring system for which Grafana provides out-of-the-box support. This topic walks you through the steps to create a series of dashboards in Grafana to display system metrics for a server monitored by Prometheus. _Grafana and Prometheus_: 1. Download Prometheus and node_exporter 1. Install Prometheus node_exporter 1. Install and configure Prometheus 1. Configure Prometheus for Grafana 1. Check Prometheus metrics in Grafana Explore view 1. Start building dashboards #### Download Prometheus and node_exporter Download the following components: - [Prometheus](https://prometheus.io/download/#prometheus) - [node_exporter](https://prometheus.io/download/#node_exporter) Like Grafana, you can install Prometheus on many different operating systems. Refer to the [Prometheus download page](https://prometheus.io/download/) to see a list of stable versions of Prometheus components. #### Install Prometheus node_exporter Install node_exporter on all hosts you want to monitor. This guide shows you how to install it locally. Prometheus node_exporter is a widely used tool that exposes system metrics. For instructions on installing node_exporter, refer to the [Installing and running the node_exporter](https://prometheus.io/docs/guides/node-exporter/#installing-and-running-the-node-exporter) section in the Prometheus documentation. When you run node_exporter locally, navigate to `http://localhost:9100/metrics` to check that it is exporting metrics. The instructions in the referenced topic are intended for Linux users. You may have to alter the instructions slightly depending on your operating system. For example, if you are on Windows, use the [windows_exporter](https://github.com/prometheus-community/windows_exporter) instead. #### Install and configure Prometheus 1. After [downloading Prometheus](https://prometheus.io/download/#prometheus), extract it and navigate to the directory. ``` tar xvfz prometheus-*.tar.gz cd prometheus-* ``` 1. Locate the `prometheus.yml` file in the directory. 1. Modify Prometheus's configuration file to monitor the hosts where you installed node_exporter. By default, Prometheus looks for the file `prometheus.yml` in the current working directory. This behavior can be changed via the `--config.file` command line flag. For example, some Prometheus installers use it to set the configuration file to `/etc/prometheus/prometheus.yml`. The following example shows you the code you should add. Notice that static configs targets are set to `['localhost:9100']` to target node-explorer when running it locally. ``` # A scrape configuration containing exactly one endpoint to scrape from node_exporter running on a host: scrape_configs: # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config. - job_name: 'node' # metrics_path defaults to '/metrics' # scheme defaults to 'http'. static_configs: - targets: ['localhost:9100'] ``` 1. Start the Prometheus service: ``` ./prometheus --config.file=./prometheus.yml ``` 1. Confirm that Prometheus is running by navigating to `http://localhost:9090`. You can see that the node_exporter metrics have been delivered to Prometheus. Next, the metrics will be sent to Grafana. 
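Before starting Prometheus with the edited file, it can be worth validating the configuration and confirming that the scrape target responds. This optional sketch assumes you are still in the extracted Prometheus directory and that node_exporter is running locally on port 9100, as described above.

```bash
# Validate the edited configuration file. promtool ships in the same release
# archive as the prometheus binary.
./promtool check config ./prometheus.yml

# Confirm that node_exporter is serving metrics on the configured scrape target.
curl -s http://localhost:9100/metrics | head
```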
#### Configure Prometheus for Grafana When running Prometheus locally, there are two ways to configure Prometheus for Grafana. You can use a hosted Grafana instance at [Grafana Cloud](/) or run Grafana locally. This guide describes configuring Prometheus in a hosted Grafana instance on Grafana Cloud. 1. Sign up for [https://grafana.com/](/auth/sign-up/create-user). Grafana gives you a Prometheus instance out of the box. ![Prometheus details in Grafana.com](/static/img/docs/getting-started/screenshot-grafana-prometheus-details.png) 1. Because you are running your own Prometheus instance locally, you must `remote_write` your metrics to the Grafana.com Prometheus instance. Grafana provides code to add to your `prometheus.yml` config file. This includes a remote write endpoint, your user name and password. Add the following code to your prometheus.yml file to begin sending metrics to your hosted Grafana instance. ``` remote_write: - url: <https://your-remote-write-endpoint> basic_auth: username: <your user name> password: <Your Grafana.com API Key> ``` To configure your Prometheus instance to work with Grafana locally instead of Grafana Cloud, install Grafana [here](/grafana/download) and follow the configuration steps listed [here](/docs/grafana/latest/datasources/prometheus/#configure-the-data-source). #### Check Prometheus metrics in Grafana Explore view In your Grafana instance, go to the [Explore]() view and build queries to experiment with the metrics you want to monitor. Here you can also debug issues related to collecting metrics from Prometheus. #### Start building dashboards Now that you have a curated list of queries, create [dashboards]() to render system metrics monitored by Prometheus. When you install Prometheus and node_exporter or windows_exporter, you will find recommended dashboards for use. The following image shows a dashboard with three panels showing some system metrics. ![Prometheus dashboards](/static/img/docs/getting-started/simple_grafana_prom_dashboard.png) To learn more: - Grafana documentation: [Prometheus data source]() - Prometheus documentation: [What is Prometheus?](https://prometheus.io/docs/introduction/overview/)
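With Prometheus scraping node_exporter and either `remote_write` or a local Grafana in place, you can sanity-check the metrics from the command line before building panels. This is a small sketch against the Prometheus HTTP API, assuming a local instance on the default port.

```bash
# Ask Prometheus whether its scrape targets are up. A series with job="node" and
# value "1" means node_exporter metrics are available for Grafana to query.
curl -s 'http://localhost:9090/api/v1/query?query=up'

# List the metric names Prometheus has collected so far -- handy when deciding
# what to explore in Grafana.
curl -s 'http://localhost:9090/api/v1/label/__name__/values'
```

The same `up` query is a convenient first query to try in Grafana Explore.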
--- aliases: - ../guides/getting_started/ - ../guides/gettingstarted/ - getting-started-sql/ description: Learn how to build your first MS SQL Server dashboard in Grafana. labels: products: - enterprise - oss title: Get started with Grafana and MS SQL Server weight: 500 --- # Get started with Grafana and MS SQL Server Microsoft SQL Server is a popular relational database management system that is widely used in development and production environments. This topic walks you through the steps to create a series of dashboards in Grafana to display metrics from a MS SQL Server database. #### Download MS SQL Server MS SQL Server can be installed on Windows or Linux operating systems and also on Docker containers. Refer to the [MS SQL Server downloads page](https://www.microsoft.com/en-us/sql-server/sql-server-downloads), for a complete list of all available options. #### Install MS SQL Server You can install MS SQL Server on the host running Grafana or on a remote server. To install the software from the [downloads page](https://www.microsoft.com/en-us/sql-server/sql-server-downloads), follow their setup prompts. If you are on a Windows host but want to use Grafana and MS SQL data source on a Linux environment, refer to the [WSL to set up your Grafana development environment](/blog/2021/03/03/.how-to-set-up-a-grafana-development-environment-on-a-windows-pc-using-wsl). This will allow you to leverage the resources available in [grafana/grafana](https://github.com/grafana/grafana) GitHub repository. Here you will find a collection of supported data sources, including MS SQL Server, along with test data and pre-configured dashboards for use. #### Add the MS SQL data source There are several ways to authenticate in MSSQL. Start by: 1. Click **Connections** in the left-side menu and filter by `mssql`. 1. Select the **Microsoft SQL Server** option. 1. Click **Create a Microsoft SQL Server data source** in the top right corner to open the configuration page. 1. Select the desired authentication method and fill in the right information as detailed below. 1. Click **Save & test**. ##### General configuration | Name | Description | | ---------- | --------------------------------------------------------------------------------------------------------------------- | | `Name` | The data source name. This is how you refer to the data source in panels and queries. | | `Host` | The IP address/hostname and optional port of your MS SQL instance. If port is omitted, the default 1433 will be used. | | `Database` | Name of your MS SQL database. | ##### SQL Server Authentication | Name | Description | | ---------- | ------------------------------- | | `User` | Database user's login/username. | | `Password` | Database user's password. | ##### Windows Active Directory (Kerberos) Below are the four possible ways to authenticate via Windows Active Directory/Kerberos. Windows Active Directory (Kerberos) authentication is not supported in Grafana Cloud at the moment. | Method | Description | | ------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ | | **Username + password** | Enter the domain user and password | | **Keytab file** | Specify the path to a valid keytab file to use that for authentication. | | **Credential cache** | Log in on the host via `kinit` and pass the path to the credential cache. The cache path can be found by running `klist` on the host in question. 
| | **Credential cache file** | This option allows multiple valid configurations to be present and matching is performed on host, database, and user. See the example JSON below this table. | ```json [ { "user": "[email protected]", "database": "dbone", "address": "mysql1.mydomain.com:3306", "credentialCache": "/tmp/krb5cc_1000" }, { "user": "[email protected]", "database": "dbtwo", "address": "mysql2.gf.lab", "credentialCache": "/tmp/krb5cc_1000" } ] ``` For installations from the [grafana/grafana](https://github.com/grafana/grafana/tree/main) repository, `gdev-mssql` data source is available. Once you add this data source, you can use the `Datasource tests - MSSQL` dashboard with three panels showing metrics generated from a test database. ![MS SQL Server dashboard](/static/img/docs/getting-started/gdev-sql-dashboard.png) Optionally, play around this dashboard and customize it to: - Create different panels. - Change titles for panels. - Change frequency of data polling. - Change the period for which the data is displayed. - Rearrange and resize panels. #### Start building dashboards Now that you have gained some idea of using the pre-packaged MS SQL data source and some test data, the next step is to setup your own instance of MS SQL Server database and data your development or sandbox area. To fetch data from your own instance of MS SQL Server, add the data source using instructions in Step 4 of this topic. In Grafana [Explore]() build queries to experiment with the metrics you want to monitor. Once you have a curated list of queries, create [dashboards]() to render metrics from the SQL Server database. For troubleshooting, user permissions, known issues, and query examples, refer to [Using Microsoft SQL Server in Grafana]().
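Before adding the data source, you may also want to confirm that the host, port, and credentials you plan to enter actually work. The following is a sketch using the `sqlcmd` client; the server name, user, password, and database are placeholders for your own values.

```bash
# Connect to the MS SQL Server instance that Grafana will use and run a trivial query.
# Replace the placeholders with your own host, credentials, and database name.
sqlcmd -S myserver.example.com,1433 -U grafana_reader -P 'my-password' \
  -d grafanadb -Q "SELECT @@VERSION;"
```

If this query succeeds with the same values you enter on the data source configuration page, then authentication problems in Grafana are unlikely to be caused by the database itself.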
--- description: Learn how to set up Grafana HTTPS for secure web traffic. keywords: - grafana - https - ssl - certificates labels: products: - enterprise - oss menuTitle: Set up HTTPS title: Set up Grafana HTTPS for secure web traffic weight: 900 --- # Set up Grafana HTTPS for secure web traffic When accessing the Grafana UI through the web, it is important to set up HTTPS to ensure the communication between Grafana and the end user is encrypted, including login credentials and retrieved metric data. In order to ensure secure traffic over the internet, Grafana must have a key for encryption and a [Secure Socket Layer (SSL) Certificate](https://www.kaspersky.com/resource-center/definitions/what-is-a-ssl-certificate) to verify the identity of the site. The following image shows a browser lock icon which confirms the connection is safe. This topic shows you how to: 1. Obtain a certificate and key 2. Configure Grafana HTTPS 3. Restart the Grafana server ## Before you begin To follow these instructions, you need: - You must have shell access to the system and `sudo` access to perform actions as root or administrator. - For the CA-signed option, you need a domain name that you possess and that is associated with the machine you are using. ## Obtain a certificate and key You can use one of two methods to obtain a certificate and a key. The faster and easier _self-signed_ option might show browser warnings to the user that they will have to accept each time they visit the site. Alternatively, the Certificate Authority (CA) signed option requires more steps to complete, but it enables full trust with the browser. To learn more about the difference between these options, refer to [Difference between self-signed CA and self-signed certificate](https://www.baeldung.com/cs/self-signed-ca-vs-certificate). ### Generate a self-signed certificate This section shows you how to use `openssl` tooling to generate all necessary files from the command line. 1. Run the following command to generate a 2048-bit RSA private key, which is used to decrypt traffic: ```bash $ sudo openssl genrsa -out /etc/grafana/grafana.key 2048 ``` 1. Run the following command to generate a certificate, using the private key from the previous step. ```bash $ sudo openssl req -new -key /etc/grafana/grafana.key -out /etc/grafana/grafana.csr ``` When prompted, answer the questions, which might include your fully-qualified domain name, email address, country code, and others. The following example is similar to the prompts you will see. ``` You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [AU]:US State or Province Name (full name) [Some-State]:Virginia Locality Name (eg, city) []:Richmond Organization Name (eg, company) [Internet Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (e.g. server FQDN or YOUR name) []:subdomain.mysite.com Email Address []:[email protected] Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []: ``` 1. 
Run the following command to self-sign the certificate with the private key, for a period of validity of 365 days: ```bash sudo openssl x509 -req -days 365 -in /etc/grafana/grafana.csr -signkey /etc/grafana/grafana.key -out /etc/grafana/grafana.crt ``` 1. Run the following commands to set the appropriate permissions for the files: ```bash sudo chown grafana:grafana /etc/grafana/grafana.crt sudo chown grafana:grafana /etc/grafana/grafana.key sudo chmod 400 /etc/grafana/grafana.key /etc/grafana/grafana.crt ``` **Note**: When using these files, browsers might provide warnings for the resulting website because a third-party source does not trust the certificate. Browsers will show trust warnings; however, the connection will remain encrypted. The following image shows an insecure HTTP connection. ### Obtain a signed certificate from LetsEncrypt [LetsEncrypt](https://letsencrypt.org/) is a nonprofit certificate authority that provides certificates without any charge. For signed certificates, there are multiple companies and certificate authorities (CAs) available. The principles for generating the certificates might vary slightly in accordance with the provider but will generally remain the same. The examples in this section use LetsEncrypt because it is free. The instructions provided in this section are for a Debian-based Linux system. For other distributions and operating systems, please refer to the [certbot instructions](https://certbot.eff.org/instructions). Also, these instructions require you to have a domain name that you are in control of. Dynamic domain names like those from Amazon EC2 or DynDNS providers will not function. #### Install `snapd` and `certbot` `certbot` is an open-source program used to manage LetsEncrypt certificates, and `snapd` is a tool that assists in running `certbot` and installing the certificates. 1. To install `snapd`, run the following commands: ```bash sudo apt-get install snapd sudo snap install core; sudo snap refresh core ``` 1. Run the following commands to install: ```bash sudo apt-get remove certbot sudo snap install --classic certbot sudo ln -s /snap/bin/certbot /usr/bin/certbot ``` These commands: - Uninstall `certbot` from your system if it has been installed using a package manager - Install `certbot` using `snapd` #### Generate certificates using `certbot` The `sudo certbot certonly --standalone` command prompts you to answer questions before it generates a certificate. This process temporarily opens a service on port `80` that LetsEncrypt uses to verify communication with your host. To generate certificates using `certbot`, complete the following steps: 1. Ensure that port `80` traffic is permitted by applicable firewall rules. 1. Run the following command to generate certificates: ```bash $ sudo certbot certonly --standalone Saving debug log to /var/log/letsencrypt/letsencrypt.log Enter email address (used for urgent renewal and security notices) (Enter 'c' to cancel): [email protected] - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Please read the Terms of Service at https://letsencrypt.org/documents/LE-SA-v1.3-September-21-2022.pdf. You must agree in order to register with the ACME server. Do you agree? 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - (Y)es/(N)o: y - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Would you be willing, once your first certificate is successfully issued, to share your email address with the Electronic Frontier Foundation, a founding partner of the Let’s Encrypt project and the non-profit organization that develops Certbot? We’d like to send you email about our work encrypting the web, EFF news, campaigns, and ways to support digital freedom. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - (Y)es/(N)o: n Account registered. Please enter the domain name(s) you would like on your certificate (comma and/or space separated) (Enter 'c' to cancel): subdomain.mysite.com Requesting a certificate for subdomain.mysite.com Successfully received certificate. Certificate is saved at: /etc/letsencrypt/live/subdomain.mysite.com/fullchain.pem Key is saved at: /etc/letsencrypt/live/subdomain.mysite.com/privkey.pem This certificate expires on 2023-06-20. These files will be updated when the certificate renews. Certbot has set up a scheduled task to automatically renew this certificate in the background. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - If you like Certbot, please consider supporting our work by: * Donating to ISRG / Let’s Encrypt: https://letsencrypt.org/donate * Donating to EFF: https://eff.org/donate-le - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ``` #### Set up symlinks to Grafana Symbolic links, also known as symlinks, enable you to create pointers to existing LetsEncrypt files in the `/etc/grafana` directory. By using symlinks rather than copying files, you can use `certbot` to refresh or request updated certificates from LetsEncrypt without the need to reconfigure the Grafana settings. To set up symlinks to Grafana, run the following commands: ```bash $ sudo ln -s /etc/letsencrypt/live/subdomain.mysite.com/privkey.pem /etc/grafana/grafana.key $ sudo ln -s /etc/letsencrypt/live/subdomain.mysite.com/fullchain.pem /etc/grafana/grafana.crt ``` #### Adjust permissions Grafana usually runs under the `grafana` Linux group, and you must ensure that the Grafana server process has permission to read the relevant files. Without read access, the HTTPS server fails to start properly. To adjust permissions, perform the following steps: 1. Run the following commands to set the appropriate permissions and groups for the files: ```bash $ sudo chgrp -R grafana /etc/letsencrypt/* $ sudo chmod -R g+rx /etc/letsencrypt/* $ sudo chgrp -R grafana /etc/grafana/grafana.crt /etc/grafana/grafana.key $ sudo chmod 400 /etc/grafana/grafana.crt /etc/grafana/grafana.key ``` 1. Run the following command to verify that the `grafana` group can read the symlinks: ```bash $ $ ls -l /etc/grafana/grafana.* lrwxrwxrwx 1 root grafana 67 Mar 22 14:15 /etc/grafana/grafana.crt -> /etc/letsencrypt/live/subdomain.mysite.com/fullchain.pem -rw-r----- 1 root grafana 54554 Mar 22 14:13 /etc/grafana/grafana.ini lrwxrwxrwx 1 root grafana 65 Mar 22 14:15 /etc/grafana/grafana.key -> /etc/letsencrypt/live/subdomain.mysite.com/privkey.pem ``` ## Configure Grafana HTTPS and restart Grafana In this section you edit the `grafana.ini` file so that it includes the certificate you created. If you need help identifying where to find this file, or what each key means, refer to [Configuration file location](). 
To configure Grafana HTTPS and restart Grafana, complete the following steps. 1. Open the `grafana.ini` file and edit the following configuration parameters: ``` [server] http_addr = http_port = 3000 domain = mysite.com root_url = https://subdomain.mysite.com:3000 cert_key = /etc/grafana/grafana.key cert_file = /etc/grafana/grafana.crt enforce_domain = False protocol = https ``` > **Note**: The standard port for SSL traffic is 443, which you can use instead of Grafana's default port 3000. This change might require additional operating system privileges or configuration to bind to lower-numbered privileged ports. 1. [Restart the Grafana server]() using `systemd`, `init.d`, or the binary as appropriate for your environment. ## Troubleshooting Refer to the following troubleshooting tips as required. ### Failure to obtain a certificate The following reasons explain why the `certbot` process might fail: - To make sure you can get a certificate from LetsEncrypt, you need to ensure that port 80 is open so that LetsEncrypt can communicate with your machine. If port 80 is blocked or firewall is enabled, the exchange will fail and you won't be able to receive a certificate. - LetsEncrypt requires proof that you control the domain, so attempts to obtain certificates for domains you do not control might be rejected. ### Grafana starts, but HTTPS is unavailable When you configure HTTPS, the following errors might appear in Grafana's logs. #### Permission denied ``` level=error msg="Stopped background service" service=*api.HTTPServer reason="open /etc/grafana/grafana.crt: permission denied" ``` ##### Resolution To ensure secure HTTPS setup, it is essential that the cryptographic keys and certificates are as restricted as possible. However, if the file permissions are too restricted, the Grafana process may not have access to the necessary files, thus impeding a successful HTTPS setup. Please re-examine the listed instructions to double check the file permissions and try again. #### Cannot assign requested address ``` listen tcp 34.148.30.243:3000: bind: cannot assign requested address ``` ##### Resolution Check the config to ensure the `http_addr` is left blank, allowing Grafana to bind to all interfaces. If you have set `http_addr` to a specific subdomain, such as `subdomain.mysite.com`, this might prevent the Grafana process from binding to an external address, due to network address translation layers being present.
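After restarting Grafana, you can verify the certificate and the HTTPS listener from the shell. This sketch assumes the file paths and the `subdomain.mysite.com` placeholder domain used throughout this topic.

```bash
# Inspect the certificate Grafana is configured to serve: subject and validity dates.
sudo openssl x509 -in /etc/grafana/grafana.crt -noout -subject -dates

# Check that Grafana answers over TLS. Keep -k (skip verification) only for the
# self-signed option; a LetsEncrypt certificate should validate without it.
curl -kI https://subdomain.mysite.com:3000/login

# Optionally inspect the certificate chain presented during the TLS handshake.
openssl s_client -connect subdomain.mysite.com:3000 -servername subdomain.mysite.com </dev/null
```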
--- aliases: - ../installation/restart-grafana/ - ./restart-grafana/ description: How to start the Grafana server labels: products: - enterprise - oss menuTitle: Start Grafana title: Start the Grafana server weight: 300 --- # Start the Grafana server This topic includes instructions for starting the Grafana server. For certain configuration changes, you might have to restart the Grafana server for them to take effect. The following instructions start the `grafana-server` process as the `grafana` user, which was created during the package installation. If you installed with the APT repository or `.deb` package, then you can start the server using `systemd` or `init.d`. If you installed a binary `.tar.gz` file, then you execute the binary. ## Linux The following subsections describe three methods of starting and restarting the Grafana server: with systemd, initd, or by directly running the binary. You should follow only one set of instructions, depending on how your machine is configured. ### Start the Grafana server with systemd Complete the following steps to start the Grafana server using systemd and verify that it is running. 1. To start the service, run the following commands: ```bash sudo systemctl daemon-reload sudo systemctl start grafana-server ``` 1. To verify that the service is running, run the following command: ```bash sudo systemctl status grafana-server ``` ### Configure the Grafana server to start at boot using systemd To configure the Grafana server to start at boot, run the following command: ```bash sudo systemctl enable grafana-server.service ``` #### Serve Grafana on a port < 1024 ### Restart the Grafana server using systemd To restart the Grafana server, run the following command: ```bash sudo systemctl restart grafana-server ``` SUSE or openSUSE users might need to start the server with the systemd method, then use the init.d method to configure Grafana to start at boot. ### Start the Grafana server using init.d Complete the following steps to start the Grafana server using init.d and verify that it is running: 1. To start the Grafana server, run the following command: ```bash sudo service grafana-server start ``` 1. To verify that the service is running, run the following command: ```bash sudo service grafana-server status ``` ### Configure the Grafana server to start at boot using init.d To configure the Grafana server to start at boot, run the following command: ```bash sudo update-rc.d grafana-server defaults ``` #### Restart the Grafana server using init.d To restart the Grafana server, run the following command: ```bash sudo service grafana-server restart ``` ### Start the server using the binary The `grafana` binary .tar.gz needs the working directory to be the root install directory where the binary and the `public` folder are located. To start the Grafana server, run the following command: ```bash ./bin/grafana server ``` ## Docker To restart the Grafana service, use the `docker restart` command. `docker restart grafana` Alternatively, you can use the `docker compose restart` command to restart Grafana. For more information, refer to [docker compose documentation](https://docs.docker.com/compose/). ### Docker compose example Configure your `docker-compose.yml` file. 
For example: ```yml version: '3.8' services: grafana: image: grafana/grafana:latest container_name: grafana restart: unless-stopped environment: - TERM=linux - GF_INSTALL_PLUGINS=grafana-clock-panel,grafana-polystat-panel ports: - '3000:3000' volumes: - 'grafana_storage:/var/lib/grafana' volumes: grafana_storage: {} ``` Start the Grafana server: `docker compose up -d` This starts the Grafana server container in detached mode along with the two plugins specified in the YAML file. To restart the running container, use this command: `docker compose restart grafana` ## Windows Complete the following steps to start the Grafana server on Windows: 1. Execute `grafana.exe server`; the `grafana` binary is located in the `bin` directory. We recommend that you run `grafana.exe server` from the command line. If you want to run Grafana as a Windows service, you can download [NSSM](https://nssm.cc/). 1. To run Grafana, open your browser and go to the Grafana port (http://localhost:3000/ is default). > **Note:** The default Grafana port is `3000`. This port might require extra permissions on Windows. If it does not appear in the default port, you can try changing to a different port. 1. To change the port, complete the following steps: a. In the `conf` directory, copy `sample.ini` to `custom.ini`. > **Note:** You should edit `custom.ini`, never `defaults.ini`. b. Edit `custom.ini` and uncomment the `http_port` configuration option (`;` is the comment character in ini files) and change it to something similar to `8080`, which should not require extra Windows privileges. To restart the Grafana server, complete the following steps: 1. Open the **Services** app. 1. Right-click on the **Grafana** service. 1. In the context menu, click **Restart**. ## macOS Restart methods differ depending on whether you installed Grafana using Homebrew or as standalone macOS binaries. ### Start Grafana using Homebrew To start Grafana using [Homebrew](http://brew.sh/), run the following start command: ```bash brew services start grafana ``` ### Restart Grafana using Homebrew Use the [Homebrew](http://brew.sh/) restart command: ```bash brew services restart grafana ``` ### Restart standalone macOS binaries To restart Grafana: 1. Open a terminal and go to the directory where you copied the install setup files. 1. Run the command: ```bash ./bin/grafana server ``` ## Next steps After the Grafana server is up and running, consider taking the next steps: - Refer to [Get Started]() to learn how to build your first dashboard. - Refer to [Configuration]() to learn about how you can customize your environment.
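Whichever start method you use, a quick way to confirm that the server came back up is Grafana's health endpoint. This assumes the default port of 3000; adjust it if you changed the configuration.

```bash
# Returns a small JSON document when the server is up and can reach its database.
curl -s http://localhost:3000/api/health

# For systemd installations, follow the server log while it starts.
sudo journalctl -u grafana-server --since "5 minutes ago"
```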
grafana setup
---
aliases:
  - ../installation/restart-grafana/
  - restart-grafana/
description: How to start the Grafana server
labels:
  products:
    - enterprise
    - oss
menuTitle: Start Grafana
title: Start the Grafana server
weight: 300
---

# Start the Grafana server

This topic includes instructions for starting the Grafana server. For certain configuration changes, you might have to restart the Grafana server for them to take effect.

The following instructions start the `grafana-server` process as the `grafana` user, which was created during the package installation.

If you installed with the APT repository or `.deb` package, then you can start the server using systemd or init.d. If you installed a binary `.tar.gz` file, then you execute the binary.

## Linux

The following subsections describe three methods of starting and restarting the Grafana server: with systemd, init.d, or by directly running the binary. You should follow only one set of instructions, depending on how your machine is configured.

### Start the Grafana server with systemd

Complete the following steps to start the Grafana server using systemd and verify that it is running:

1. To start the service, run the following commands:

   ```bash
   sudo systemctl daemon-reload
   sudo systemctl start grafana-server
   ```

1. To verify that the service is running, run the following command:

   ```bash
   sudo systemctl status grafana-server
   ```

### Configure the Grafana server to start at boot using systemd

To configure the Grafana server to start at boot, run the following command:

```bash
sudo systemctl enable grafana-server.service
```

#### Serve Grafana on a port < 1024

### Restart the Grafana server using systemd

To restart the Grafana server, run the following command:

```bash
sudo systemctl restart grafana-server
```

> **Note:** SUSE or openSUSE users might need to start the server with the systemd method, then use the init.d method to configure Grafana to start at boot.

### Start the Grafana server using init.d

Complete the following steps to start the Grafana server using init.d and verify that it is running:

1. To start the Grafana server, run the following command:

   ```bash
   sudo service grafana-server start
   ```

1. To verify that the service is running, run the following command:

   ```bash
   sudo service grafana-server status
   ```

### Configure the Grafana server to start at boot using init.d

To configure the Grafana server to start at boot, run the following command:

```bash
sudo update-rc.d grafana-server defaults
```

### Restart the Grafana server using init.d

To restart the Grafana server, run the following command:

```bash
sudo service grafana-server restart
```

### Start the server using the binary

The `grafana` binary `.tar.gz` needs the working directory to be the root install directory, where the binary and the `public` folder are located.

To start the Grafana server, run the following command:

```bash
./bin/grafana server
```

## Docker

To restart the Grafana service, use the `docker restart` command:

```bash
docker restart grafana
```

Alternatively, you can use the `docker compose restart` command to restart Grafana. For more information, refer to the [docker compose documentation](https://docs.docker.com/compose/).

### Docker compose example

Configure your `docker-compose.yml` file. For example:

```yml
version: '3.8'
services:
  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    restart: unless-stopped
    environment:
      - TERM=linux
      - GF_INSTALL_PLUGINS=grafana-clock-panel,grafana-polystat-panel
    ports:
      - '3000:3000'
    volumes:
      - grafana-storage:/var/lib/grafana
volumes:
  grafana-storage:
```

Start the Grafana server:

```bash
docker compose up -d
```

This starts the Grafana server container in detached mode, along with the two plugins specified in the YAML file.

To restart the running container, use this command:

```bash
docker compose restart grafana
```

## Windows

Complete the following steps to start the Grafana server on Windows:

1. Execute `grafana.exe server`; the `grafana` binary is located in the `bin` directory.

   We recommend that you run `grafana.exe server` from the command line.

   If you want to run Grafana as a Windows service, you can download [NSSM](https://nssm.cc/).

1. To run Grafana, open your browser and go to the Grafana port (http://localhost:3000/ is the default).

   > **Note:** The default Grafana port is 3000. This port might require extra permissions on Windows. If Grafana does not appear on the default port, you can try changing to a different port.

1. To change the port, complete the following steps:

   a. In the `conf` directory, copy `sample.ini` to `custom.ini`.

      > **Note:** You should edit `custom.ini`, never `defaults.ini`.

   b. Edit `custom.ini`, uncomment the `http_port` configuration option (`;` is the comment character in ini files), and change it to something similar to `8080`, which should not require extra Windows privileges.

To restart the Grafana server, complete the following steps:

1. Open the Services app.
1. Right-click on the **Grafana** service.
1. In the context menu, click **Restart**.

## macOS

Restart methods differ depending on whether you installed Grafana using Homebrew or as standalone macOS binaries.

### Start Grafana using Homebrew

To start Grafana using [Homebrew](http://brew.sh/), run the following start command:

```bash
brew services start grafana
```

### Restart Grafana using Homebrew

Use the [Homebrew](http://brew.sh/) restart command:

```bash
brew services restart grafana
```

### Restart standalone macOS binaries

To restart Grafana:

1. Open a terminal and go to the directory where you copied the install setup files.
1. Run the command:

   ```bash
   ./bin/grafana server
   ```

## Next steps

After the Grafana server is up and running, consider taking the next steps:

- Refer to [Get Started]() to learn how to build your first dashboard.
- Refer to [Configuration]() to learn about how you can customize your environment.
---
aliases:
  - ../live/
  - ../live/configure-grafana-live/
  - ../live/live-channel/
  - ../live/live-feature-overview/
  - ../live/live-ha-setup/
  - ../live/set-up-grafana-live/
description: Grafana Live is a real-time messaging engine that pushes event data to a frontend when an event occurs.
labels:
  products:
    - enterprise
    - oss
menuTitle: Set up Grafana Live
title: Set up Grafana Live
weight: 1100
---

# Set up Grafana Live

Grafana Live is a real-time messaging engine you can use to push event data to a frontend as soon as an event occurs. This could be notifications about dashboard changes, new frames for rendered data, and so on. Live features can help eliminate page reloads or polling in many places, and they can stream Internet of Things (IoT) sensor data or any other real-time data to panels.

By `real-time`, we mean soft real-time. Due to network latencies, garbage collection cycles, and so on, the delay of a delivered message can be up to several hundred milliseconds or higher.

## Concepts

Grafana Live sends data to clients over a persistent WebSocket connection. The Grafana frontend subscribes to channels to receive the data published into those channels – in other words, PUB/SUB mechanics are used. All subscriptions on a page are multiplexed inside a single WebSocket connection. There are some rules regarding Live channel names – see [Grafana Live channel]().

Handling persistent connections like WebSocket at scale may require operating system and infrastructure tuning. That's why, by default, Grafana Live supports a maximum of 100 simultaneous connections. For more details on how to tune this limit, refer to the [Live configuration section]().

## Features

Having a way to send data to clients in real time opens a road for new ways of data interaction and visualization. The following sections describe the Grafana Live features supported at the moment.

### Dashboard change notifications

As soon as there is a change to the dashboard layout, it is automatically reflected on other devices connected to Grafana Live.

### Data streaming from plugins

With Grafana Live, backend data source plugins can stream updates to frontend panels.

For data source plugin channels, Grafana uses the `ds` scope. The namespace in the case of data source channels is a data source unique ID (UID), which is issued by Grafana at the moment of data source creation. The path is a custom string that plugin authors are free to choose themselves (just make sure it consists of allowed symbols).

For example, a data source channel looks like this: `ds/<DATASOURCE_UID>/<CUSTOM_PATH>`.

Refer to the tutorial about [building a streaming data source backend plugin](/tutorials/build-a-streaming-data-source-plugin/) for more details.

The basic streaming example included in Grafana core streams frames with some generated data to a panel. To look at it, create a new panel and point it to the `-- Grafana --` data source. Next, choose `Live Measurements` and select the `plugin/testdata/random-20Hz-stream` channel.

### Data streaming from Telegraf

A new API endpoint, `/api/live/push/:streamId`, accepts metrics data in Influx format from Telegraf. These metrics are transformed into Grafana data frames and published to channels.

Refer to the tutorial about [streaming metrics from Telegraf to Grafana](/tutorials/stream-metrics-from-telegraf-to-grafana/) for more information.

## Grafana Live channel

Grafana Live is a PUB/SUB server: clients subscribe to channels to receive real-time updates published to those channels.

### Channel structure

A channel is a string identifier.
In Grafana, a channel consists of 3 parts delimited by `/`:

- Scope
- Namespace
- Path

For example, the channel `grafana/dashboard/xyz` has the scope `grafana`, namespace `dashboard`, and path `xyz`.

Scope, namespace, and path can only contain ASCII alphanumeric symbols (A-Z, a-z, 0-9), `_` (underscore), and `-` (dash) at the moment. The path part can additionally contain `/`, `.`, and `=` symbols. The meaning of scope, namespace, and path is context-specific. The maximum length of a channel is 160 symbols.

Scope determines the purpose of a channel in Grafana. For example, for data source plugin channels Grafana uses the `ds` scope. For built-in features like dashboard edit notifications Grafana uses the `grafana` scope.

Namespace has a different meaning depending on scope. For example, for the `grafana` scope this could be the name of a built-in real-time feature like `dashboard` (that is, dashboard events).

The path, which is the final part of a channel, usually contains the identifier of some concrete resource, such as the ID of a dashboard that a user is currently looking at. But a path can be anything.

Channels are lightweight and ephemeral: they are created automatically on user subscription and removed as soon as the last user leaves a channel.

### Data format

All data travelling over Live channels must be JSON-encoded.

## Configure Grafana Live

Grafana Live is enabled by default. In Grafana v8.0, it has a strict default for the maximum number of connections per Grafana server instance.

### Max number of connections

Grafana Live uses persistent connections (WebSocket at the moment) to deliver real-time updates to clients.

WebSocket is a persistent connection that starts with an HTTP Upgrade request (using the same HTTP port as the rest of Grafana) and then switches to a TCP mode where WebSocket frames can travel in both directions between a client and a server.

Each logged-in user opens a WebSocket connection – one per browser tab.

The maximum number of WebSocket connections users can establish with Grafana is limited to 100 by default. See the [max_connections]() option.

If you want to increase this limit, ensure that your server and infrastructure allow handling more connections. The following sections discuss several common problems which can happen when managing persistent connections, in particular WebSocket connections.

### Request origin check

To avoid hijacking of the WebSocket connection, Grafana Live checks the Origin request header sent by a client in the HTTP Upgrade request. Requests without an Origin header pass through without any origin check.

By default, Live accepts connections whose Origin header matches the configured [root_url]() (which is the public Grafana URL). It is possible to provide a list of additional origin patterns from which to allow WebSocket connections. This can be achieved using the [allowed_origins]() option of the Grafana Live configuration.

#### Resource usage

Each persistent connection costs some memory on a server. Typically, this should be about 50 KB per connection at this moment. Thus a server with 1 GB RAM is expected to handle about 20k connections at most.

Each active connection consumes additional CPU resources, since the client and server send PING/PONG frames to each other to maintain the connection.

Using the streaming functionality results in additional CPU usage. The exact CPU resource utilization can be hard to estimate, as it heavily depends on the Grafana Live usage pattern.

#### Open file limit

Each WebSocket connection costs a file descriptor on the server machine where Grafana runs.
Most operating systems have a quite low default limit for the maximum number of descriptors that a process can open.

To look at the current limit on Unix, run:

```
ulimit -n
```

On a Linux system, you can also check the current limits for a running process with:

```
cat /proc/<PROCESS_PID>/limits
```

The open files limit shows approximately how many user connections your server can currently handle.

To increase this limit, refer to [these instructions](https://docs.riak.com/riak/kv/2.2.3/using/performance/open-files-limit.1.html) for popular operating systems. For systemd-managed installations, see the unit override sketch near the end of this page.

#### Ephemeral port exhaustion

The ephemeral port exhaustion problem can happen between your load balancer (or reverse proxy) software and the Grafana server, for example, when you load balance requests/connections between different Grafana instances. If you connect directly to a single Grafana server instance, then you should not come across this issue.

The problem arises because each TCP connection is uniquely identified in the OS by the 4-part tuple:

```
source ip | source port | destination ip | destination port
```

By default, on the load balancer/server boundary you are limited to 65535 possible variants. But actually, due to some OS limits (for example, on Unix the available ports are defined in the `ip_local_port_range` sysctl parameter) and sockets in the TIME_WAIT state, the number is even less.

To eliminate the problem, you can:

- Increase the ephemeral port range by tuning the `ip_local_port_range` kernel option.
- Deploy more Grafana server instances to load balance across.
- Deploy more load balancer instances.
- Use virtual network interfaces.

#### WebSocket and proxies

Not all proxies can transparently proxy WebSocket connections by default. For example, if you are using Nginx in front of Grafana, you need to configure the WebSocket proxy like this:

```
http {
  map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
  }

  upstream grafana {
    server 127.0.0.1:3000;
  }

  server {
    listen 8000;

    location / {
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection $connection_upgrade;
      proxy_set_header Host $http_host;
      proxy_pass http://grafana;
    }
  }
}
```

See the [Nginx blog on their website](https://www.nginx.com/blog/websocket-nginx/) for more information. Also, refer to your load balancer/reverse proxy documentation to find out more information on dealing with WebSocket connections.

Some corporate proxies can remove headers required to properly establish a WebSocket connection. In this case, you should tune intermediate proxies to not remove the required headers. However, the better option is to use Grafana with TLS: the WebSocket connection then inherits TLS and should be handled transparently by proxies.

Proxies like Nginx and Envoy have default limits on the maximum number of connections which can be established. Make sure you have a reasonable limit for the maximum number of incoming and outgoing connections in your proxy configuration.

## Configure Grafana Live HA setup

By default, Grafana Live uses in-memory data structures and an in-memory PUB/SUB hub for handling subscriptions.

In a high availability Grafana setup involving several Grafana server instances behind a load balancer, you can find the following limitations:

- Built-in features like dashboard change notifications will only be broadcast to users connected to the same Grafana server process instance.
- Streaming from Telegraf will deliver data only to clients connected to the same instance that received the Telegraf data; the active stream cache is not shared between different Grafana instances.
- A separate unidirectional stream between Grafana and a backend data source may be opened on different Grafana servers for the same channel.

To bypass these limitations, Grafana v8.1 has an experimental Live HA engine that requires Redis to work.

### Configure Redis Live engine

When the Redis engine is configured, Grafana Live keeps its state in Redis and uses Redis PUB/SUB functionality to deliver messages to all subscribers across all Grafana server nodes.

Here is an example configuration:

```
[live]
ha_engine = redis
ha_engine_address = 127.0.0.1:6379
```

For additional information, refer to the [ha_engine]() and [ha_engine_address]() options.

After running:

- All built-in real-time notifications, like dashboard changes, are delivered to all Grafana server instances and broadcast to all subscribers.
- Streaming from Telegraf delivers messages to all subscribers.
- A separate unidirectional stream between Grafana and a backend data source opens on different Grafana servers. Publishing data to a channel delivers messages to the subscribers on each instance; as a result, publications from different instances on different machines do not produce duplicate data on panels.

At the moment, only a single Redis node is supported.

> **Note:** It's possible to use Redis Sentinel and Haproxy to achieve a highly available Redis setup. Redis nodes should be managed by [Redis Sentinel](https://redis.io/topics/sentinel) to achieve automatic failover. Haproxy configuration example:
>
> ```
> listen redis
> server redis-01 127.0.0.1:6380 check port 6380 check inter 2s weight 1 inter 2s downinter 5s rise 10 fall 2 on-marked-down shutdown-sessions on-marked-up shutdown-backup-sessions
> server redis-02 127.0.0.1:6381 check port 6381 check inter 2s weight 1 inter 2s downinter 5s rise 10 fall 2 backup
> bind *:6379
> mode tcp
> option tcpka
> option tcplog
> option tcp-check
> tcp-check send PING\r\n
> tcp-check expect string +PONG
> tcp-check send info\ replication\r\n
> tcp-check expect string role:master
> tcp-check send QUIT\r\n
> tcp-check expect string +OK
> balance roundrobin
> ```
>
> Next, point Grafana Live to the Haproxy address and port.
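If you run Grafana in a container, the same two `[live]` options can also be set through environment variables, following the `GF_<SectionName>_<KeyName>` convention used elsewhere in this documentation. This is a minimal sketch, not a full HA deployment; `<REDIS_HOST>` is a placeholder for a Redis address reachable from the container:

```bash
# Sketch: enable the Redis Live HA engine through environment variables,
# mapping the [live] ha_engine and ha_engine_address options shown above.
docker run -d -p 3000:3000 --name=grafana \
  -e "GF_LIVE_HA_ENGINE=redis" \
  -e "GF_LIVE_HA_ENGINE_ADDRESS=<REDIS_HOST>:6379" \
  grafana/grafana-enterprise
```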
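As a follow-up to the open file limit discussion earlier on this page: for a package-based install managed by systemd, one way to raise the descriptor limit is a unit override. This is a generic systemd sketch rather than Grafana-specific guidance, and the limit value is an arbitrary example:

```bash
# Create a systemd drop-in that raises the file descriptor limit for Grafana.
sudo mkdir -p /etc/systemd/system/grafana-server.service.d
sudo tee /etc/systemd/system/grafana-server.service.d/override.conf <<'EOF'
[Service]
LimitNOFILE=65535
EOF

# Reload systemd, restart Grafana, then confirm the new limit is in effect.
sudo systemctl daemon-reload
sudo systemctl restart grafana-server
systemctl show grafana-server -p LimitNOFILE
```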
---
aliases:
  - ../administration/configure-docker/
  - ../installation/configure-docker/
description: Guide for configuring the Grafana Docker image
keywords:
  - grafana
  - configuration
  - documentation
  - docker
  - docker compose
labels:
  products:
    - enterprise
    - oss
menuTitle: Configure a Docker image
title: Configure a Grafana Docker image
weight: 1800
---

# Configure a Grafana Docker image

This topic explains how to run Grafana on Docker in complex environments that require you to:

- Use different images
- Change logging levels
- Define secrets on the Cloud
- Configure plugins

> **Note:** The examples in this topic use the Grafana Enterprise Docker image. You can use the Grafana Open Source edition by changing the Docker image to `grafana/grafana-oss`.

## Supported Docker image variants

You can install and run Grafana using the following official Docker images.

- **Grafana Enterprise**: `grafana/grafana-enterprise`
- **Grafana Open Source**: `grafana/grafana-oss`

Each edition is available in two variants: Alpine and Ubuntu.

## Alpine image (recommended)

[Alpine Linux](https://alpinelinux.org/about/) is a Linux distribution not affiliated with any commercial entity. It is a versatile operating system that caters to users who prioritize security, efficiency, and user-friendliness. Alpine Linux is much smaller than other distribution base images, allowing for slimmer and more secure images to be created.

By default, the images are built using the widely used [Alpine Linux project](http://alpinelinux.org/) base image, which can be found in the [Alpine docker repo](https://hub.docker.com/_/alpine).

If you prioritize security and want to minimize the size of your image, it is recommended that you use the Alpine variant.

However, it's important to note that the Alpine variant uses [musl libc](http://www.musl-libc.org/) instead of [glibc and others](http://www.etalabs.net/compare_libcs.html). As a result, some software might encounter problems depending on their libc requirements. Nonetheless, most software should not experience any issues, so the Alpine variant is generally reliable.

## Ubuntu image

The Ubuntu-based Grafana Enterprise and OSS images are built using the [Ubuntu](https://ubuntu.com/) base image, which can be found in the [Ubuntu docker repo](https://hub.docker.com/_/ubuntu). An Ubuntu-based image can be a good option for users who prefer an Ubuntu-based image or require certain tools unavailable on Alpine.

- **Grafana Enterprise**: `grafana/grafana-enterprise:<version>-ubuntu`
- **Grafana Open Source**: `grafana/grafana-oss:<version>-ubuntu`

## Run a specific version of Grafana

You can also run a specific version of Grafana or a beta version based on the main branch of the [grafana/grafana GitHub repository](https://github.com/grafana/grafana).

> **Note:** If you use a Linux operating system such as Debian or Ubuntu and encounter permission errors when running Docker commands, you might need to prefix the command with `sudo` or add your user to the `docker` group. The official Docker documentation provides instructions on how to [run Docker as a non-root user](https://docs.docker.com/engine/install/linux-postinstall/).

To run a specific version of Grafana, replace `<version number>` in the following command with the version you want to run:

```bash
docker run -d -p 3000:3000 --name grafana grafana/grafana-enterprise:<version number>
```

Example:

The following command runs the Grafana Enterprise container and specifies version 9.4.7. If you want to run a different version, modify the version number section.
```bash
docker run -d -p 3000:3000 --name grafana grafana/grafana-enterprise:9.4.7
```

## Run the Grafana main branch

After every successful build of the main branch, two tags, `grafana/grafana-oss:main` and `grafana/grafana-oss:main-ubuntu`, are updated. Additionally, two new tags are created: `grafana/grafana-oss-dev:<version><build ID>-pre` and `grafana/grafana-oss-dev:<version><build ID>-pre-ubuntu`, where `version` is the next version of Grafana and `build ID` is the ID of the corresponding CI build. These tags provide access to the most recent Grafana main builds.

For more information, refer to [grafana/grafana-oss-dev](https://hub.docker.com/r/grafana/grafana-oss-dev/tags).

To ensure stability and consistency, we strongly recommend using the `grafana/grafana-oss-dev:<version><build ID>-pre` tag when running the Grafana main branch in a production environment. This tag ensures that you are using a specific version of Grafana instead of the most recent commit, which could potentially introduce bugs or issues. It also avoids polluting the tag namespace for the main Grafana images with thousands of pre-release tags.

For a list of available tags, refer to [grafana/grafana-oss](https://hub.docker.com/r/grafana/grafana-oss/tags/) and [grafana/grafana-oss-dev](https://hub.docker.com/r/grafana/grafana-oss-dev/tags/).

## Default paths

Grafana comes with default configuration parameters that remain the same among versions regardless of the operating system or the environment (for example, virtual machine, Docker, Kubernetes, etc.).

You can refer to the [Configure Grafana]() documentation to view all the default configuration settings.

The following configurations are set by default when you start the Grafana Docker container. When running in Docker, you cannot change the configurations by editing the `conf/grafana.ini` file. Instead, you can modify the configuration using [environment variables]().

| Setting               | Default value             |
| --------------------- | ------------------------- |
| GF_PATHS_CONFIG       | /etc/grafana/grafana.ini  |
| GF_PATHS_DATA         | /var/lib/grafana          |
| GF_PATHS_HOME         | /usr/share/grafana        |
| GF_PATHS_LOGS         | /var/log/grafana          |
| GF_PATHS_PLUGINS      | /var/lib/grafana/plugins  |
| GF_PATHS_PROVISIONING | /etc/grafana/provisioning |

## Install plugins in the Docker container

You can install publicly available plugins and plugins that are private or used internally in an organization. For plugin installation instructions, refer to [Install plugins in the Docker container]().

### Install plugins from other sources

To install plugins from other sources, you must define the custom URL and specify it immediately before the plugin name in the `GF_PLUGINS_PREINSTALL` environment variable: `GF_PLUGINS_PREINSTALL=<plugin ID>@[<plugin version>]@<url to plugin zip>`.

Example:

The following command runs Grafana Enterprise on **port 3000** in detached mode and installs the custom plugin, which is specified as a URL parameter in the `GF_PLUGINS_PREINSTALL` environment variable.

```bash
docker run -d -p 3000:3000 --name=grafana \
  -e "GF_PLUGINS_PREINSTALL=custom-plugin@@http://plugin-domain.com/my-custom-plugin.zip,grafana-clock-panel" \
  grafana/grafana-enterprise
```

## Build a custom Grafana Docker image

In the Grafana GitHub repository, the `packaging/docker/custom/` folder includes a `Dockerfile` that you can use to build a custom Grafana image. The `Dockerfile` accepts `GRAFANA_VERSION`, `GF_INSTALL_PLUGINS`, and `GF_INSTALL_IMAGE_RENDERER_PLUGIN` as build arguments.
The `GRAFANA_VERSION` build argument must be a valid `grafana/grafana` Docker image tag. By default, Grafana builds an Alpine-based image. To build an Ubuntu-based image, append `-ubuntu` to the `GRAFANA_VERSION` build argument.

Example:

The following example shows you how to build and run a custom Grafana Docker image based on the latest official Ubuntu-based Grafana Docker image:

```bash
# go to the custom directory
cd packaging/docker/custom

# run the docker build command to build the image
docker build \
  --build-arg "GRAFANA_VERSION=latest-ubuntu" \
  -t grafana-custom .

# run the custom grafana container using docker run command
docker run -d -p 3000:3000 --name=grafana grafana-custom
```

### Build Grafana with the Image Renderer plugin pre-installed

> **Note:** This feature is experimental.

Currently, the Grafana Image Renderer plugin requires dependencies that are not available in the Grafana Docker image (see [GitHub Issue#301](https://github.com/grafana/grafana-image-renderer/issues/301) for more details). However, you can create a customized Docker image using the `GF_INSTALL_IMAGE_RENDERER_PLUGIN` build argument as a solution. This will install the necessary dependencies for the Grafana Image Renderer plugin to run.

Example:

The following example shows how to build a customized Grafana Docker image that includes the Image Renderer plugin.

```bash
# go to the folder
cd packaging/docker/custom

# running the build command
docker build \
  --build-arg "GRAFANA_VERSION=latest" \
  --build-arg "GF_INSTALL_IMAGE_RENDERER_PLUGIN=true" \
  -t grafana-custom .

# running the docker run command
docker run -d -p 3000:3000 --name=grafana grafana-custom
```

### Build a Grafana Docker image with pre-installed plugins

If you run multiple Grafana installations with the same plugins, you can save time by building a customized image that includes plugins available on the [Grafana Plugin download page](/grafana/plugins). When you build a customized image, Grafana doesn't have to install the plugins each time it starts, making the startup process more efficient.

> **Note:** To specify the version of a plugin, you can use the `GF_INSTALL_PLUGINS` build argument and add the version number. The latest version is used if you don't specify a version number. For example, you can use `--build-arg "GF_INSTALL_PLUGINS=grafana-clock-panel 1.0.1,grafana-simple-json-datasource 1.3.5"` to specify the versions of two plugins.

Example:

The following example shows how to build and run a custom Grafana Docker image with pre-installed plugins.

```bash
# go to the custom directory
cd packaging/docker/custom

# running the build command
# include the plugins you want, e.g., clock panel etc.
docker build \
  --build-arg "GRAFANA_VERSION=latest" \
  --build-arg "GF_INSTALL_PLUGINS=grafana-clock-panel,grafana-simple-json-datasource" \
  -t grafana-custom .

# running the custom Grafana container using the docker run command
docker run -d -p 3000:3000 --name=grafana grafana-custom
```

### Build a Grafana Docker image with pre-installed plugins from other sources

You can create a Docker image containing a plugin that is exclusive to your organization, even if it is not accessible to the public. Simply use the `GF_INSTALL_PLUGINS` build argument to specify the plugin's URL and installation folder name, such as `GF_INSTALL_PLUGINS=<url to plugin zip>;<plugin install folder name>`.
The following example demonstrates creating a customized Grafana Docker image that includes a custom plugin from a URL link, the clock panel plugin, and the simple-json-datasource plugin. You can define these plugins in the build argument using the Grafana Plugin environment variable. ```bash # go to the folder cd packaging/docker/custom # running the build command docker build \ --build-arg "GRAFANA_VERSION=latest" \ --build-arg "GF_INSTALL_PLUGINS=http://plugin-domain.com/my-custom-plugin.zip;my-custom-plugin,grafana-clock-panel,grafana-simple-json-datasource" \ -t grafana-custom . # running the docker run command docker run -d -p 3000:3000 --name=grafana grafana-custom ``` ## Logging By default, Docker container logs are directed to `STDOUT`, a common practice in the Docker community. You can change this by setting a different [log mode]() such as `console`, `file`, or `syslog`. You can use one or more modes by separating them with spaces, for example, `console file`. By default, both `console` and `file` modes are enabled. Example: The following example runs Grafana using the `console file` log mode that is set in the `GF_LOG_MODE` environment variable. ```bash # Run Grafana while logging to both standard out # and /var/log/grafana/grafana.log docker run -p 3000:3000 -e "GF_LOG_MODE=console file" grafana/grafana-enterprise ``` ## Configure Grafana with Docker Secrets You can input confidential data like login credentials and secrets into Grafana using configuration files. This method works well with [Docker Secrets](https://docs.docker.com/engine/swarm/secrets/), as the secrets are automatically mapped to the `/run/secrets/` location within the container. You can apply this technique to any configuration options in `conf/grafana.ini` by setting `GF_<SectionName>_<KeyName>__FILE` to the file path that contains the secret information. For more information about Docker secret command usage, refer to [docker secret](https://docs.docker.com/engine/reference/commandline/secret/). The following example demonstrates how to set the admin password: - Admin password secret: `/run/secrets/admin_password` - Environment variable: `GF_SECURITY_ADMIN_PASSWORD__FILE=/run/secrets/admin_password` ### Configure Docker secrets credentials for AWS CloudWatch Grafana ships with built-in support for the [Amazon CloudWatch datasource](). To configure the data source, you must provide information such as the AWS ID-Key, secret access key, region, and so on. You can use Docker secrets as a way to provide this information. Example: The example below shows how to use Grafana environment variables via Docker Secrets for the AWS ID-Key, secret access key, region, and profile. The example uses the following values for the AWS Cloudwatch data source: ```bash AWS_default_ACCESS_KEY_ID=aws01us02 AWS_default_SECRET_ACCESS_KEY=topsecret9b78c6 AWS_default_REGION=us-east-1 ``` 1. Create a Docker secret for each of the values noted above. ```bash echo "aws01us02" | docker secret create aws_access_key_id - ``` ```bash echo "topsecret9b78c6" | docker secret create aws_secret_access_key - ``` ```bash echo "us-east-1" | docker secret create aws_region - ``` 1. Run the following command to determine that the secrets were created. 
```bash
$ docker secret ls
```

The output from the command should look similar to the following:

```
ID                          NAME                    DRIVER    CREATED              UPDATED
i4g62kyuy80lnti5d05oqzgwh   aws_access_key_id                 5 minutes ago        5 minutes ago
uegit5plcwodp57fxbqbnke7h   aws_secret_access_key             3 minutes ago        3 minutes ago
fxbqbnke7hplcwodp57fuegit   aws_region                        About a minute ago   About a minute ago
```

Where:

- ID = the secret's unique ID that will be used in the `docker run` command
- NAME = the logical name defined for each secret

1. Add the secrets to the command line when you run Docker.

   ```bash
   docker run -d -p 3000:3000 --name grafana \
     -e "GF_DEFAULT_INSTANCE_NAME=my-grafana" \
     -e "GF_AWS_PROFILES=default" \
     -e "GF_AWS_default_ACCESS_KEY_ID__FILE=/run/secrets/aws_access_key_id" \
     -e "GF_AWS_default_SECRET_ACCESS_KEY__FILE=/run/secrets/aws_secret_access_key" \
     -e "GF_AWS_default_REGION__FILE=/run/secrets/aws_region" \
     -v grafana-data:/var/lib/grafana \
     grafana/grafana-enterprise
   ```

You can also specify multiple profiles to `GF_AWS_PROFILES` (for example, `GF_AWS_PROFILES=default another`).

The following list includes the supported environment variables:

- `GF_AWS_${profile}_ACCESS_KEY_ID`: AWS access key ID (required).
- `GF_AWS_${profile}_SECRET_ACCESS_KEY`: AWS secret access key (required).
- `GF_AWS_${profile}_REGION`: AWS region (optional).

## Troubleshoot a Docker deployment

By default, the Grafana log level is set to `INFO`, but you can increase the log level to `DEBUG` mode when you want to reproduce a problem.

For more information about logging, refer to [logs]().

### Increase log level using the Docker run (CLI) command

To increase the log level to `DEBUG` mode, add the environment variable `GF_LOG_LEVEL` to the command line.

```bash
docker run -d -p 3000:3000 --name=grafana \
  -e "GF_LOG_LEVEL=debug" \
  grafana/grafana-enterprise
```

### Increase log level using Docker Compose

To increase the log level to `DEBUG` mode, add the environment variable `GF_LOG_LEVEL` to the `docker-compose.yaml` file.

```yaml
version: '3.8'
services:
  grafana:
    image: grafana/grafana-enterprise
    container_name: grafana
    restart: unless-stopped
    environment:
      # increases the log level from info to debug
      - GF_LOG_LEVEL=debug
    ports:
      - '3000:3000'
    volumes:
      - 'grafana_storage:/var/lib/grafana'
volumes:
  grafana_storage: {}
```

### Validate Docker Compose YAML file

The chance of syntax errors appearing in a YAML file increases as the file becomes more complex. You can use the following command to check for syntax errors.

```bash
# go to your docker-compose.yaml directory
cd /path-to/docker-compose/file

# run the validation command
docker compose config
```

If there are errors in the YAML file, the command output highlights the lines that contain errors.

If there are no errors in the YAML file, the output includes the content of the `docker-compose.yaml` file in detailed YAML format.
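After raising the log level as shown above, you can confirm that the change took effect by following the container logs and looking for debug-level lines. This is a minimal sketch, assuming the container is named `grafana` as in the earlier examples:

```bash
# Show the most recent log lines and keep following new output.
docker logs --tail 50 -f grafana
```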
--- aliases: - ../admin/metrics/ - ../administration/jaeger-instrumentation/ - ../administration/view-server/internal-metrics/ description: Jaeger traces emitted and propagation by Grafana keywords: - grafana - jaeger - tracing labels: products: - enterprise - oss title: Set up Grafana monitoring weight: 800 --- # Set up Grafana monitoring Grafana supports tracing. Grafana can emit Jaeger or OpenTelemetry Protocol (OTLP) traces for its HTTP API endpoints and propagate Jaeger and [w3c Trace Context](https://www.w3.org/TR/trace-context/) trace information to compatible data sources. All HTTP endpoints are logged evenly (annotations, dashboard, tags, and so on). When a trace ID is propagated, it is reported with operation 'HTTP /datasources/proxy/:id/\*'. Refer to [Configuration's OpenTelemetry section]() for a reference of tracing options available in Grafana. ## View Grafana internal metrics Grafana collects some metrics about itself internally. Grafana supports pushing metrics to Graphite or exposing them to be scraped by Prometheus. For more information about configuration options related to Grafana metrics, refer to [metrics]() and [metrics.graphite]() in [Configuration](). ### Available metrics When enabled, Grafana exposes a number of metrics, including: - Active Grafana instances - Number of dashboards, users, and playlists - HTTP status codes - Requests by routing group - Grafana active alerts - Grafana performance ### Pull metrics from Grafana into Prometheus These instructions assume you have already added Prometheus as a data source in Grafana. 1. Enable Prometheus to scrape metrics from Grafana. In your configuration file (`grafana.ini` or `custom.ini` depending on your operating system) remove the semicolon to enable the following configuration options: ``` # Metrics available at HTTP URL /metrics and /metrics/plugins/:pluginId [metrics] # Disable / Enable internal metrics enabled = true # Disable total stats (stat_totals_*) metrics to be generated disable_total_stats = false ``` 1. (optional) If you want to require authorization to view the metrics endpoints, then uncomment and set the following options: ``` basic_auth_username = basic_auth_password = ``` 1. Restart Grafana. Grafana now exposes metrics at http://localhost:3000/metrics. 1. Add the job to your prometheus.yml file. Example: ``` - job_name: 'grafana_metrics' scrape_interval: 15s scrape_timeout: 5s static_configs: - targets: ['localhost:3000'] ``` 1. Restart Prometheus. Your new job should appear on the Targets tab. 1. In Grafana, click **Connections** in the left-side menu. 1. Under your connections, click **Data Sources**. 1. Select the **Prometheus** data source. 1. Under the name of your data source, click **Dashboards**. 1. On the Dashboards tab, click **Import** in the _Grafana metrics_ row to import the Grafana metrics dashboard. All scraped Grafana metrics are available in the dashboard. ### View Grafana metrics in Graphite These instructions assume you have already added Graphite as a data source in Grafana. 1. Enable sending metrics to Graphite. In your configuration file (`grafana.ini` or `custom.ini` depending on your operating system) remove the semicolon to enable the following configuration options: ``` # Metrics available at HTTP API Url /metrics [metrics] # Disable / Enable internal metrics enabled = true # Disable total stats (stat_totals_*) metrics to be generated disable_total_stats = false ``` 1. 
Enable [metrics.graphite] options: ``` # Send internal metrics to Graphite [metrics.graphite] # Enable by setting the address setting (ex localhost:2003) address = <hostname or ip>:<port#> prefix = prod.grafana.%(instance_name)s. ``` 1. Restart Grafana. Grafana now exposes metrics at http://localhost:3000/metrics and sends them to the Graphite location you specified. ### Pull metrics from Grafana backend plugin into Prometheus Any installed [backend plugin](https://grafana.com/developers/plugin-tools/key-concepts/backend-plugins/) exposes a metrics endpoint through Grafana that you can configure Prometheus to scrape. These instructions assume you have already added Prometheus as a data source in Grafana. 1. Enable Prometheus to scrape backend plugin metrics from Grafana. In your configuration file (`grafana.ini` or `custom.ini` depending on your operating system) remove the semicolon to enable the following configuration options: ``` # Metrics available at HTTP URL /metrics and /metrics/plugins/:pluginId [metrics] # Disable / Enable internal metrics enabled = true # Disable total stats (stat_totals_*) metrics to be generated disable_total_stats = false ``` 1. (optional) If you want to require authorization to view the metrics endpoints, then uncomment and set the following options: ``` basic_auth_username = basic_auth_password = ``` 1. Restart Grafana. Grafana now exposes metrics at `http://localhost:3000/metrics/plugins/<plugin id>`, e.g. http://localhost:3000/metrics/plugins/grafana-github-datasource if you have the [Grafana GitHub datasource](/grafana/plugins/grafana-github-datasource/) installed. 1. Add the job to your prometheus.yml file. Example: ``` - job_name: 'grafana_github_datasource' scrape_interval: 15s scrape_timeout: 5s metrics_path: /metrics/plugins/grafana-test-datasource static_configs: - targets: ['localhost:3000'] ``` 1. Restart Prometheus. Your new job should appear on the Targets tab. 1. In Grafana, hover your mouse over the **Configuration** (gear) icon on the left sidebar and then click **Data Sources**. 1. Select the **Prometheus** data source. 1. Import a Golang application metrics dashboard - for example [Go Processes](/grafana/dashboards/6671).
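If the imported dashboard stays empty, it can help to first confirm that the plugin metrics endpoint itself responds. The following is a quick, illustrative check only; it assumes Grafana is running locally on the default port, the GitHub data source from the example above is installed, and basic auth for the metrics endpoints is not enabled:

```bash
# Print the first few metric series exposed by the backend plugin metrics endpoint
curl -s http://localhost:3000/metrics/plugins/grafana-github-datasource | head -n 20
```

If this returns metric series, the remaining work is on the Prometheus scrape configuration rather than on Grafana.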
--- aliases: - ../../enterprise/auditing/ description: Auditing keywords: - grafana - auditing - audit - logs labels: products: - cloud - enterprise title: Audit a Grafana instance weight: 800 --- # Audit a Grafana instance Auditing allows you to track important changes to your Grafana instance. By default, audit logs are logged to file but the auditing feature also supports sending logs directly to Loki. To enable sending Grafana Cloud audit logs to your Grafana Cloud Logs instance, please [file a support ticket](/profile/org/tickets/new). Note that standard ingest and retention rates apply for ingesting these audit logs. Only API requests or UI actions that trigger an API request generate an audit log. Available in [Grafana Enterprise]() and [Grafana Cloud](/docs/grafana-cloud). ## Audit logs Audit logs are JSON objects representing user actions like: - Modifications to resources such as dashboards and data sources. - A user failing to log in. ### Format Audit logs contain the following fields. The fields followed by **\*** are always available, the others depend on the type of action logged. | Field name | Type | Description | | ----------------------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | `timestamp`\* | string | The date and time the request was made, in coordinated universal time (UTC) using the [RFC3339](https://tools.ietf.org/html/rfc3339#section-5.6) format. | | `user`\* | object | Information about the user that made the request. Either one of the `UserID` or `ApiKeyID` fields will contain content if `isAnonymous=false`. | | `user.userId` | number | ID of the Grafana user that made the request. | | `user.orgId`\* | number | Current organization of the user that made the request. | | `user.orgRole` | string | Current role of the user that made the request. | | `user.name` | string | Name of the Grafana user that made the request. | | `user.authTokenId` | number | ID of the user authentication token. | | `user.apiKeyId` | number | ID of the Grafana API key used to make the request. | | `user.isAnonymous`\* | boolean | If an anonymous user made the request, `true`. Otherwise, `false`. | | `action`\* | string | The request action. For example, `create`, `update`, or `manage-permissions`. | | `request`\* | object | Information about the HTTP request. | | `request.params` | object | Request’s path parameters. | | `request.query` | object | Request’s query parameters. | | `request.body` | string | Request’s body. Filled with `<non-marshalable format>` when it isn't a valid JSON. | | `result`\* | object | Information about the HTTP response. | | `result.statusType` | string | If the request action was successful, `success`. Otherwise, `failure`. | | `result.statusCode` | number | HTTP status of the request. | | `result.failureMessage` | string | HTTP error message. | | `result.body` | string | Response body. Filled with `<non-marshalable format>` when it isn't a valid JSON. | | `resources` | array | Information about the resources that the request action affected. This field can be null for non-resource actions such as `login` or `logout`. | | `resources[x].id`\* | number | ID of the resource. 
| | `resources[x].type`\* | string | The type of the resource that was logged: `alert`, `alert-notification`, `annotation`, `api-key`, `auth-token`, `dashboard`, `datasource`, `folder`, `org`, `panel`, `playlist`, `report`, `team`, `user`, or `version`. | | `requestUri`\* | string | Request URI. | | `ipAddress`\* | string | IP address that the request was made from. | | `userAgent`\* | string | Agent through which the request was made. | | `grafanaVersion`\* | string | Current version of Grafana when this log is created. | | `additionalData` | object | Additional information that can be provided about the request. | The `additionalData` field can contain the following information: | Field name | Action | Description | | ---------- | ------ | ----------- | | `loginUsername` | `login` | Login used in the Grafana authentication form. | | `extUserInfo` | `login` | User information provided by the external system that was used to log in. | | `authTokenCount` | `login` | Number of active authentication tokens for the user that logged in. | | `terminationReason` | `logout` | The reason why the user logged out, such as a manual logout or a token expiring. | | `billing_role` | `billing-information` | The billing role associated with the billing information being sent. | ### Recorded actions The audit logs include records about the following categories of actions. Each action is distinguished by the `action` and `resources[...].type` fields in the JSON record. For example, creating an API key produces an audit log like this: ```json {hl_lines=4} { "action": "create", "resources": [ { "id": 1, "type": "api-key" } ], "timestamp": "2021-11-12T22:12:36.144795692Z", "user": { "userId": 1, "orgId": 1, "orgRole": "Admin", "username": "admin", "isAnonymous": false, "authTokenId": 1 }, "request": { "body": "{\"name\":\"example\",\"role\":\"Viewer\",\"secondsToLive\":null}" }, "result": { "statusType": "success", "statusCode": 200, "responseBody": "{\"id\":1,\"name\":\"example\"}" }, "resources": [ { "id": 1, "type": "api-key" } ], "requestUri": "/api/auth/keys", "ipAddress": "127.0.0.1:54652", "userAgent": "Mozilla/5.0 (X11; Linux x86_64; rv:94.0) Gecko/20100101 Firefox/94.0", "grafanaVersion": "8.3.0-pre" } ``` Some actions can only be distinguished by their `requestUri` fields. For those actions, the relevant pattern of the `requestUri` field is given. Note that almost all these recorded actions are actions that correspond to API requests or UI actions that trigger an API request. Therefore, the action `{"action": "email", "resources": [{"type": "report"}]}` corresponds to the action when the user requests a report's preview to be sent through email, and not the scheduled ones. #### Sessions | Action | Distinguishing fields | | -------------------------------- | ------------------------------------------------------------------------------------------ | | Log in | `{"action": "login-AUTH-MODULE"}` \* | | Log out \*\* | `{"action": "logout"}` | | Force logout for user | `{"action": "logout-user"}` | | Remove user authentication token | `{"action": "revoke-auth-token", "resources": [{"type": "auth-token"}, {"type": "user"}]}` | | Create API key | `{"action": "create", "resources": [{"type": "api-key"}]}` | | Delete API key | `{"action": "delete", "resources": [{"type": "api-key"}]}` | \* Where `AUTH-MODULE` is the name of the authentication module: `grafana`, `saml`, `ldap`, etc. \ \*\* Includes manual log out, token expired/revoked, and [SAML Single Logout](). 
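Because every record carries the `action` and `resources[...].type` fields in a predictable shape, the tables in this section translate directly into filters. As a minimal sketch, assuming you have collected file-exported audit records as one JSON object per line (the file name below is hypothetical), you could list all login events with `jq`:

```bash
# Print login events of any authentication module (see the Sessions table above);
# "audit-records.jsonl" is a placeholder for wherever your exported records are stored.
jq -c 'select(.action | startswith("login-"))' audit-records.jsonl
```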
#### Service accounts | Action | Distinguishing fields | | ---------------------------- | ----------------------------------------------------------------------------------------------------- | | Create service account | `{"action": "create", "resources": [{"type": "service-account"}]}` | | Update service account | `{"action": "update", "resources": [{"type": "service-account"}]}` | | Delete service account | `{"action": "delete", "resources": [{"type": "service-account"}]}` | | Create service account token | `{"action": "create", "resources": [{"type": "service-account"}, {"type": "service-account-token"}]}` | | Delete service account token | `{"action": "delete", "resources": [{"type": "service-account"}, {"type": "service-account-token"}]}` | | Hide API keys | `{"action": "hide-api-keys"}` | | Migrate API keys | `{"action": "migrate-api-keys"}` | | Migrate API key | `{"action": "migrate-api-keys"}, "resources": [{"type": "api-key"}]}` | #### Access control | Action | Distinguishing fields | | ---------------------------------------- | --------------------------------------------------------------------------------------------------------------------------- | | Create role | `{"action": "create", "resources": [{"type": "role"}]}` | | Update role | `{"action": "update", "resources": [{"type": "role"}]}` | | Delete role | `{"action": "delete", "resources": [{"type": "role"}]}` | | Assign built-in role | `{"action": "assign-builtin-role", "resources": [{"type": "role"}, {"type": "builtin-role"}]}` | | Remove built-in role | `{"action": "remove-builtin-role", "resources": [{"type": "role"}, {"type": "builtin-role"}]}` | | Grant team role | `{"action": "grant-team-role", "resources": [{"type": "team"}]}` | | Set team roles | `{"action": "set-team-roles", "resources": [{"type": "team"}]}` | | Revoke team role | `{"action": "revoke-team-role", "resources": [{"type": "role"}, {"type": "team"}]}` | | Grant user role | `{"action": "grant-user-role", "resources": [{"type": "role"}, {"type": "user"}]}` | | Set user roles | `{"action": "set-user-roles", "resources": [{"type": "user"}]}` | | Revoke user role | `{"action": "revoke-user-role", "resources": [{"type": "role"}, {"type": "user"}]}` | | Set user permissions on folder | `{"action": "set-user-permissions-on-folder", "resources": [{"type": "folder"}, {"type": "user"}]}` | | Set team permissions on folder | `{"action": "set-team-permissions-on-folder", "resources": [{"type": "folder"}, {"type": "team"}]}` | | Set basic role permissions on folder | `{"action": "set-basic-role-permissions-on-folder", "resources": [{"type": "folder"}, {"type": "builtin-role"}]}` | | Set user permissions on dashboard | `{"action": "set-user-permissions-on-dashboards", "resources": [{"type": "dashboard"}, {"type": "user"}]}` | | Set team permissions on dashboard | `{"action": "set-team-permissions-on-dashboards", "resources": [{"type": "dashboard"}, {"type": "team"}]}` | | Set basic role permissions on dashboard | `{"action": "set-basic-role-permissions-on-dashboards", "resources": [{"type": "dashboard"}, {"type": "builtin-role"}]}` | | Set user permissions on team | `{"action": "set-user-permissions-on-teams", "resources": [{"type": "teams"}, {"type": "user"}]}` | | Set user permissions on service account | `{"action": "set-user-permissions-on-service-accounts", "resources": [{"type": "service-account"}, {"type": "user"}]}` | | Set user permissions on datasource | `{"action": "set-user-permissions-on-data-sources", "resources": [{"type": "datasource"}, {"type": 
"user"}]}` | | Set team permissions on datasource | `{"action": "set-team-permissions-on-data-sources", "resources": [{"type": "datasource"}, {"type": "team"}]}` | | Set basic role permissions on datasource | `{"action": "set-basic-role-permissions-on-data-sources", "resources": [{"type": "datasource"}, {"type": "builtin-role"}]}` | #### User management | Action | Distinguishing fields | | ------------------------- | ------------------------------------------------------------------- | | Create user | `{"action": "create", "resources": [{"type": "user"}]}` | | Update user | `{"action": "update", "resources": [{"type": "user"}]}` | | Delete user | `{"action": "delete", "resources": [{"type": "user"}]}` | | Disable user | `{"action": "disable", "resources": [{"type": "user"}]}` | | Enable user | `{"action": "enable", "resources": [{"type": "user"}]}` | | Update password | `{"action": "update-password", "resources": [{"type": "user"}]}` | | Send password reset email | `{"action": "send-reset-email"}` | | Reset password | `{"action": "reset-password"}` | | Update permissions | `{"action": "update-permissions", "resources": [{"type": "user"}]}` | | Send signup email | `{"action": "signup-email"}` | | Click signup link | `{"action": "signup"}` | | Reload LDAP configuration | `{"action": "ldap-reload"}` | | Get user in LDAP | `{"action": "ldap-search"}` | | Sync user with LDAP | `{"action": "ldap-sync", "resources": [{"type": "user"}]` | #### Team and organization management | Action | Distinguishing fields | | ------------------------------------ | ---------------------------------------------------------------------------- | | Add team | `{"action": "create", "requestUri": "/api/teams"}` | | Update team | `{"action": "update", "requestUri": "/api/teams/TEAM-ID"}`\* | | Delete team | `{"action": "delete", "requestUri": "/api/teams/TEAM-ID"}`\* | | Add external group for team | `{"action": "create", "requestUri": "/api/teams/TEAM-ID/groups"}`\* | | Remove external group for team | `{"action": "delete", "requestUri": "/api/teams/TEAM-ID/groups/GROUP-ID"}`\* | | Add user to team | `{"action": "create", "resources": [{"type": "user"}, {"type": "team"}]}` | | Update team member permissions | `{"action": "update", "resources": [{"type": "user"}, {"type": "team"}]}` | | Remove user from team | `{"action": "delete", "resources": [{"type": "user"}, {"type": "team"}]}` | | Create organization | `{"action": "create", "resources": [{"type": "org"}]}` | | Update organization | `{"action": "update", "resources": [{"type": "org"}]}` | | Delete organization | `{"action": "delete", "resources": [{"type": "org"}]}` | | Add user to organization | `{"action": "create", "resources": [{"type": "org"}, {"type": "user"}]}` | | Change user role in organization | `{"action": "update", "resources": [{"type": "user"}, {"type": "org"}]}` | | Remove user from organization | `{"action": "delete", "resources": [{"type": "user"}, {"type": "org"}]}` | | Invite external user to organization | `{"action": "org-invite", "resources": [{"type": "org"}, {"type": "user"}]}` | | Revoke invitation | `{"action": "revoke-org-invite", "resources": [{"type": "org"}]}` | \* Where `TEAM-ID` is the ID of the affected team, and `GROUP-ID` (if present) is the ID of the external group. 
#### Folder and dashboard management | Action | Distinguishing fields | | ----------------------------- | ------------------------------------------------------------------------ | | Create folder | `{"action": "create", "resources": [{"type": "folder"}]}` | | Update folder | `{"action": "update", "resources": [{"type": "folder"}]}` | | Update folder permissions | `{"action": "manage-permissions", "resources": [{"type": "folder"}]}` | | Delete folder | `{"action": "delete", "resources": [{"type": "folder"}]}` | | Create/update dashboard | `{"action": "create-update", "resources": [{"type": "dashboard"}]}` | | Import dashboard | `{"action": "create", "resources": [{"type": "dashboard"}]}` | | Update dashboard permissions | `{"action": "manage-permissions", "resources": [{"type": "dashboard"}]}` | | Restore old dashboard version | `{"action": "restore", "resources": [{"type": "dashboard"}]}` | | Delete dashboard | `{"action": "delete", "resources": [{"type": "dashboard"}]}` | #### Library elements management | Action | Distinguishing fields | | ---------------------- | ------------------------------------------------------------------ | | Create library element | `{"action": "create", "resources": [{"type": "library-element"}]}` | | Update library element | `{"action": "update", "resources": [{"type": "library-element"}]}` | | Delete library element | `{"action": "delete", "resources": [{"type": "library-element"}]}` | #### Data sources management | Action | Distinguishing fields | | -------------------------------------------------- | ----------------------------------------------------------------------------------------- | | Create datasource | `{"action": "create", "resources": [{"type": "datasource"}]}` | | Update datasource | `{"action": "update", "resources": [{"type": "datasource"}]}` | | Delete datasource | `{"action": "delete", "resources": [{"type": "datasource"}]}` | | Enable permissions for datasource | `{"action": "enable-permissions", "resources": [{"type": "datasource"}]}` | | Disable permissions for datasource | `{"action": "disable-permissions", "resources": [{"type": "datasource"}]}` | | Grant datasource permission to role, team, or user | `{"action": "create", "resources": [{"type": "datasource"}, {"type": "dspermission"}]}`\* | | Remove datasource permission | `{"action": "delete", "resources": [{"type": "datasource"}, {"type": "dspermission"}]}` | | Enable caching for datasource | `{"action": "enable-cache", "resources": [{"type": "datasource"}]}` | | Disable caching for datasource | `{"action": "disable-cache", "resources": [{"type": "datasource"}]}` | | Update datasource caching configuration | `{"action": "update", "resources": [{"type": "datasource"}]}` | \* `resources` may also contain a third item with `"type":` set to `"user"` or `"team"`. 
#### Data source query | Action | Distinguishing fields | | ---------------- | ------------------------------------------------------------ | | Query datasource | `{"action": "query", "resources": [{"type": "datasource"}]}` | #### Reporting | Action | Distinguishing fields | | ------------------------- | -------------------------------------------------------------------------------- | | Create report | `{"action": "create", "resources": [{"type": "report"}, {"type": "dashboard"}]}` | | Update report | `{"action": "update", "resources": [{"type": "report"}, {"type": "dashboard"}]}` | | Delete report | `{"action": "delete", "resources": [{"type": "report"}]}` | | Send report by email | `{"action": "email", "resources": [{"type": "report"}]}` | | Update reporting settings | `{"action": "change-settings"}` | #### Annotations, playlists and snapshots management | Action | Distinguishing fields | | --------------------------------- | ------------------------------------------------------------------------------------ | | Create annotation | `{"action": "create", "resources": [{"type": "annotation"}]}` | | Create Graphite annotation | `{"action": "create-graphite", "resources": [{"type": "annotation"}]}` | | Update annotation | `{"action": "update", "resources": [{"type": "annotation"}]}` | | Patch annotation | `{"action": "patch", "resources": [{"type": "annotation"}]}` | | Delete annotation | `{"action": "delete", "resources": [{"type": "annotation"}]}` | | Delete all annotations from panel | `{"action": "mass-delete", "resources": [{"type": "dashboard"}, {"type": "panel"}]}` | | Create playlist | `{"action": "create", "resources": [{"type": "playlist"}]}` | | Update playlist | `{"action": "update", "resources": [{"type": "playlist"}]}` | | Delete playlist | `{"action": "delete", "resources": [{"type": "playlist"}]}` | | Create a snapshot | `{"action": "create", "resources": [{"type": "dashboard"}, {"type": "snapshot"}]}` | | Delete a snapshot | `{"action": "delete", "resources": [{"type": "snapshot"}]}` | | Delete a snapshot by delete key | `{"action": "delete", "resources": [{"type": "snapshot"}]}` | #### Provisioning | Action | Distinguishing fields | | --------------------------------- | ------------------------------------------ | | Reload provisioned dashboards | `{"action": "provisioning-dashboards"}` | | Reload provisioned datasources | `{"action": "provisioning-datasources"}` | | Reload provisioned plugins | `{"action": "provisioning-plugins"}` | | Reload provisioned alerts | `{"action": "provisioning-alerts"}` | | Reload provisioned access control | `{"action": "provisioning-accesscontrol"}` | #### Plugins management | Action | Distinguishing fields | | ---------------- | ------------------------- | | Install plugin | `{"action": "install"}` | | Uninstall plugin | `{"action": "uninstall"}` | #### Miscellaneous | Action | Distinguishing fields | | ------------------------ | ------------------------------------------------------------ | | Set licensing token | `{"action": "create", "requestUri": "/api/licensing/token"}` | | Save billing information | `{"action": "billing-information"}` | #### Cloud migration management | Action | Distinguishing fields | | -------------------------------- | ----------------------------------------------------------- | | Connect to a cloud instance | `{"action": "connect-instance"}` | | Disconnect from a cloud instance | `{"action": "disconnect-instance"}` | | Build a snapshot | `{"action": "build", "resources": [{"type": "snapshot"}]}` | | Upload a 
snapshot | `{"action": "upload", "resources": [{"type": "snapshot"}]}` | #### Generic actions In addition to the actions listed above, any HTTP request (`POST`, `PATCH`, `PUT`, and `DELETE`) against the API is recorded with one of the following generic actions. Furthermore, you can also record `GET` requests. See below how to configure it. | Action | Distinguishing fields | | -------------- | ------------------------------ | | POST request | `{"action": "post-action"}` | | PATCH request | `{"action": "partial-update"}` | | PUT request | `{"action": "update"}` | | DELETE request | `{"action": "delete"}` | | GET request | `{"action": "retrieve"}` | ## Configuration The auditing feature is disabled by default. Audit logs can be saved into files, sent to a Loki instance or sent to the Grafana default logger. By default, only the file exporter is enabled. You can choose which exporter to use in the [configuration file](). Options are `file`, `loki`, and `logger`. Use spaces to separate multiple modes, such as `file loki`. By default, when a user creates or updates a dashboard, its content will not appear in the logs as it can significantly increase the size of your logs. If this is important information for you and you can handle the amount of data generated, then you can enable this option in the configuration. ```ini [auditing] # Enable the auditing feature enabled = false # List of enabled loggers loggers = file # Keep dashboard content in the logs (request or response fields); this can significantly increase the size of your logs. log_dashboard_content = false # Keep requests and responses body; this can significantly increase the size of your logs. verbose = false # Write an audit log for every status code. # By default it only logs the following ones: 2XX, 3XX, 401, 403 and 500. log_all_status_codes = false # Maximum response body (in bytes) to be audited; 500KiB by default. # May help reducing the memory footprint caused by auditing. max_response_size_bytes = 512000 ``` Each exporter has its own configuration fields. ### File exporter Audit logs are saved into files. You can configure the folder to use to save these files. Logs are rotated when the file size is exceeded and at the start of a new day. ```ini [auditing.logs.file] # Path to logs folder path = data/log # Maximum log files to keep max_files = 5 # Max size in megabytes per log file max_file_size_mb = 256 ``` ### Loki exporter Audit logs are sent to a [Loki](/oss/loki/) service, through HTTP or gRPC. The HTTP option for the Loki exporter is available only in Grafana Enterprise version 7.4 and later. ```ini [auditing.logs.loki] # Set the communication protocol to use with Loki (can be grpc or http) type = grpc # Set the address for writing logs to Loki url = localhost:9095 # Defaults to true. If true, it establishes a secure connection to Loki tls = true # Set the tenant ID for Loki communication, which is disabled by default. # The tenant ID is required to interact with Loki running in multi-tenant mode. tenant_id = ``` If you have multiple Grafana instances sending logs to the same Loki service or if you are using Loki for non-audit logs, audit logs come with additional labels to help identifying them: - **host** - OS hostname on which the Grafana instance is running. - **grafana_instance** - Application URL. - **kind** - `auditing` When basic authentication is needed to ingest logs in your Loki instance, you can specify credentials in the URL field. 
For example:

```ini
# Set the communication protocol to use with Loki (can be grpc or http)
type = http
# Set the address for writing logs to Loki
url = user:password@localhost:3000
```

### Console exporter

Audit logs are sent to the Grafana default logger. The audit logs use the `auditing.console` logger and are logged at `debug` level. To learn how to enable debug logging, refer to the [log configuration]() section of the documentation. Accessing the audit logs in this way is not recommended for production use.
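If you enable the console exporter while testing, one way to see the records is to filter the server output for the logger name mentioned above. A sketch for a Docker deployment (the container name `grafana` is an assumption, and debug-level logging must already be enabled):

```bash
# Show only audit records emitted by the console exporter
docker logs grafana 2>&1 | grep 'auditing.console'
```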
--- aliases: - ../../enterprise/usage-insights/export-logs/ description: Export logs of usage insights keywords: - grafana - export - usage-insights - enterprise labels: products: - cloud - enterprise title: Export logs of usage insights weight: 900 --- # Export logs of usage insights Available in [Grafana Enterprise]() and [Grafana Cloud Pro and Advanced](/docs/grafana-cloud/). By exporting usage logs to Loki, you can directly query them and create dashboards of the information that matters to you most, such as dashboard errors, most active organizations, or your top-10 most-used queries. This configuration is done for you in Grafana Cloud, with provisioned dashboards. Read about them in the [Grafana Cloud documentation](/docs/grafana-cloud/usage-insights/). ## Usage insights logs Usage insights logs are JSON objects that represent certain user activities, such as: - A user opens a dashboard. - A query is sent to a data source. ### Scope A log is created every time: - A user opens a dashboard. - A query is sent to a data source in the dashboard view. - A query is performed via Explore. ### Format Logs of usage insights contain the following fields, where the fields followed by \* are always available, and the others depend on the logged event: | Field name | Type | Description | | ---------- | ---- | ----------- | | `eventName`\* | string | Type of the event, which can be either `data-request` or `dashboard-view`. | | `folderName`\* | string | Name of the dashboard folder. | | `dashboardName`\* | string | Name of the dashboard where the event happened. | | `dashboardId`\* | number | ID of the dashboard where the event happened. | | `datasourceName`| string | Name of the data source that was queried. | | `datasourceType` | string | Type of the data source that was queried. For example, `prometheus`, `elasticsearch`, or `loki`. | | `datasourceId` | number | ID of the data source that was queried. | | `panelId` | number | ID of the panel of the query. | | `panelName` | string | Name of the panel of the query. | | `error` | string | Error returned by the query. | | `duration` | number | Duration of the query. | | `source` | string | Source of the query. For example, `dashboard` or `explore`. | | `orgId`\* | number | ID of the user’s organization. | | `orgName`\* | string | Name of the user’s organization. | | `timestamp`\* | string | The date and time that the request was made, in Coordinated Universal Time (UTC) in [RFC3339](https://tools.ietf.org/html/rfc3339#section-5.6) format. | | `tokenId`\* | number | ID of the user’s authentication token. | | `username`\* | string | Name of the Grafana user that made the request. | | `userId`\* | number | ID of the Grafana user that made the request. | | `totalQueries`\* | number | Number of queries executed for the data request. | | `cachedQueries`\* | number | Number of fetched queries that came from the cache. | ## Configuration To export your logs, enable the usage insights feature and [configure]() an export location in the configuration file: ```ini [usage_insights.export] # Enable the usage insights export feature enabled = true # Storage type storage = loki ``` The options for storage type are `loki` and `logger` (added in Grafana Enterprise 8.2). If the storage type is set to `loki` you'll need to also configure Grafana to export to a Loki ingestion server. To do this, you'll need Loki installed. Refer to [Install Loki](/docs/loki/latest/installation/) for instructions on how to install Loki. 
## Configuration

To export your logs, enable the usage insights feature and [configure]() an export location in the configuration file:

```ini
[usage_insights.export]
# Enable the usage insights export feature
enabled = true
# Storage type
storage = loki
```

The options for storage type are `loki` and `logger` (added in Grafana Enterprise 8.2).

If the storage type is set to `loki`, you'll also need to configure Grafana to export to a Loki ingestion server. To do this, you'll need Loki installed. Refer to [Install Loki](/docs/loki/latest/installation/) for instructions on how to install Loki.

```ini
[usage_insights.export.storage.loki]
# Set the communication protocol to use with Loki (can be grpc or http)
type = grpc
# Set the address for writing logs to Loki (format must be host:port)
url = localhost:9095
# Defaults to true. If true, it establishes a secure connection to Loki
tls = true
# Set the tenant ID for Loki communication, which is disabled by default.
# The tenant ID is required to interact with Loki running in multi-tenant mode.
tenant_id =
```

Using `logger` will print usage insights to your [Grafana server log](). There is no option for configuring the `logger` storage type.

## Visualize Loki usage insights in Grafana

If you export logs into Loki, you can build Grafana dashboards to understand your Grafana instance usage.

1. Add Loki as a data source. Refer to the [Grafana fundamentals tutorial](/tutorials/grafana-fundamentals/#6).
1. Import one of the following dashboards:
   - [Usage insights](/grafana/dashboards/13785)
   - [Usage insights datasource details](/grafana/dashboards/13786)
1. Play with usage insights to understand them:
   - In Explore, you can use the query `{datasource="gdev-loki",kind="usage_insights"}` to retrieve all logs related to your `gdev-loki` data source.
   - In a dashboard, you can build a table panel with the query `topk(10, sum by (error) (count_over_time({kind="usage_insights", datasource="gdev-prometheus"} | json | error != "" [$__interval])))` to display the 10 most common errors your users see when using the `gdev-prometheus` data source.
   - In a dashboard, you can build a graph panel with the queries `sum by(host) (count_over_time({kind="usage_insights"} | json | eventName="data-request" | error != "" [$__interval]))` and `sum by(host) (count_over_time({kind="usage_insights"} | json | eventName="data-request" | error = "" [$__interval]))` to show the evolution of the data request count over time. Using `by (host)` gives you more detail per Grafana server if you have set up Grafana for [high availability](<>).
---
aliases:
  - /docs/grafana/latest/setup-grafana/configure-security/configure-security-hardening/
description: Security hardening enables you to apply additional security which might stop certain vulnerabilities from being exploited by a malicious attacker.
labels:
  products:
    - enterprise
    - oss
title: Configure security hardening
---

# Configure security hardening

Security hardening enables you to apply additional security, which can help stop certain vulnerabilities from being exploited by a malicious attacker.

These settings are available in the [grafana.ini configuration file](). To apply changes to the configuration file, restart the Grafana server.

## Additional security for cookies

If Grafana uses HTTPS, you can further secure the cookie that the system uses to authenticate access to the web UI. By applying additional security to the cookie, you might mitigate certain attacks that result from an attacker obtaining the cookie value.

Grafana must use HTTPS for the following configurations to work properly.

### Add a secure attribute to cookies

To provide mitigation against some MITM attacks, add the `Secure` attribute to the cookie that is used to authenticate users. This attribute forces the client to send the cookie only over a valid HTTPS connection.

Example:

```toml
# Set to true if you host Grafana behind HTTPS. The default value is false.
cookie_secure = true
```

### Add a SameSite attribute to cookies

To mitigate almost all CSRF attacks, set the _cookie_samesite_ option to `strict`. This setting prevents clients from sending the cookie in cross-site requests; the cookie is only sent by the site that created it.

Example:

```toml
# set cookie SameSite attribute. defaults to `lax`. can be set to "lax", "strict", "none" and "disabled"
cookie_samesite = strict
```

When you set the SameSite attribute to `strict`, the cookie is sent only for requests that originate from within the Grafana instance itself, such as user clicks inside Grafana. The default option, `lax`, does not enforce this behavior.

### Add a prefix to cookie names

You can further secure the cookie authentication by adding a [Cookie Prefix](https://googlechrome.github.io/samples/cookie-prefixes/). Cookies without a special prefix can be overwritten in a man-in-the-middle attack, even if the site uses HTTPS. A cookie prefix forces clients to accept the cookie only if certain criteria are met.

Add a prefix to the current cookie name with either `__Secure-` or `__Host-`, where the latter provides additional protection by only allowing the cookie to be created from the host that sent the Set-Cookie header.

Example:

```toml
# Login cookie name
login_cookie_name = __Host-grafana_session
```

## Security headers

Grafana includes a few additional headers that you can configure to help mitigate against certain attacks, such as XSS.

### Add a Content Security Policy

A content security policy (CSP) is an HTTP response header that controls how the web browser handles content, such as allowing inline scripts to execute or loading images from certain domains. The default CSP template is already configured to provide sufficient protection against some attacks. This makes it more difficult for attackers to execute arbitrary JavaScript if such a vulnerability is present.

Example:

```toml
# Enable adding the Content-Security-Policy header to your requests.
# CSP enables you to control the resources the user agent can load and helps prevent XSS attacks.
content_security_policy = true

# Set the Content Security Policy template that is used when the Content-Security-Policy header is added to your requests.
# $NONCE in the template includes a random nonce.
# $ROOT_PATH is server.root_url without the protocol.
content_security_policy_template = """script-src 'self' 'unsafe-eval' 'unsafe-inline' 'strict-dynamic' $NONCE;object-src 'none';font-src 'self';style-src 'self' 'unsafe-inline' blob:;img-src * data:;base-uri 'self';connect-src 'self' grafana.com ws://$ROOT_PATH wss://$ROOT_PATH;manifest-src 'self';media-src 'none';form-action 'self';"""
```

### Enable trusted types

**Currently in development. [Trusted types](https://github.com/w3c/trusted-types/blob/main/explainer.md) is an experimental JavaScript API with [limited browser support](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/trusted-types#browser_compatibility).**

Trusted types reduce the risk of DOM XSS by requiring developers to sanitize strings that are used in injection sinks, such as setting `innerHTML` on an element. Furthermore, when trusted types are enabled, these injection sinks need to go through a policy that will either sanitize the string, or leave it intact and return it as "safe". This provides some protection from client-side injection vulnerabilities in third-party libraries, such as jQuery, Angular, and even third-party plugins.

To enable trusted types in enforce mode, where injection sinks are automatically sanitized:

- Enable `content_security_policy` in the configuration.
- Add `require-trusted-types-for 'script'` to the `content_security_policy_template` in the configuration (see the sketch after this list).
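As a sketch only, the change could look like the following. The template is deliberately abbreviated here; append the directive to whatever CSP template you actually use, such as the default shown above.

```toml
# Hypothetical sketch: enforce trusted types by appending the directive to your existing CSP template.
content_security_policy = true
content_security_policy_template = """script-src 'self' 'strict-dynamic' $NONCE;object-src 'none';require-trusted-types-for 'script';"""
```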
To enable trusted types in report mode, where inputs that have not been sanitized with trusted types are logged to the console:

- Enable `content_security_policy_report_only` in the configuration.
- Add `require-trusted-types-for 'script'` to the `content_security_policy_report_only_template` in the configuration.

As this is a feature currently in development, things may break. If they do, or if you have any other feedback, feel free to [open an issue](https://github.com/grafana/grafana/issues/new/choose).

## Additional security hardening

The Grafana server has several built-in security features that you can opt into to enhance security. This section describes additional techniques you can use to harden security.

### Hide the version number

If set to `true`, the Grafana server hides the running version number for unauthenticated users. Version numbers might reveal if you are running an outdated and vulnerable version of Grafana.

Example:

```toml
# mask the Grafana version number for unauthenticated users
hide_version = true
```

### Enforce domain verification

If set to `true`, the Grafana server redirects requests that have a Host header value that does not match the actual domain. This might help to mitigate some DNS rebinding attacks.

Example:

```toml
# Redirect to correct domain if host header does not match domain
# Prevents DNS rebinding attacks
enforce_domain = true
```
---
title: Plan your IAM integration strategy
menuTitle: Plan your IAM integration strategy
description: Learn how to plan your identity and access management strategy before setting up Grafana.
weight: 100
keywords:
  - IdP
  - IAM
  - Auth
  - Grafana
---

# Plan your IAM integration strategy

This section describes the decisions you should make when using an Identity and Access Management (IAM) provider to manage access to Grafana. IAM ensures that users have secure access to sensitive data and [other resources](), simplifying user management and authentication.

## Benefits of integrating with an IAM provider

Integrating with an IAM provider provides the following benefits:

- **User management**: By providing Grafana access to your current user management system, you eliminate the overhead of replicating user information and instead have centralized user management for users' roles and permissions to Grafana resources.
- **Security**: Many IAM solutions provide advanced security features such as multi-factor authentication, RBAC, and audit trails, which can help to improve the security of your Grafana installation.
- **SSO**: Properly setting up Grafana with your current IAM solution enables users to access Grafana with the same credentials they use for other applications.
- **Scalability**: User additions and updates in your user database are immediately reflected in Grafana.

In order to plan an integration with Grafana, assess your organization's current needs, requirements, and any existing IAM solutions being used. This includes thinking about how roles and permissions will be mapped to users in Grafana and how users can be grouped to access shared resources.

## Internal vs external users

As a first step, determine how you want to manage users who will access Grafana.

Do you already use an identity provider to manage users? If so, Grafana might be able to integrate with your identity provider through one of our IdP integrations. Refer to [Configure authentication documentation]() for the list of supported providers.

If you are not interested in setting up an external identity provider, but still want to limit access to your Grafana instance, consider using Grafana's basic authentication.

Finally, if you want your Grafana instance to be accessible to everyone, you can enable anonymous access to Grafana. For information, refer to the [anonymous authentication documentation]().
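If you do choose anonymous access, it is controlled in `grafana.ini`. The snippet below is a minimal sketch only; the organization name and role shown are placeholder values, so check the anonymous authentication documentation for the settings that apply to your instance.

```ini
[auth.anonymous]
# Allow unauthenticated visitors to view Grafana
enabled = true
# Organization and role assigned to anonymous users (placeholder values)
org_name = Main Org.
org_role = Viewer
```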
## Ways to organize users

Organize users in subgroups that are sensible to the organization. For example:

- **Security**: Different groups of users or customers should only have access to their intended resources.
- **Simplicity**: Reduce the scope of dashboards and resources available.
- **Cost attribution**: Track and bill costs to individual customers, departments, or divisions.
- **Customization**: Each group of users could have a personalized experience like different dashboards or theme colors.

### Users in Grafana teams

You can organize users into [teams]() and assign them roles and permissions reflecting the current organization. For example, instead of assigning five users access to the same dashboard, you can create a team of those users and assign dashboard permissions to the team.

A user can belong to multiple teams and be a member or an administrator for a given team. Team members inherit permissions from the team but cannot edit the team itself. Team administrators can add members to a team and update its settings, such as the team name, team members, roles assigned, and UI preferences.

Teams are a perfect solution for working with a subset of users. Teams can share resources with other teams.

### Users in Grafana organizations

[Grafana organizations]() allow complete isolation of resources, such as dashboards and data sources. Users can be members of one or several organizations, and they can only access resources from an organization they belong to. Having multiple organizations in a single instance of Grafana lets you manage your users in one place while completely separating resources.

Organizations provide a higher measure of isolation within Grafana than teams do and can be helpful in certain scenarios. However, because organizations lack the scalability and flexibility of teams and [folders](), we do not recommend using them as the default way to group users and resources.

Note that Grafana Cloud does not support having more than one organization per instance.

### Choosing between teams and organizations

[Grafana teams]() and Grafana organizations serve similar purposes in the Grafana platform. Both are designed to help group users and manage and control access to resources.

Teams provide more flexibility, as resources can be accessible by multiple teams, and team creation and management are simple. In contrast, organizations provide more isolation than teams, as resources cannot be shared between organizations. They are more difficult to manage than teams, as you must create and update resources for each organization individually. Organizations cater to bigger companies or users with intricate access needs, necessitating complete resource segregation.

## Access to external systems

Consider the need for machine-to-machine ([M2M](https://en.wikipedia.org/wiki/Machine_to_machine)) communications. If a system needs to interact with Grafana, ensure it has proper access. Consider the following scenarios:

- **Schedule reports**: Generate reports periodically from Grafana through the reporting API and have them delivered to different communications channels like email or instant messaging, or keep them in shared storage.
- **Define alerts**: Define alert rules to be triggered when a specific condition is met. Route alert notifications to different teams according to your organization's needs.
- **Provisioning files**: Provisioning files can be used to automate the creation of dashboards, data sources, and other resources.

These are just a few examples of how Grafana can be used in M2M scenarios. The platform is highly flexible and can be used in various M2M applications, making it a powerful tool for organizations seeking insights into their systems and devices.

### Service accounts

You can use a service account to run automated workloads in Grafana, such as dashboard provisioning, configuration, or report generation. Create service accounts and service account tokens to authenticate applications, such as Terraform, with the Grafana API.

Service accounts will eventually replace [API keys](/docs/grafana/<GRAFANA_VERSION>/administration/service-accounts/migrate-api-keys/) as the primary way to authenticate applications that interact with Grafana.

A common use case for creating a service account is to perform operations on automated or triggered tasks.
You can use service accounts to:

- Schedule reports for specific dashboards to be delivered on a daily, weekly, or monthly basis
- Define alerts in your system to be used in Grafana
- Set up an external SAML authentication provider
- Interact with Grafana without signing in as a user

In [Grafana Enterprise](), you can also use service accounts in combination with [role-based access control]() to grant very specific permissions to applications that interact with Grafana.

Service accounts can only act in the organization they are created for. If you need the same task performed in multiple organizations, we recommend creating a service account in each organization.

#### Service account tokens

To authenticate with Grafana's HTTP API, a randomly generated string known as a service account token can be used as an alternative to a password.

When a service account is created, it can be linked to multiple access tokens. These service account tokens can be used in the same manner as API keys, providing a means to programmatically access the Grafana HTTP API.

You can create multiple tokens for the same service account. You might want to do this if:

- Multiple applications use the same permissions, but you want to audit or manage their actions separately.
- You need to rotate or replace a compromised token.

In Grafana's audit logs, the activity will still show up as the same service account.

Service account access tokens inherit permissions from the service account.

### API keys

Grafana recommends using service accounts instead of API keys. API keys will be deprecated in the near future. For more information, refer to [Grafana service accounts]().

You can use Grafana API keys to interact with data sources via HTTP APIs.

## How to work with roles?

Grafana roles control the access of users and service accounts to specific resources and determine their authorized actions.

You can assign roles through the user interface or APIs, establish them through Terraform, or synchronize them automatically via an external IAM provider.

### What are roles?

Within an organization, Grafana has established three primary [organization roles]() - organization administrator, editor, and viewer - which dictate the user's level of access and permissions, including the ability to edit data sources or create teams. Grafana also has an empty role that you can start with and to which you can gradually add custom permissions.

To be a member of any organization, every user must be assigned a role.

In addition, Grafana provides a server administrator role that grants access to and enables interaction with resources that affect the entire instance, including organizations, users, and server-wide settings. This role can only be held by users of self-hosted Grafana instances. It is a significant role intended for the administrators of the Grafana instance.

### What are permissions?

Each role consists of a set of [permissions]() that determine the tasks a user can perform in the system. For example, the **Admin** role includes permissions that let an administrator create and delete users.

Grafana allows for precise permission settings on both dashboards and folders, giving you the ability to control which users and teams can view, edit, and administer them. For example, you might want a certain viewer to be able to edit a dashboard. While that user can see all dashboards, you can grant them access to update only one of them.
In [Grafana Enterprise](), you can also grant granular permissions for data sources to control who can query and edit them.

Dashboard, folder, and data source permissions can be set through the UI or APIs, or provisioned through Terraform.

### Role-based access control

Available in [Grafana Enterprise]() and [Grafana Cloud](/docs/grafana-cloud/).

If you find that the basic organization and server administrator roles are too limiting, it might be beneficial to employ [role-based access control (RBAC)]().

RBAC is a flexible approach to managing user access to Grafana resources, including users, data sources, and reports. It enables easy granting, changing, and revoking of read and write access for users.

RBAC comes with pre-defined roles, such as data source writer, which allows updating, reading, or querying all data sources. You can assign these roles to users, teams, and service accounts.

In addition, RBAC lets you create custom roles and modify the permissions granted by the standard Grafana roles.

## User synchronization between Grafana and identity providers

When connecting Grafana to an identity provider, it's important to think beyond just the initial authentication setup. You should also think about the maintenance of user bases and roles. Using Grafana's team and role synchronization features ensures that updates you make to a user in your identity provider are reflected in their role assignment and team memberships in Grafana.

### Team sync

Team sync is a feature that allows you to synchronize teams or groups from your authentication provider with teams in Grafana. This means that users of specific teams or groups in LDAP, OAuth, or SAML are automatically added or removed as members of corresponding teams in Grafana.

Whenever a user logs in, Grafana checks for any changes in the teams or groups of the authentication provider and updates the user's teams in Grafana accordingly. This makes it easy to manage user permissions across multiple systems.

Available in [Grafana Enterprise]() and [Grafana Cloud Advanced](/docs/grafana-cloud/).

Team synchronization occurs only when a user logs in. However, if you are using LDAP, it is possible to enable active background synchronization. This allows for the continuous synchronization of teams.

### Role sync

Grafana can synchronize basic roles from your authentication provider by mapping attributes from the identity provider to the user role in Grafana. This means that users with specific attributes, like role, team, or group membership in LDAP, OAuth, or SAML, are automatically assigned the corresponding role in Grafana.

Whenever a user logs in, Grafana checks for any changes in the user information retrieved from the authentication provider and updates the user's role in Grafana accordingly.

### Organization sync

Organization sync is the process of binding all the users from an organization in Grafana. This delegates the role of managing users to the identity provider. This way, there's no need to manage user access from Grafana, because the identity provider is queried whenever a new user tries to log in.

With organization sync, users from identity provider groups can be assigned to corresponding Grafana organizations. This functionality is similar to role sync, but with the added benefit of specifying the organization that a user belongs to for a particular identity provider group.
Note that this feature is only available for self-hosted Grafana instances, as Grafana Cloud instances are limited to a single organization. Organization sync is currently only supported for SAML and LDAP.

You don't need to invite users through Grafana when syncing with organization sync.

Currently, only basic roles can be mapped via organization sync.
---
aliases:
  - ../../administration/database-encryption/
  - ../../enterprise/enterprise-encryption/
description: If you have a Grafana Enterprise license, you can integrate with a variety of key management system providers.
labels:
  products:
    - enterprise
    - oss
title: Configure database encryption
weight: 700
---

# Configure database encryption

Grafana's database contains secrets, which are used to query data sources, send alert notifications, and perform other functions within Grafana.

Grafana encrypts these secrets before they are written to the database, by using a symmetric-key encryption algorithm called Advanced Encryption Standard (AES). These secrets are signed using a [secret key]() that you can change when you configure a new Grafana instance.

Grafana v9.0 and newer use [envelope encryption](#envelope-encryption) by default, which adds a layer of indirection to the encryption process that introduces an [**implicit breaking change**](#implicit-breaking-change) for older versions of Grafana.

For further details about how to operate a Grafana instance with envelope encryption, see the [Operational work]() section.

In Grafana Enterprise, you can also [encrypt secrets in AES-GCM (Galois/Counter Mode)]() instead of the default AES-CFB (Cipher FeedBack mode).

## Envelope encryption

Since Grafana v9.0, you can turn envelope encryption off by adding the feature toggle `disableEnvelopeEncryption` to your [Grafana configuration]().

Instead of encrypting all secrets with a single key, Grafana uses a set of keys called data encryption keys (DEKs) to encrypt them. These data encryption keys are themselves encrypted with a single key encryption key (KEK), configured through the `secret_key` attribute in your [Grafana configuration]() or by [Encrypting your database with a key from a key management service (KMS)](#encrypting-your-database-with-a-key-from-a-key-management-service-kms).
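As a sketch, disabling envelope encryption would look roughly like the following in `grafana.ini`, assuming the standard feature-toggle mechanism; confirm the exact behavior for your Grafana version before relying on it.

```ini
[feature_toggles]
# Hypothetical sketch: opt out of envelope encryption (not generally recommended)
enable = disableEnvelopeEncryption
```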
### Implicit breaking change

Envelope encryption introduces an implicit breaking change to versions of Grafana prior to v9.0, because it changes how secrets stored in the Grafana database are encrypted. Grafana administrators can upgrade to Grafana v9.0 with no action required from the database encryption perspective, but must be extremely careful if they need to roll an upgrade back to Grafana v8.5 or earlier, because secrets created or modified after upgrading to Grafana v9.0 can't be decrypted by previous versions.

Grafana v8.5 implemented envelope encryption behind an optional feature toggle. Grafana administrators who need to downgrade to Grafana v8.5 can enable envelope encryption as a workaround by adding the feature toggle `envelopeEncryption` to the [Grafana configuration]().

## Operational work

From the database encryption perspective, Grafana administrators can:

- [**Re-encrypt secrets**](#re-encrypt-secrets): re-encrypt secrets with envelope encryption and a fresh data key.
- [**Roll back secrets**](#roll-back-secrets): decrypt secrets encrypted with envelope encryption and re-encrypt them with legacy encryption.
- [**Re-encrypt data keys**](#re-encrypt-data-keys): re-encrypt data keys with a fresh key encryption key and a KMS integration.
- [**Rotate data keys**](#rotate-data-keys): disable active data keys and stop using them for encryption in favor of a fresh one.

### Re-encrypt secrets

You can re-encrypt secrets in order to:

- Move already existing secrets' encryption forward from legacy to envelope encryption.
- Re-encrypt secrets after a [data keys rotation](#rotate-data-keys).

To re-encrypt secrets, use the [Grafana CLI]() by running the `grafana cli admin secrets-migration re-encrypt` command or the `/encryption/reencrypt-secrets` endpoint of the Grafana [Admin API](). It's safe to run more than once; running it under maintenance mode is recommended.

### Roll back secrets

You can roll back secrets encrypted with envelope encryption to legacy encryption. This might be necessary to downgrade to Grafana versions prior to v9.0 after an unsuccessful upgrade.

To roll back secrets, use the [Grafana CLI]() by running the `grafana cli admin secrets-migration rollback` command or the `/encryption/rollback-secrets` endpoint of the Grafana [Admin API](). It's safe to run more than once; running it under maintenance mode is recommended.

### Re-encrypt data keys

You can re-encrypt data keys encrypted with a specific key encryption key (KEK). This allows you to either re-encrypt existing data keys with a new KEK version or to re-encrypt them with a completely different KEK.

To re-encrypt data keys, use the [Grafana CLI]() by running the `grafana cli admin secrets-migration re-encrypt-data-keys` command or the `/encryption/reencrypt-data-keys` endpoint of the Grafana [Admin API](). It's safe to run more than once; running it under maintenance mode is recommended.

### Rotate data keys

You can rotate data keys to disable the active data key and therefore stop using it for encryption operations. For high-availability setups, you might need to wait until the data keys cache's time-to-live (TTL) expires to ensure that all rotated data keys are no longer being used for encryption operations.

New data keys for encryption operations are generated on demand.

Data key rotation does **not** implicitly re-encrypt secrets. Grafana will continue to use rotated data keys to decrypt secrets still encrypted with them. To completely stop using rotated data keys for both encryption and decryption, see [secrets re-encryption](#re-encrypt-secrets).

To rotate data keys, use the `/encryption/rotate-data-keys` endpoint of the Grafana [Admin API](). It's safe to call more than once; calling it under maintenance mode is recommended.

## Encrypting your database with a key from a key management service (KMS)

If you are using Grafana Enterprise, you can integrate with a key management service (KMS) provider, and change Grafana's cryptographic mode of operation from AES-CFB to AES-GCM.

You can choose to encrypt secrets stored in the Grafana database using a key from a KMS, which is a secure central storage location that is designed to help you to create and manage cryptographic keys and control their use across many services. When you integrate with a KMS, Grafana does not directly store your encryption key. Instead, Grafana stores KMS credentials and the identifier of the key, which Grafana uses to encrypt the database.

Grafana integrates with the following key management services:

- [AWS KMS]()
- [Azure Key Vault]()
- [Google Cloud KMS]()
- [Hashicorp Key Vault]()

## Changing your encryption mode to AES-GCM

Grafana encrypts secrets using Advanced Encryption Standard in Cipher FeedBack mode (AES-CFB). You might prefer to use AES in Galois/Counter Mode (AES-GCM) instead, to meet your company's security requirements or in order to maintain consistency with other services.

To change your encryption mode, update the `algorithm` value in the `[security.encryption]` section of your Grafana configuration file. For further details, refer to [Enterprise configuration]().
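For example, a minimal sketch of that change in `grafana.ini` might look like the following; the `aes-gcm` value is an assumption here, so verify the accepted values in the Enterprise configuration reference.

```ini
[security.encryption]
# Hypothetical sketch: switch the secrets encryption algorithm from the default AES-CFB to AES-GCM
algorithm = aes-gcm
```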
grafana setup
---
aliases:
  - ../../administration/database-encryption/
  - ../../enterprise/enterprise-encryption/
description: If you have a Grafana Enterprise license, you can integrate with a variety of key management system providers.
labels:
  products:
    - enterprise
    - oss
title: Configure database encryption
weight: 700
---

# Configure database encryption

Grafana's database contains secrets, which are used to query data sources, send alert notifications, and perform other functions within Grafana.

Grafana encrypts these secrets before they are written to the database by using a symmetric-key encryption algorithm called Advanced Encryption Standard (AES). These secrets are signed using a secret key that you can change when you configure a new Grafana instance.

Grafana v9.0 and newer use [envelope encryption](#envelope-encryption) by default, which adds a layer of indirection to the encryption process that introduces an [implicit breaking change](#implicit-breaking-change) for older versions of Grafana.

For further details about how to operate a Grafana instance with envelope encryption, see the [Operational work](#operational-work) section.

In Grafana Enterprise, you can also encrypt secrets in AES-GCM (Galois/Counter Mode) instead of the default AES-CFB (Cipher FeedBack mode).

## Envelope encryption

Since Grafana v9.0, you can turn envelope encryption off by adding the feature toggle `disableEnvelopeEncryption` to your Grafana configuration.

Instead of encrypting all secrets with a single key, Grafana uses a set of keys called data encryption keys (DEKs) to encrypt them. These data encryption keys are themselves encrypted with a single key encryption key (KEK), configured through the `secret_key` attribute in your Grafana configuration or by [encrypting your database with a key from a key management service (KMS)](#encrypting-your-database-with-a-key-from-a-key-management-service-kms).

### Implicit breaking change

Envelope encryption introduces an implicit breaking change to versions of Grafana prior to v9.0, because it changes how secrets stored in the Grafana database are encrypted. Grafana administrators can upgrade to Grafana v9.0 with no action required from the database encryption perspective, but they must be extremely careful if they need to roll an upgrade back to Grafana v8.5 or earlier, because secrets created or modified after upgrading to Grafana v9.0 can't be decrypted by previous versions.

Grafana v8.5 implemented envelope encryption behind an optional feature toggle. Grafana administrators who need to downgrade to Grafana v8.5 can enable envelope encryption as a workaround by adding the feature toggle `envelopeEncryption` to the Grafana configuration.

## Operational work

From the database encryption perspective, Grafana administrators can:

- [Re-encrypt secrets](#re-encrypt-secrets): re-encrypt secrets with envelope encryption and a fresh data key.
- [Roll back secrets](#roll-back-secrets): decrypt secrets encrypted with envelope encryption and re-encrypt them with legacy encryption.
- [Re-encrypt data keys](#re-encrypt-data-keys): re-encrypt data keys with a fresh key encryption key and a KMS integration.
- [Rotate data keys](#rotate-data-keys): disable active data keys and stop using them for encryption in favor of a fresh one.

### Re-encrypt secrets

You can re-encrypt secrets in order to:

- Move already existing secrets' encryption forward from legacy to envelope encryption.
- Re-encrypt secrets after a [data keys rotation](#rotate-data-keys).

To re-encrypt secrets, use the Grafana CLI by running the `grafana cli admin secrets-migration re-encrypt` command or the `/encryption/reencrypt-secrets` endpoint of the Grafana Admin API. It's safe to run more than once, preferably under maintenance mode.

### Roll back secrets

You can roll back secrets encrypted with envelope encryption to legacy encryption. This might be necessary to downgrade to Grafana versions prior to v9.0 after an unsuccessful upgrade.

To roll back secrets, use the Grafana CLI by running the `grafana cli admin secrets-migration rollback` command or the `/encryption/rollback-secrets` endpoint of the Grafana Admin API. It's safe to run more than once, preferably under maintenance mode.

### Re-encrypt data keys

You can re-encrypt data keys encrypted with a specific key encryption key (KEK). This allows you to either re-encrypt existing data keys with a new KEK version or to re-encrypt them with a completely different KEK.

To re-encrypt data keys, use the Grafana CLI by running the `grafana cli admin secrets-migration re-encrypt-data-keys` command or the `/encryption/reencrypt-data-keys` endpoint of the Grafana Admin API. It's safe to run more than once, preferably under maintenance mode.

### Rotate data keys

You can rotate data keys to disable the active data key and therefore stop using it for encryption operations. For high availability setups, you might need to wait until the data keys cache's time-to-live (TTL) expires to ensure that all rotated data keys are no longer being used for encryption operations.

New data keys for encryption operations are generated on demand.

Data key rotation does not implicitly re-encrypt secrets: Grafana will continue to use rotated data keys to decrypt secrets still encrypted with them. To completely stop using rotated data keys for both encryption and decryption, see [secrets re-encryption](#re-encrypt-secrets).

To rotate data keys, use the `/encryption/rotate-data-keys` endpoint of the Grafana Admin API. It's safe to call more than once, preferably under maintenance mode.

## Encrypting your database with a key from a key management service (KMS)

If you are using Grafana Enterprise, you can integrate with a key management service (KMS) provider and change Grafana's cryptographic mode of operation from AES-CFB to AES-GCM.

You can choose to encrypt secrets stored in the Grafana database using a key from a KMS, which is a secure central storage location that is designed to help you to create and manage cryptographic keys and control their use across many services. When you integrate with a KMS, Grafana does not directly store your encryption key. Instead, Grafana stores KMS credentials and the identifier of the key, which Grafana uses to encrypt the database.

Grafana integrates with the following key management services:

- AWS KMS
- Azure Key Vault
- Google Cloud KMS
- Hashicorp Key Vault

## Changing your encryption mode to AES-GCM

Grafana encrypts secrets using Advanced Encryption Standard in Cipher FeedBack mode (AES-CFB). You might prefer to use AES in Galois/Counter Mode (AES-GCM) instead, to meet your company's security requirements or to maintain consistency with other services.

To change your encryption mode, update the `algorithm` value in the `[security.encryption]` section of your Grafana configuration file. For further details, refer to Enterprise configuration.
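As a rough sketch of that change (assuming the Enterprise `[security.encryption]` section and `algorithm` option named above; verify the exact section name and supported values against the Enterprise configuration reference):

```ini
# Hypothetical sketch - confirm the section and option names against the Enterprise configuration reference.
[security.encryption]
# Assumed supported values: aes-cfb (default) and aes-gcm.
algorithm = aes-gcm
```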
---
aliases:
  - ../../../enterprise/vault/
description: Learn how to integrate Grafana with Hashicorp Vault so that you can use secrets for configuration and provisioning.
labels:
  products:
    - enterprise
    - oss
title: Integrate Grafana with Hashicorp Vault
weight: 500
---

# Integrate Grafana with Hashicorp Vault

If you manage your secrets with [Hashicorp Vault](https://www.hashicorp.com/products/vault), you can use them for [Configuration]() and [Provisioning]().

Available in [Grafana Enterprise]().

If you have Grafana [set up for high availability](), then we advise not to use dynamic secrets for provisioning files. Each Grafana instance is responsible for renewing its own leases. Your data source leases might expire when one of your Grafana servers shuts down.

## Configuration

Before using Vault, you need to activate it by providing a URL, authentication method (currently only token), and a token for your Vault service. Grafana automatically renews the service token if it is renewable and set up with a limited lifetime.

If you're using short-lived leases, then you can also configure how often Grafana should renew the lease and for how long. We recommend keeping the defaults unless you run into problems.

```ini
[keystore.vault]
# Location of the Vault server
;url =
# Vault namespace if using Vault with multi-tenancy
;namespace =
# Method for authenticating towards Vault. Vault is inactive if this option is not set
# Possible values: token
;auth_method =
# Secret token to connect to Vault when auth_method is token
;token =
# Time between checking if there are any secrets which need to be renewed.
;lease_renewal_interval = 5m
# Time until expiration for tokens which are renewed. Should have a value higher than lease_renewal_interval
;lease_renewal_expires_within = 15m
# New duration for renewed tokens. Vault may be configured to ignore this value and impose a stricter limit.
;lease_renewal_increment = 1h
```

Example for `vault server -dev`:

```ini
[keystore.vault]
url = http://127.0.0.1:8200 # HTTP should only be used for local testing
auth_method = token
token = s.sAZLyI0r7sFLMPq6MWtoOhAN # replace with your key
```

## Using the Vault expander

After you configure Vault, you must update the configuration or provisioning files in which you want to use Vault secrets. Vault configuration is an extension of configuration's [variable expansion]() and follows the `$__vault{<argument>}` syntax.

The argument to Vault consists of three parts separated by a colon:

- The first part specifies which secrets engine should be used.
- The second part specifies which secret should be accessed.
- The third part specifies which field of that secret should be used.

For example, if you place a Key/Value secret for the Grafana admin user in _secret/grafana/admin_defaults_, the syntax for accessing its _password_ field would be `$__vault{kv:secret/grafana/admin_defaults:password}`.

### Secrets engines

Vault supports many secrets engines, which represent different methods for storing or generating secrets when requested by an authorized user. Grafana supports a subset of these which are most likely to be relevant for a Grafana installation.

#### Key/Value

Grafana supports Vault's [K/V version 2](https://www.vaultproject.io/docs/secrets/kv/kv-v2) storage engine, which is used to store and retrieve arbitrary secrets as `kv`.

```ini
$__vault{kv:secret/grafana/smtp:username}
```

#### Databases

The Vault [databases secrets engines](https://www.vaultproject.io/docs/secrets/databases) are a family of secrets engines which share a similar syntax and grant the user dynamic access to a database. You can use this both for setting up Grafana's own database access and for provisioning data sources.

```ini
$__vault{database:database/creds/grafana:username}
```

### Examples

The following examples show you how to set your [configuration]() or [provisioning]() files to use Vault to retrieve configuration values.

#### Configuration

The following is a partial example for using Vault to set up a Grafana configuration file's email and database credentials. Refer to [Configuration]() for more information.

```ini
[smtp]
enabled = true
host = $__vault{kv:secret/grafana/smtp:hostname}:587
user = $__vault{kv:secret/grafana/smtp:username}
password = $__vault{kv:secret/grafana/smtp:password}

[database]
type = mysql
host = mysqlhost:3306
name = grafana
user = $__vault{database:database/creds/grafana:username}
password = $__vault{database:database/creds/grafana:password}
```

#### Provisioning

The following is a full example of a provisioning YAML file that sets up a MySQL data source using Vault's database secrets engine. Refer to [Provisioning]() for more information.

**provisioning/custom.yaml**

```yaml
apiVersion: 1

datasources:
  - name: statistics
    type: mysql
    url: localhost:3306
    database: stats
    user: $__vault{database:database/creds/ro/stats:username}
    secureJsonData:
      password: $__vault{database:database/creds/ro/stats:password}
```
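If you do tune lease renewal for short-lived database leases, a minimal sketch could look like the following. The URL, token placeholder, and durations are illustrative values only; the option names are the ones documented in the `[keystore.vault]` block above.

```ini
[keystore.vault]
url = https://vault.example.com:8200  # illustrative address
auth_method = token
token = <your-vault-token>            # placeholder
# Illustrative tuning: check for expiring leases more often and renew them one hour at a time.
lease_renewal_interval = 1m
lease_renewal_expires_within = 5m
lease_renewal_increment = 1h
```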
---
aliases:
  - ../../auth/
  - ../../auth/overview/
description: Learn about all the ways in which you can configure Grafana to authenticate users.
labels:
  products:
    - cloud
    - enterprise
    - oss
title: Configure authentication
weight: 200
---

# Configure authentication

Grafana provides many ways to authenticate users. Some authentication integrations also enable syncing user permissions and org memberships.

The following table shows all supported authentication methods and the features available for them. [Team sync]() and [active sync]() are only available in Grafana Enterprise.

| Authentication method | Multi Org Mapping | Enforce Sync | Role Mapping | Grafana Admin Mapping | Team Sync | Allowed groups | Active Sync | Skip OrgRole mapping | Auto Login | Single Logout |
| :-------------------- | :---------------- | :----------- | :----------- | :-------------------- | :-------- | :------------- | :---------- | :------------------- | :--------- | :------------ |
| [Anonymous access]()  | N/A               | N/A          | N/A          | N/A                   | N/A       | N/A            | N/A         | N/A                  | N/A        | N/A           |
| [Auth Proxy]()        | no                | yes          | yes          | no                    | yes       | no             | N/A         | no                   | N/A        | N/A           |
| [Azure AD OAuth]()    | yes               | yes          | yes          | yes                   | yes       | yes            | N/A         | yes                  | yes        | yes           |
| [Basic auth]()        | yes               | N/A          | yes          | yes                   | N/A       | N/A            | N/A         | N/A                  | N/A        | N/A           |
| [Generic OAuth]()     | yes               | yes          | yes          | yes                   | yes       | no             | N/A         | yes                  | yes        | yes           |
| [GitHub OAuth]()      | yes               | yes          | yes          | yes                   | yes       | yes            | N/A         | yes                  | yes        | yes           |
| [GitLab OAuth]()      | yes               | yes          | yes          | yes                   | yes       | yes            | N/A         | yes                  | yes        | yes           |
| [Google OAuth]()      | yes               | no           | no           | no                    | yes       | no             | N/A         | no                   | yes        | yes           |
| [Grafana.com OAuth]() | no                | no           | yes          | no                    | N/A       | N/A            | N/A         | yes                  | yes        | yes           |
| [Okta OAuth]()        | yes               | yes          | yes          | yes                   | yes       | yes            | N/A         | yes                  | yes        | yes           |
| [SAML]() (Enterprise only) | yes          | yes          | yes          | yes                   | yes       | yes            | N/A         | yes                  | yes        | yes           |
| [LDAP]()              | yes               | yes          | yes          | yes                   | yes       | yes            | yes         | no                   | N/A        | N/A           |
| [JWT Proxy]()         | no                | yes          | yes          | yes                   | no        | no             | N/A         | no                   | N/A        | N/A           |

Fields explanation:

**Multi Org Mapping:** Able to add a user and map roles to multiple organizations

**Enforce Sync:** If the information provided by the identity provider is empty, does the integration skip setting that user's field or does it enforce a default.

**Role Mapping:** Able to map a user's role in the default org

**Grafana Admin Mapping:** Able to map a user's admin role in the default org

**Team Sync:** Able to sync teams from a predefined group/team in your IdP

**Allowed Groups:** Only allow members of certain groups to log in

**Active Sync:** Add users to teams and update their profile without requiring them to log in

**Skip OrgRole Sync:** Able to modify org role for users and not sync it back to the IdP

**Auto Login:** Automatically redirects to the provider login page if the user is not logged in. For OAuth, this only works if it's the only configured provider.

**Single Logout:** Logging out from Grafana also logs you out of the provider session

## Configuring multiple identity providers

Grafana allows you to configure more than one authentication provider; however, it is not possible to configure the same type of authentication provider twice. For example, you can have [SAML]() (Enterprise only) and [Generic OAuth]() configured, but you can not have two different [Generic OAuth]() configurations.

> Note: Grafana does not support multiple identity providers resolving the same user. Ensure there are no user account overlaps between the different providers.

In scenarios where you have multiple identity providers of the same type, there are a couple of options:

- Use different Grafana instances each configured with a given identity provider.
- Check if the identity provider supports account federation. In such cases, you can configure it once and let your identity provider federate the accounts from different providers.
- If SAML is supported by the identity provider, you can configure one [Generic OAuth]() and one [SAML]() (Enterprise only).

## Using the same email address to login with different identity providers

If users want to use the same email address with multiple identity providers (for example, Grafana.com OAuth and Google OAuth), you can configure Grafana to use the email address as the unique identifier for the user. This is done by enabling the `oauth_allow_insecure_email_lookup` option, which is disabled by default. Please note that enabling this option can lower the security of your Grafana instance. If you enable this option, you should also ensure that the `Allowed organization`, `Allowed groups` and `Allowed domains` settings are configured correctly to prevent unauthorized access.

To enable this option, refer to the [Enable email lookup](#enable-email-lookup) section.

## Multi-factor authentication (MFA/2FA)

Grafana and the Grafana Cloud portal currently do not include built-in support for multi-factor authentication (MFA). We strongly recommend integrating an external identity provider (IdP) that supports MFA, such as Okta, Azure AD, or Google Workspace.

By configuring your Grafana instances to use an external IdP, you can leverage MFA to protect your accounts and resources effectively.

## Login and short-lived tokens

> The following applies when using Grafana's basic authentication, LDAP (without Auth proxy) or OAuth integration.

Grafana uses short-lived tokens as a mechanism for verifying authenticated users. These short-lived tokens are rotated on an interval specified by `token_rotation_interval_minutes` for active authenticated users.

Inactive authenticated users will remain logged in for a duration specified by `login_maximum_inactive_lifetime_duration`. This means that a user can close a Grafana window and return before `now + login_maximum_inactive_lifetime_duration` to continue their session. This is true as long as the time since last user login is less than `login_maximum_lifetime_duration`.

## Settings

Example:

```bash
[auth]

# Login cookie name
login_cookie_name = grafana_session

# The maximum lifetime (duration) an authenticated user can be inactive before being required to login at next visit. Default is 7 days (7d). This setting should be expressed as a duration, e.g. 5m (minutes), 6h (hours), 10d (days), 2w (weeks), 1M (month). The lifetime resets at each successful token rotation (token_rotation_interval_minutes).
login_maximum_inactive_lifetime_duration =

# The maximum lifetime (duration) an authenticated user can be logged in since login time before being required to login. Default is 30 days (30d). This setting should be expressed as a duration, e.g. 5m (minutes), 6h (hours), 10d (days), 2w (weeks), 1M (month).
login_maximum_lifetime_duration =

# How often should auth tokens be rotated for authenticated users when being active. The default is every 10 minutes.
token_rotation_interval_minutes = 10

# The maximum lifetime (seconds) an API key can be used. If it is set, all the API keys should have a limited lifetime that is lower than this value.
api_key_max_seconds_to_live = -1

# Enforce user lookup based on email instead of the unique ID provided by the IdP.
oauth_allow_insecure_email_lookup = false
```

## Extended authentication settings

### Enable email lookup

By default, Grafana identifies users based on the unique ID provided by the identity provider (IdP). In certain cases, however, enabling user lookups by email can be a feasible option, such as when:

- The identity provider is a single-tenant setup.
- Unique, validated, and non-editable emails are provided by the IdP.
- The infrastructure allows email-based identification without compromising security.

**Important note**: While it is possible to configure Grafana to allow email-based user lookups, we strongly recommend against this approach in most cases due to potential security risks. If you still choose to proceed, the following configuration can be applied to enable email lookup.

```bash
[auth]
oauth_allow_insecure_email_lookup = true
```

You can also enable email lookup using the API:

Available in [Grafana Enterprise]() and [Grafana Cloud]() since Grafana v10.4.

```
curl --request PUT \
  --url http://{slug}.grafana.com/api/admin/settings \
  --header 'Authorization: Bearer glsa_yourserviceaccounttoken' \
  --header 'Content-Type: application/json' \
  --data '{ "updates": { "auth": { "oauth_allow_insecure_email_lookup": "true" }}}'
```

Finally, you can also enable it using the UI by going to **Administration -> Authentication -> Auth settings**.

### Automatic OAuth login

Set to true to attempt login with a specific OAuth provider automatically, skipping the login screen. This setting is ignored if multiple auth providers are configured to use auto login. Defaults to `false`.

```bash
[auth.generic_oauth]
auto_login = true
```

### Avoid automatic login

The `disableAutoLogin=true` URL parameter allows users to bypass the automatic login feature in scenarios where incorrect configuration changes prevent normal login functionality. This feature is especially helpful when you need to access the login screen to troubleshoot and fix misconfigurations.

#### How to use

1. Add `disableAutoLogin=true` as a query parameter to your Grafana URL.
   - Example: `grafana.example.net/login?disableAutoLogin=true` or `grafana.example.net/login?disableAutoLogin`
1. This will redirect you to the standard login screen, bypassing the automatic login mechanism.
1. Fix any configuration issues and test your login setup.

This feature is available for both OAuth and SAML. Ensure that after fixing the issue, you remove the parameter or revert the configuration to re-enable the automatic login feature, if desired.

### Hide sign-out menu

Set the option detailed below to true to hide the sign-out menu link. Useful if you use an auth proxy or JWT authentication.

```bash
[auth]
disable_signout_menu = true
```

### URL redirect after signing out

URL to redirect the user to after signing out from Grafana. This can for example be used to enable signout from an OAuth provider.

Example for Generic OAuth:

```bash
[auth.generic_oauth]
signout_redirect_url =
```

### Remote logout

You can log out from other devices by removing login sessions from the bottom of your profile page. If you are a Grafana admin user, you can also do the same for any user from the Server Admin / Edit User view.

### Protected roles

Available in [Grafana Enterprise]() and [Grafana Cloud]().

By default, after you configure an authorization provider, Grafana will adopt existing users into the new authentication scheme. For example, if you have created a user with basic authentication having the login `jsmith@example.com`, then set up SAML authentication where `jsmith@example.com` is an account, the user's authentication type will be changed to SAML if they perform a SAML sign-in.

You can disable this user adoption for certain roles using the `protected_roles` property:

```bash
[auth.security]
protected_roles = server_admins org_admins
```

The value of `protected_roles` should be a list of roles to protect, separated by spaces. Valid roles are `viewers`, `editors`, `org_admins`, `server_admins`, and `all` (a superset of the other roles).
---
aliases:
  - ../../../auth/jwt/
description: Grafana JWT Authentication
labels:
  products:
    - enterprise
    - oss
menuTitle: JWT
title: Configure JWT authentication
weight: 1600
---

# Configure JWT authentication

You can configure Grafana to accept a JWT token provided in the HTTP header. The token is verified using any of the following:

- PEM-encoded key file
- JSON Web Key Set (JWKS) in a local file
- JWKS provided by the configured JWKS endpoint

This method of authentication is useful for integrating with other systems that use JWKS but can't directly integrate with Grafana, or if you want to use pass-through authentication in an app embedding Grafana.

Grafana does not currently support refresh tokens.

## Enable JWT

To use JWT authentication:

1. Enable JWT in the [main config file]().
1. Specify the header name that contains a token.

```ini
[auth.jwt]
# By default, auth.jwt is disabled.
enabled = true

# HTTP header to look into to get a JWT token.
header_name = X-JWT-Assertion
```

## Configure login claim

To identify the user, some of the claims need to be selected as login info. The subject claim called `"sub"` is mandatory and needs to identify the principal that is the subject of the JWT.

Typically, the subject claim called `"sub"` would be used as a login, but it might also be set to some application-specific claim.

```ini
# [auth.jwt]
# ...

# Specify a claim to use as a username to sign in.
username_claim = sub

# Specify a claim to use as an email to sign in.
email_claim = sub

# auto-create users if they are not already matched
# auto_sign_up = true
```

If `auto_sign_up` is enabled, then the `sub` claim is used as the "external Auth ID". The `name` claim is used as the user's full name if it is present.

Additionally, if the login username or the email claims are nested inside the JWT structure, you can specify the path to the attributes using the `username_attribute_path` and `email_attribute_path` configuration options using the JMESPath syntax.

JWT structure example:

```json
{
  "user": {
    "UID": "1234567890",
    "name": "John Doe",
    "username": "johndoe",
    "emails": ["personal@email.com", "professional@email.com"]
  }
}
```

```ini
# [auth.jwt]
# ...

# Specify a nested attribute to use as a username to sign in.
username_attribute_path = user.username # user's login is johndoe

# Specify a nested attribute to use as an email to sign in.
email_attribute_path = user.emails[1] # user's email is professional@email.com
```

## Iframe Embedding

If you want to embed Grafana in an iframe while maintaining user identity and role checks, you can use JWT authentication to authenticate the iframe.

For Grafana Cloud, or scenarios where verifying viewer identity is not required, embed [shared dashboards](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/dashboards/share-dashboards-panels/shared-dashboards/).

In this scenario, you will need to configure Grafana to accept a JWT provided in the HTTP header, and a reverse proxy should rewrite requests to the Grafana instance to include the JWT in the request's headers.

For embedding to work, you must enable `allow_embedding` in the [security section](). This setting is not available in Grafana Cloud.

In a scenario where it is not possible to rewrite the request headers, you can use URL login instead.

### URL login

`url_login` allows Grafana to search for a JWT in the URL query parameter `auth_token` and use it as the authentication token.

**Note**: You need to have JWT enabled before using this setting (see the [Enable JWT](#enable-jwt) section). Using URL login can lead to JWTs being exposed in logs and to possible session hijacking if the server is not using HTTP over TLS.

```ini
# [auth.jwt]
# ...
url_login = true # enable JWT authentication in the URL
```

An example of a URL for accessing Grafana with JWT URL authentication is:

```
http://env.grafana.local/d/RciOKLR4z/board-identifier?orgId=1&kiosk&auth_token=eyJhbxxxxxxxxxxxxx
```

A sample repository using this authentication method is available at [grafana-iframe-oauth-sample](https://github.com/grafana/grafana-iframe-oauth-sample).

## Signature verification

JSON web token integrity needs to be verified, so a cryptographic signature is used for this purpose. Every token must therefore be signed with a known cryptographic key.

You have a variety of options on how to specify where the keys are located.

### Verify token using a JSON Web Key Set loaded from https endpoint

For more information on JWKS endpoints, refer to [Auth0 docs](https://auth0.com/docs/tokens/json-web-tokens/json-web-key-sets).

```ini
# [auth.jwt]
# ...

jwk_set_url = https://your-auth-provider.example.com/.well-known/jwks.json

# Cache TTL for data loaded from http endpoint.
cache_ttl = 60m
```

> **Note**: If the JWKS endpoint includes cache control headers and the value is less than the configured `cache_ttl`, then the cache control header value is used instead. If the cache_ttl is not set, no caching is performed. `no-store` and `no-cache` cache control headers are ignored.

### Verify token using a JSON Web Key Set loaded from JSON file

Key set in the same format as in JWKS endpoint but located on disk.

```ini
jwk_set_file = /path/to/jwks.json
```

### Verify token using a single key loaded from PEM-encoded file

PEM-encoded key file in PKIX, PKCS #1, PKCS #8 or SEC 1 format.

```ini
key_file = /path/to/key.pem
```

If the JWT token's header specifies a `kid` (Key ID), then the Key ID must be set using the `key_id` configuration option.

```ini
key_id = my-key-id
```

## Validate claims

By default, only the `"exp"`, `"nbf"` and `"iat"` claims are validated.

Consider validating that other claims match your expectations by using the `expect_claims` configuration option. Token claims must match exactly the values set here.

```ini
# This can be seen as a required "subset" of a JWT Claims Set.
expect_claims = {"iss": "https://your-token-issuer", "your-custom-claim": "foo"}
```

## Roles

Grafana checks for the presence of a role using the [JMESPath](http://jmespath.org/examples.html) specified via the `role_attribute_path` configuration option. The JMESPath is applied to JWT token claims. The result after evaluation of the `role_attribute_path` JMESPath expression should be a valid Grafana role, for example, `None`, `Viewer`, `Editor` or `Admin`. The organization that the role is assigned to can be configured using the `X-Grafana-Org-Id` header.

### JMESPath examples

To ease configuration of a proper JMESPath expression, you can test/evaluate expressions with custom payloads at http://jmespath.org/.

### Role mapping

If the `role_attribute_path` property does not return a role, then the user is assigned the `Viewer` role by default. You can disable the role assignment by setting `role_attribute_strict = true`. It denies user access if no role or an invalid role is returned.

**Basic example:**

In the following example, the user will get the `Editor` role when authenticating. The value of the property `role` will be the resulting role if the role is a proper Grafana role, i.e. `None`, `Viewer`, `Editor` or `Admin`.

Payload:

```json
{
  ...
  "role": "Editor",
  ...
}
```

Config:

```bash
role_attribute_path = role
```

**Advanced example:**

In the following example, the user will get the `Admin` role when authenticating, because they have the role `admin`. If a user has the role `editor`, they will get the `Editor` role; otherwise, `Viewer`.

Payload:

```json
{
  ...
  "info": {
    ...
    "roles": [
      "engineer",
      "admin",
    ],
    ...
  },
  ...
}
```

Config:

```bash
role_attribute_path = contains(info.roles[*], 'admin') && 'Admin' || contains(info.roles[*], 'editor') && 'Editor' || 'Viewer'
```

### Grafana Admin Role

If the `role_attribute_path` property returns a `GrafanaAdmin` role, Grafana Admin is not assigned by default; instead, the `Admin` role is assigned. To allow the `Grafana Admin` role to be assigned, set `allow_assign_grafana_admin = true`.

### Skip organization role mapping

To skip the assignment of roles and permissions upon login via JWT and handle them via other mechanisms like the user interface, you can skip the organization role synchronization with the following configuration.

```ini
[auth.jwt]
# ...
skip_org_role_sync = true
```
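Putting the pieces above together, a minimal end-to-end `[auth.jwt]` sketch might look like the following. The JWKS URL, issuer, and role expression are illustrative placeholders rather than defaults; adjust them to your identity provider.

```ini
[auth.jwt]
# Enable JWT authentication and read the token from this header.
enabled = true
header_name = X-JWT-Assertion

# Map token claims to the Grafana user.
username_claim = sub
email_claim = sub
auto_sign_up = true

# Verify token signatures against your provider's JWKS endpoint (placeholder URL).
jwk_set_url = https://your-auth-provider.example.com/.well-known/jwks.json

# Only accept tokens from the expected issuer (placeholder issuer).
expect_claims = {"iss": "https://your-token-issuer"}

# Derive the Grafana role from the token claims, defaulting to Viewer.
role_attribute_path = contains(info.roles[*], 'admin') && 'Admin' || 'Viewer'
```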
--- description: Learn how to configure SAML authentication in Grafana's UI. labels: products: - cloud - enterprise menuTitle: SAML user interface title: Configure SAML authentication using the Grafana user interface weight: 600 --- # Configure SAML authentication using the Grafana user interface Available in [Grafana Enterprise]() version 10.0 and later, and [Grafana Cloud Pro and Advanced](/docs/grafana-cloud/). You can configure SAML authentication in Grafana through the user interface (UI) or the Grafana configuration file. For instructions on how to set up SAML using the Grafana configuration file, refer to [Configure SAML authentication using the configuration file](). The Grafana SAML UI provides the following advantages over configuring SAML in the Grafana configuration file: - It is accessible by Grafana Cloud users - SAML UI carries out input validation and provides useful feedback on the correctness of the configuration, making SAML setup easier - It doesn't require Grafana to be restarted after a configuration update - Access to the SAML UI only requires access to authentication settings, so it can be used by users with limited access to Grafana's configuration Any configuration changes made through the Grafana user interface (UI) will take precedence over settings specified in the Grafana configuration file or through environment variables. This means that if you modify any configuration settings in the UI, they will override any corresponding settings set via environment variables or defined in the configuration file. For more information on how Grafana determines the order of precedence for its settings, please refer to the [Settings update at runtime](). Disabling the UI does not affect any configuration settings that were previously set up through the UI. Those settings will continue to function as intended even with the UI disabled. ## Before you begin To follow this guide, you need: - Knowledge of SAML authentication. Refer to [SAML authentication in Grafana]() for an overview of Grafana's SAML integration. - Permissions `settings:read` and `settings:write` with scope `settings:auth.saml:*` that allow you to read and update SAML authentication settings. These permissions are granted by `fixed:authentication.config:writer` role. By default, this role is granted to Grafana server administrator in self-hosted instances and to Organization admins in Grafana Cloud instances. - Grafana instance running Grafana version 10.0 or later with [Grafana Enterprise]() or [Grafana Cloud Pro or Advanced](/docs/grafana-cloud/) license. It is possible to set up Grafana with SAML authentication using Azure AD. However, if an Azure AD user belongs to more than 150 groups, a Graph API endpoint is shared instead. Grafana versions 11.1 and below do not support fetching the groups from the Graph API endpoint. As a result, users with more than 150 groups will not be able to retrieve their groups. Instead, it is recommended that you use OIDC/OAuth workflows. As of Grafana 11.2, the SAML integration offers a mechanism to retrieve user groups from the Graph API. Related links: - [Azure AD SAML limitations](https://learn.microsoft.com/en-us/entra/identity-platform/id-token-claims-reference#groups-overage-claim) - [Set up SAML with Azure AD]() - [Configure a Graph API application in Azure AD]() ## Steps To Configure SAML Authentication Sign in to Grafana and navigate to **Administration > Authentication > Configure SAML**. ### 1. General Settings Section 1. Complete the **General settings** fields. 
   For assistance, consult the following table for additional guidance about certain fields:

   | Field                                 | Description                                                                                                                                                                                                             |
   | ------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
   | **Allow signup**                      | If enabled, you can create new users through the SAML login. If disabled, then only existing Grafana users can log in with SAML.                                                                                        |
   | **Auto login**                        | If enabled, Grafana will attempt to automatically log in with SAML skipping the login screen.                                                                                                                           |
   | **Single logout**                     | The SAML single logout feature enables users to log out from all applications associated with the current IdP session established using SAML SSO. For more information, refer to [SAML single logout documentation](). |
   | **Identity provider initiated login** | Enables users to log in to Grafana directly from the SAML IdP. For more information, refer to [IdP initiated login documentation]().                                                                                    |

1. Click **Next: Sign requests**.

### 2. Sign Requests Section

1. In the **Sign requests** field, specify whether you want the outgoing requests to be signed, and, if so, then:

   1. Provide a certificate and a private key that will be used by the service provider (Grafana) and the SAML IdP. Use the [PKCS #8](https://en.wikipedia.org/wiki/PKCS_8) format to issue the private key. For more information, refer to an [example on how to generate SAML credentials]().

      Alternatively, you can generate a new private key and certificate pair directly from the UI. Click on the `Generate key and certificate` button to open a form where you enter some information you want to be embedded into the new certificate.

   1. Choose which signature algorithm should be used.

      The SAML standard recommends using a digital signature for some types of messages, like authentication or logout requests, to avoid [man-in-the-middle attacks](https://en.wikipedia.org/wiki/Man-in-the-middle_attack).

1. Click **Next: Connect Grafana with Identity Provider**.

### 3. Connect Grafana with Identity Provider Section

1. Configure IdP using Grafana Metadata
   1. Copy the **Metadata URL** and provide it to your SAML IdP to establish a connection between Grafana and the IdP.
      - The metadata URL contains all the necessary information for the IdP to establish a connection with Grafana.
   1. Copy the **Assertion Consumer Service URL** and provide it to your SAML IdP.
      - The Assertion Consumer Service URL is the endpoint where the IdP sends the SAML assertion after the user has been authenticated.
   1. If you want to use the **Single Logout** feature, copy the **Single Logout Service URL** and provide it to your SAML IdP.
1. Finish configuring Grafana using IdP data
   1. Provide IdP Metadata to Grafana.
      - The metadata contains all the necessary information for Grafana to establish a connection with the IdP.
      - This can be provided as a Base64-encoded value, a path to a file, or as a URL.
1. Click **Next: User mapping**.

### 4. User Mapping Section

1. If you wish to [map user information from SAML assertions](), complete the **Assertion attributes mappings** section.

   You also need to configure the **Groups attribute** field if you want to use group synchronization. Group sync allows you to automatically map users to Grafana teams or role-based access control roles based on their SAML group membership.
To learn more about how to configure group synchronization, refer to the [Configure team sync]() and [Configure group attribute sync](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-security/configure-group-attribute-sync) documentation.

1. If you want to automatically assign users' roles based on their SAML roles, complete the **Role mapping** section.

   First, you need to configure the **Role attribute** field to specify which SAML attribute should be used to retrieve SAML role information. Then enter the SAML roles that you want to map to Grafana roles in the **Role mapping** section. If you want to map multiple SAML roles to a Grafana role, separate them by a comma and a space. For example, `Editor: editor, developer`.

   Role mapping will automatically update the user's [basic role]() based on their SAML roles every time the user logs in to Grafana. Learn more about [SAML role synchronization]().

1. If you're setting up Grafana with Azure AD using the SAML protocol and want to fetch user groups from the Graph API, complete the **Azure AD Service Account Configuration** subsection.

   1. Set up a service account in Azure AD and provide the necessary details in the **Azure AD Service Account Configuration** section.
   1. Provide the **Client ID** of your Azure AD application.
   1. Provide the **Client Secret** of your Azure AD application. The **Client Secret** will be used to request an access token from Azure AD.
   1. Provide the Azure AD request **Access Token URL**.
   1. If you don't have users with more than 150 groups, you can still force the use of the Graph API by enabling the **Force use Graph API** toggle.

1. If you have multiple organizations and want to automatically add users to organizations, complete the **Org mapping** section.

   First, you need to configure the **Org attribute** field to specify which SAML attribute should be used to retrieve SAML organization information. Now fill in the **Org mapping** field with mappings from SAML organizations to Grafana organizations. For example, `Org mapping: Engineering:2, Sales:2` will map users who belong to the `Engineering` or `Sales` organizations in SAML to the Grafana organization with ID 2.

   If you want users to have different roles in different organizations, you can additionally specify a role. For example, `Org mapping: Engineering:2:Editor` will map users who belong to the `Engineering` organization in SAML to the Grafana organization with ID 2 and assign them the Editor role.

   Organization mapping will automatically update the user's organization memberships (and roles, if they have been configured) based on their SAML organization every time the user logs in to Grafana. Learn more about [SAML organization mapping]().

1. If you want to limit access to Grafana based on the user's SAML organization membership, fill in the **Allowed organizations** field.

1. Click **Next: Test and enable**.

### 5. Test And Enable Section

1. Click **Save and enable**.
   - If there are issues with your configuration, an error message will appear. Refer back to the previous steps to correct the issues and click `Save and apply` in the top right corner once you are done.
1. If there are no configuration issues, the SAML integration status will change to `Enabled`.

   Your SAML configuration is now enabled.

1. To disable SAML integration, click `Disable` in the top right corner.
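The Sign requests step above calls for a private key in PKCS #8 format. As a reference, the following is a minimal sketch of generating a self-signed certificate and a PKCS #8 key with OpenSSL; the file names and certificate subject are placeholders, so adapt them to your environment and your IdP's requirements.

```bash
# Generate a self-signed certificate and a private key, valid for one year.
# Recent OpenSSL releases write the key in PKCS #8 format ("BEGIN PRIVATE KEY").
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout saml-key.pem -out saml-cert.pem -subj "/CN=grafana-saml"

# If an existing key is in the traditional PKCS #1 format, convert it to PKCS #8.
openssl pkcs8 -topk8 -nocrypt -in saml-key.pem -out saml-key-pkcs8.pem
```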
---
aliases:
  - ../../../auth/gitlab/
description: Grafana GitLab OAuth Guide
keywords:
  - grafana
  - configuration
  - documentation
  - oauth
labels:
  products:
    - cloud
    - enterprise
    - oss
menuTitle: GitLab OAuth
title: Configure GitLab OAuth authentication
weight: 1000
---

# Configure GitLab OAuth authentication

This topic describes how to configure GitLab OAuth authentication.

If users use the same email address in GitLab that they use with other authentication providers (such as Grafana.com), you need to do additional configuration to ensure that the users are matched correctly. Please refer to the [Using the same email address to login with different identity providers]() documentation for more information.

## Before you begin

Ensure you know how to create a GitLab OAuth application. Consult GitLab's documentation on [creating a GitLab OAuth application](https://docs.gitlab.com/ee/integration/oauth_provider.html) for more information.

### Create a GitLab OAuth Application

1. Log in to your GitLab account and go to **Profile > Preferences > Applications**.
1. Click **Add new application**.
1. Fill out the fields.
   - In the **Redirect URI** field, enter the following: `https://<YOUR-GRAFANA-URL>/login/gitlab` and check `openid`, `email`, `profile` in the **Scopes** list.
   - Leave the **Confidential** checkbox checked.
1. Click **Save application**.
1. Note your **Application ID** (this is the `Client Id`) and **Secret** (this is the `Client Secret`).

## Configure GitLab authentication client using the Grafana UI

Available in Public Preview in Grafana 10.4 behind the `ssoSettingsApi` feature toggle.

As a Grafana Admin, you can configure the GitLab OAuth client from within Grafana using the Grafana UI. To do this, navigate to the **Administration > Authentication > GitLab** page and fill in the form. If you have a current configuration in the Grafana configuration file, the form will be pre-populated with those values. Otherwise, the form will contain default values.

After you have filled in the form, click **Save** to save the configuration. If the save was successful, Grafana will apply the new configuration.

If you need to reset changes you made in the UI back to the default values, click **Reset**. After you have reset the changes, Grafana will apply the configuration from the Grafana configuration file (if there is any configuration) or the default values.

If you run Grafana in high availability mode, configuration changes may not get applied to all Grafana instances immediately. You may need to wait a few minutes for the configuration to propagate to all Grafana instances.

Refer to [configuration options]() for more information.

## Configure GitLab authentication client using the Terraform provider

Available in Public Preview in Grafana 10.4 behind the `ssoSettingsApi` feature toggle. Supported in the Terraform provider since v2.12.0.
```terraform
resource "grafana_sso_settings" "gitlab_sso_settings" {
  provider_name = "gitlab"
  oauth2_settings {
    name                  = "Gitlab"
    client_id             = "YOUR_GITLAB_APPLICATION_ID"
    client_secret         = "YOUR_GITLAB_APPLICATION_SECRET"
    allow_sign_up         = true
    auto_login            = false
    scopes                = "openid email profile"
    allowed_domains       = "mycompany.com mycompany.org"
    role_attribute_path   = "contains(groups[*], 'example-group') && 'Editor' || 'Viewer'"
    role_attribute_strict = false
    allowed_groups        = "[\"admins\", \"software engineers\", \"developers/frontend\"]"
    use_pkce              = true
    use_refresh_token     = true
  }
}
```

Go to [Terraform Registry](https://registry.terraform.io/providers/grafana/grafana/latest/docs/resources/sso_settings) for a complete reference on using the `grafana_sso_settings` resource.

## Configure GitLab authentication client using the Grafana configuration file

Ensure that you have access to the [Grafana configuration file]().

### Steps

To configure GitLab authentication with Grafana, follow these steps:

1. Create an OAuth application in GitLab.
1. Set the redirect URI to `http://<my_grafana_server_name_or_ip>:<grafana_server_port>/login/gitlab`.

   Ensure that the Redirect URI is the complete HTTP address that you use to access Grafana via your browser, but with the appended path of `/login/gitlab`.

   For the Redirect URI to be correct, it might be necessary to set the `root_url` option in the `[server]` section of the Grafana configuration file. For example, if you are serving Grafana behind a proxy.

1. Set the OAuth2 scopes to `openid`, `email` and `profile`.
1. Refer to the following table to update field values located in the `[auth.gitlab]` section of the Grafana configuration file:

   | Field                        | Description                                                                                    |
   | ---------------------------- | ---------------------------------------------------------------------------------------------- |
   | `client_id`, `client_secret` | These values must match the `Application ID` and `Secret` from your GitLab OAuth application.  |
   | `enabled`                    | Enables GitLab authentication. Set this value to `true`.                                       |

   Review the list of other GitLab [configuration options]() and complete them, as necessary.

1. Optional: [Configure a refresh token]():

   a. Set `use_refresh_token` to `true` in the `[auth.gitlab]` section of the Grafana configuration file.

1. [Configure role mapping]().
1. Optional: [Configure group synchronization]().
1. Restart Grafana.

   You should now see a GitLab login button on the login page and be able to log in or sign up with your GitLab accounts.

### Configure a refresh token

When a user logs in using an OAuth provider, Grafana verifies that the access token has not expired. When an access token expires, Grafana uses the provided refresh token (if any exists) to obtain a new access token.

Grafana uses a refresh token to obtain a new access token without requiring the user to log in again. If a refresh token doesn't exist, Grafana logs the user out of the system after the access token has expired.

By default, GitLab provides a refresh token.

Refresh token fetching and access token expiration checks are enabled by default for the GitLab provider since Grafana v10.1.0. If you would like to disable the access token expiration check, set the `use_refresh_token` configuration value to `false`.

The `accessTokenExpirationCheck` feature toggle has been removed in Grafana v10.3.0 and the `use_refresh_token` configuration value will be used instead for configuring refresh token fetching and access token expiration check.
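As an illustration of the paragraph above, here is a minimal sketch of the relevant setting in the Grafana configuration file; only the option shown is significant, and the rest of the `[auth.gitlab]` section is omitted for brevity.

```bash
[auth.gitlab]
# Setting this to false disables refresh token fetching and the
# access token expiration check for the GitLab provider.
use_refresh_token = false
```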
### Configure allowed groups

To limit access to authenticated users that are members of one or more [GitLab groups](https://docs.gitlab.com/ce/user/group/index.html), set `allowed_groups` to a comma or space-separated list of groups.

GitLab's groups are referenced by the group name. For example, `developers`. To reference a subgroup `frontend`, use `developers/frontend`.

Note that in GitLab, the group or subgroup name does not always match its display name, especially if the display name contains spaces or special characters. Make sure you always use the group or subgroup name as it appears in the URL of the group or subgroup.

### Configure role mapping

Unless the `skip_org_role_sync` option is enabled, the user's role will be set to the role retrieved from GitLab upon user login.

The user's role is retrieved using a [JMESPath](http://jmespath.org/examples.html) expression from the `role_attribute_path` configuration option. To map the server administrator role, use the `allow_assign_grafana_admin` configuration option. Refer to [configuration options]() for more information.

You can use the `org_mapping` configuration option to assign the user to multiple organizations and specify their role based on their GitLab group membership. For more information, refer to [Org roles mapping example](#org-roles-mapping-example). If the org role mapping (`org_mapping`) is specified and GitLab returns a valid role, then the user will get the highest of the two roles.

If no valid role is found, the user is assigned the role specified by [the `auto_assign_org_role` option](). You can disable this default role assignment by setting `role_attribute_strict = true`. This setting denies user access if no role or an invalid role is returned after evaluating the `role_attribute_path` and the `org_mapping` expressions.

To ease configuration of a proper JMESPath expression, go to [JMESPath](http://jmespath.org/) to test and evaluate expressions with custom payloads.

### Role mapping examples

This section includes examples of JMESPath expressions used for role mapping.

#### Org roles mapping example

The GitLab integration uses the external users' groups in the `org_mapping` configuration to map organizations and roles based on their GitLab group membership.

In this example, the user has been granted the role of a `Viewer` in the `org_foo` organization, and the role of an `Editor` in the `org_bar` and `org_baz` orgs.

The external user is part of the following GitLab groups: `groupd-1` and `group-2`.

Config:

```ini
org_mapping = group-1:org_foo:Viewer groupd-1:org_bar:Editor *:org_baz:Editor
```

#### Map roles using user information from OAuth token

In this example, the user with email `admin@company.com` has been granted the `Admin` role. All other users are granted the `Viewer` role.

```ini
role_attribute_path = email=='admin@company.com' && 'Admin' || 'Viewer'
```

#### Map roles using groups

In this example, users from the GitLab group 'example-group' have been granted the `Editor` role. All other users are granted the `Viewer` role.

```ini
role_attribute_path = contains(groups[*], 'example-group') && 'Editor' || 'Viewer'
```

#### Map server administrator role

In this example, the user with email `admin@company.com` has been granted the `Admin` organization role as well as the Grafana server admin role. All other users are granted the `Viewer` role.
```ini
role_attribute_path = email=='admin@company.com' && 'GrafanaAdmin' || 'Viewer'
```

#### Map one role to all users

In this example, all users will be assigned the `Viewer` role regardless of the user information received from the identity provider.

```ini
role_attribute_path = "'Viewer'"
skip_org_role_sync = false
```

### Example of GitLab configuration in Grafana

This section includes an example of GitLab configuration in the Grafana configuration file.

```bash
[auth.gitlab]
enabled = true
allow_sign_up = true
auto_login = false
client_id = YOUR_GITLAB_APPLICATION_ID
client_secret = YOUR_GITLAB_APPLICATION_SECRET
scopes = openid email profile
auth_url = https://gitlab.com/oauth/authorize
token_url = https://gitlab.com/oauth/token
api_url = https://gitlab.com/api/v4
role_attribute_path = contains(groups[*], 'example-group') && 'Editor' || 'Viewer'
role_attribute_strict = false
allow_assign_grafana_admin = false
allowed_groups = ["admins", "software engineers", "developers/frontend"]
allowed_domains = mycompany.com mycompany.org
tls_skip_verify_insecure = false
use_pkce = true
use_refresh_token = true
```

## Configure group synchronization

Available in [Grafana Enterprise](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/introduction/grafana-enterprise) and [Grafana Cloud](/docs/grafana-cloud/).

Grafana supports synchronization of GitLab groups with Grafana teams and roles. This allows automatically assigning users to the appropriate teams or granting them the mapped roles. Teams and roles get synchronized when the user logs in.

GitLab groups are referenced by the group name. For example, `developers`. To reference a subgroup `frontend`, use `developers/frontend`.

Note that in GitLab, the group or subgroup name does not always match its display name, especially if the display name contains spaces or special characters. Make sure you always use the group or subgroup name as it appears in the URL of the group or subgroup.

To learn more about group synchronization, refer to [Configure team sync](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-security/configure-team-sync) and [Configure group attribute sync](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-security/configure-group-attribute-sync).

## Configuration options

The table below describes all GitLab OAuth configuration options. You can apply these options as environment variables, similar to any other configuration within Grafana. For more information, refer to [Override configuration with environment variables]().

If the configuration option requires a JMESPath expression that includes a colon, enclose the entire expression in quotes to prevent parsing errors.
For example `role_attribute_path: "role:view"` | Setting | Required | Supported on Cloud | Description | Default | | ---------------------------- | -------- | ------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------ | | `enabled` | Yes | Yes | Whether GitLab OAuth authentication is allowed. | `false` | | `client_id` | Yes | Yes | Client ID provided by your GitLab OAuth app. | | | `client_secret` | Yes | Yes | Client secret provided by your GitLab OAuth app. | | | `auth_url` | Yes | Yes | Authorization endpoint of your GitLab OAuth provider. If you use your own instance of GitLab instead of gitlab.com, adjust `auth_url` by replacing the `gitlab.com` hostname with your own. | `https://gitlab.com/oauth/authorize` | | `token_url` | Yes | Yes | Endpoint used to obtain GitLab OAuth access token. If you use your own instance of GitLab instead of gitlab.com, adjust `token_url` by replacing the `gitlab.com` hostname with your own. | `https://gitlab.com/oauth/token` | | `api_url` | No | Yes | Grafana uses `<api_url>/user` endpoint to obtain GitLab user information compatible with [OpenID UserInfo](https://connect2id.com/products/server/docs/api/userinfo). | `https://gitlab.com/api/v4` | | `name` | No | Yes | Name used to refer to the GitLab authentication in the Grafana user interface. | `GitLab` | | `icon` | No | Yes | Icon used for GitLab authentication in the Grafana user interface. | `gitlab` | | `scopes` | No | Yes | List of comma or space-separated GitLab OAuth scopes. | `openid email profile` | | `allow_sign_up` | No | Yes | Whether to allow new Grafana user creation through GitLab login. If set to `false`, then only existing Grafana users can log in with GitLab OAuth. | `true` | | `auto_login` | No | Yes | Set to `true` to enable users to bypass the login screen and automatically log in. This setting is ignored if you configure multiple auth providers to use auto-login. | `false` | | `role_attribute_path` | No | Yes | [JMESPath](http://jmespath.org/examples.html) expression to use for Grafana role lookup. Grafana will first evaluate the expression using the GitLab OAuth token. If no role is found, Grafana creates a JSON data with `groups` key that maps to groups obtained from GitLab's `/oauth/userinfo` endpoint, and evaluates the expression using this data. Finally, if a valid role is still not found, the expression is evaluated against the user information retrieved from `api_url/users` endpoint and groups retrieved from `api_url/groups` endpoint. The result of the evaluation should be a valid Grafana role (`None`, `Viewer`, `Editor`, `Admin` or `GrafanaAdmin`). For more information on user role mapping, refer to [Configure role mapping](). 
| | | `role_attribute_strict` | No | Yes | Set to `true` to deny user login if the Grafana role cannot be extracted using `role_attribute_path`. For more information on user role mapping, refer to [Configure role mapping](). | `false` | | `org_mapping` | No | No | List of comma- or space-separated `<ExternalGitlabGroupName>:<OrgIdOrName>:<Role>` mappings. Value can be `*` meaning "All users". Role is optional and can have the following values: `None`, `Viewer`, `Editor` or `Admin`. For more information on external organization to role mapping, refer to [Org roles mapping example](#org-roles-mapping-example). | | | `skip_org_role_sync` | No | Yes | Set to `true` to stop automatically syncing user roles. | `false` | | `allow_assign_grafana_admin` | No | No | Set to `true` to enable automatic sync of the Grafana server administrator role. If this option is set to `true` and the result of evaluating `role_attribute_path` for a user is `GrafanaAdmin`, Grafana grants the user the server administrator privileges and organization administrator role. If this option is set to `false` and the result of evaluating `role_attribute_path` for a user is `GrafanaAdmin`, Grafana grants the user only organization administrator role. For more information on user role mapping, refer to [Configure role mapping](). | `false` | | `allowed_domains` | No | Yes | List of comma or space-separated domains. User must belong to at least one domain to log in. | | | `allowed_groups` | No | Yes | List of comma or space-separated groups. The user should be a member of at least one group to log in. | | | `tls_skip_verify_insecure` | No | No | If set to `true`, the client accepts any certificate presented by the server and any host name in that certificate. _You should only use this for testing_, because this mode leaves SSL/TLS susceptible to man-in-the-middle attacks. | `false` | | `tls_client_cert` | No | No | The path to the certificate. | | | `tls_client_key` | No | No | The path to the key. | | | `tls_client_ca` | No | No | The path to the trusted certificate authority list. | | | `use_pkce` | No | Yes | Set to `true` to use [Proof Key for Code Exchange (PKCE)](https://datatracker.ietf.org/doc/html/rfc7636). Grafana uses the SHA256 based `S256` challenge method and a 128 bytes (base64url encoded) code verifier. | `true` | | `use_refresh_token` | No | Yes | Set to `true` to use refresh token and check access token expiration. The `accessTokenExpirationCheck` feature toggle should also be enabled to use refresh token. | `true` | | `signout_redirect_url` | No | Yes | URL to redirect to after the user logs out. | |
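As mentioned above, each of these options can also be supplied as an environment variable. A brief sketch, assuming Grafana's standard `GF_<SECTION>_<KEY>` naming convention (the values are placeholders):

```bash
# Each option in [auth.gitlab] maps to GF_AUTH_GITLAB_<OPTION_IN_UPPER_CASE>.
export GF_AUTH_GITLAB_ENABLED=true
export GF_AUTH_GITLAB_CLIENT_ID=YOUR_GITLAB_APPLICATION_ID
export GF_AUTH_GITLAB_CLIENT_SECRET=YOUR_GITLAB_APPLICATION_SECRET
export GF_AUTH_GITLAB_SCOPES="openid email profile"
export GF_AUTH_GITLAB_ALLOWED_DOMAINS="mycompany.com mycompany.org"
```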
---
aliases:
  - ../../../auth/auth-proxy/
  - ../../../tutorials/authproxy/
description: Grafana Auth Proxy Guide
keywords:
  - grafana
  - configuration
  - documentation
  - proxy
labels:
  products:
    - cloud
    - enterprise
    - oss
menuTitle: Auth proxy
title: Configure auth proxy authentication
weight: 1500
---

# Configure auth proxy authentication

You can configure Grafana to let an HTTP reverse proxy handle authentication. Popular web servers have a very extensive list of pluggable authentication modules, and any of them can be used with the AuthProxy feature. Below we detail the configuration options for auth proxy.

```bash
[auth.proxy]
# Defaults to false, but set to true to enable this feature
enabled = true
# HTTP Header name that will contain the username or email
header_name = X-WEBAUTH-USER
# HTTP Header property, defaults to `username` but can also be `email`
header_property = username
# Set to `true` to enable auto sign up of users who do not exist in Grafana DB. Defaults to `true`.
auto_sign_up = true
# Define cache time to live in minutes
# If combined with Grafana LDAP integration it is also the sync interval
# Set to 0 to always fetch and sync the latest user data
sync_ttl = 15
# Limit where auth proxy requests come from by configuring a list of IP addresses.
# This can be used to prevent users spoofing the X-WEBAUTH-USER header.
# Example `whitelist = 192.168.1.1, 192.168.1.0/24, 2001::23, 2001::0/120`
whitelist =
# Optionally define more headers to sync other user attributes
# Example `headers = Name:X-WEBAUTH-NAME Role:X-WEBAUTH-ROLE Email:X-WEBAUTH-EMAIL Groups:X-WEBAUTH-GROUPS`
headers =
# Non-ASCII strings in header values are encoded using quoted-printable encoding
;headers_encoded = false
# Check out docs on this for more details on the below setting
enable_login_token = false
```

## Interacting with Grafana's AuthProxy via curl

```bash
curl -H "X-WEBAUTH-USER: admin" http://localhost:3000/api/users
[
  {
    "id":1,
    "name":"",
    "login":"admin",
    "email":"admin@localhost",
    "isAdmin":true
  }
]
```

We can then send a second request to the `/api/user` method, which will return the details of the logged in user. We will use this request to show how Grafana automatically adds the new user we specify to the system. Here we create a new user called "anthony".

```bash
curl -H "X-WEBAUTH-USER: anthony" http://localhost:3000/api/user
{
  "email":"anthony",
  "name":"",
  "login":"anthony",
  "theme":"",
  "orgId":1,
  "isGrafanaAdmin":false
}
```

## Making Apache's auth work together with Grafana's AuthProxy

I'll demonstrate how to use Apache for authenticating users. In this example we use BasicAuth with Apache's text file based authentication handler, i.e. htpasswd files. However, any available Apache authentication capabilities could be used.

### Apache BasicAuth

In this example we use Apache as a reverse proxy in front of Grafana. Apache handles the authentication of users before forwarding requests to the Grafana backend service.
#### Apache configuration

```bash
<VirtualHost *:80>
  ServerAdmin webmaster@authproxy
  ServerName authproxy
  ErrorLog "logs/authproxy-error_log"
  CustomLog "logs/authproxy-access_log" common

  <Proxy *>
    AuthType Basic
    AuthName GrafanaAuthProxy
    AuthBasicProvider file
    AuthUserFile /etc/apache2/grafana_htpasswd
    Require valid-user

    RewriteEngine On
    RewriteRule .* - [E=PROXY_USER:%{LA-U:REMOTE_USER},NS]
    RequestHeader set X-WEBAUTH-USER "%{PROXY_USER}e"
  </Proxy>

  RequestHeader unset Authorization

  ProxyRequests Off
  ProxyPass / http://localhost:3000/
  ProxyPassReverse / http://localhost:3000/
</VirtualHost>
```

- The first four lines of the virtualhost configuration are standard, so we won't go into detail on what they do.
- We use a **\<proxy>** configuration block for applying our authentication rules to every proxied request. These rules include requiring basic authentication where user:password credentials are stored in the **/etc/apache2/grafana_htpasswd** file. This file can be created with the `htpasswd` command.
- The next part of the configuration is the tricky part. We use Apache's rewrite engine to create our **X-WEBAUTH-USER header**, populated with the authenticated user.
- **RewriteRule .\* - [E=PROXY_USER:%{LA-U:REMOTE_USER}, NS]**: This line is a little bit of magic. What it does is, for every request, use the rewrite engine's look-ahead (LA-U) feature to determine what the REMOTE_USER variable would be set to after processing the request, and then assign the result to the variable PROXY_USER. This is necessary as the REMOTE_USER variable is not available to the RequestHeader function.
- **RequestHeader set X-WEBAUTH-USER "%{PROXY_USER}e"**: With the authenticated username now stored in the PROXY_USER variable, we create a new HTTP request header that will be sent to our backend Grafana containing the username.
- The **RequestHeader unset Authorization** removes the Authorization header from the HTTP request before it is forwarded to Grafana. This ensures that Grafana does not try to authenticate the user using these credentials (BasicAuth is a supported authentication handler in Grafana).
- The last 3 lines are then just standard reverse proxy configuration to direct all authenticated requests to our Grafana server running on port 3000.

## Full walkthrough using Docker

For this example, we use the official Grafana Docker image available at [Docker Hub](https://hub.docker.com/r/grafana/grafana/).

- Create a file `grafana.ini` with the following contents:

```bash
[users]
allow_sign_up = false
auto_assign_org = true
auto_assign_org_role = Editor

[auth.proxy]
enabled = true
header_name = X-WEBAUTH-USER
header_property = username
auto_sign_up = true
```

Launch the Grafana container, using our custom grafana.ini to replace `/etc/grafana/grafana.ini`. We don't expose any ports for this container as it will only be connected to by our Apache container.
```bash
docker run -i -v $(pwd)/grafana.ini:/etc/grafana/grafana.ini --name grafana grafana/grafana
```

### Apache Container

For this example we use the official Apache docker image available at [Docker Hub](https://hub.docker.com/_/httpd/).

- Create a file `httpd.conf` with the following contents:

```bash
ServerRoot "/usr/local/apache2"
Listen 80
LoadModule mpm_event_module modules/mod_mpm_event.so
LoadModule authn_file_module modules/mod_authn_file.so
LoadModule authn_core_module modules/mod_authn_core.so
LoadModule authz_host_module modules/mod_authz_host.so
LoadModule authz_user_module modules/mod_authz_user.so
LoadModule authz_core_module modules/mod_authz_core.so
LoadModule auth_basic_module modules/mod_auth_basic.so
LoadModule log_config_module modules/mod_log_config.so
LoadModule env_module modules/mod_env.so
LoadModule headers_module modules/mod_headers.so
LoadModule unixd_module modules/mod_unixd.so
LoadModule rewrite_module modules/mod_rewrite.so
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so

<IfModule unixd_module>
User daemon
Group daemon
</IfModule>

ServerAdmin you@example.com

<Directory />
    AllowOverride none
    Require all denied
</Directory>

DocumentRoot "/usr/local/apache2/htdocs"

ErrorLog /proc/self/fd/2
LogLevel error

<IfModule log_config_module>
    LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
    LogFormat "%h %l %u %t \"%r\" %>s %b" common
    <IfModule logio_module>
      LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
    </IfModule>
    CustomLog /proc/self/fd/1 common
</IfModule>

<Proxy *>
    AuthType Basic
    AuthName GrafanaAuthProxy
    AuthBasicProvider file
    AuthUserFile /tmp/htpasswd
    Require valid-user

    RewriteEngine On
    RewriteRule .* - [E=PROXY_USER:%{LA-U:REMOTE_USER},NS]
    RequestHeader set X-WEBAUTH-USER "%{PROXY_USER}e"
</Proxy>

RequestHeader unset Authorization

ProxyRequests Off
ProxyPass / http://grafana:3000/
ProxyPassReverse / http://grafana:3000/
```

- Create a htpasswd file. We create a new user **anthony** with the password **password**:

```bash
htpasswd -bc htpasswd anthony password
```

- Launch the httpd container using our custom httpd.conf and our htpasswd file. The container will listen on port 80, and we create a link to the **grafana** container so that this container can resolve the hostname **grafana** to the Grafana container's IP address.

```bash
docker run -i -p 80:80 --link grafana:grafana -v $(pwd)/httpd.conf:/usr/local/apache2/conf/httpd.conf -v $(pwd)/htpasswd:/tmp/htpasswd httpd:2.4
```

### Use Grafana

With our Grafana and Apache containers running, you can now connect to http://localhost/ and log in using the username/password we created in the htpasswd file.

If the user is deleted from Grafana, the user will not be able to log in and resync until after the `sync_ttl` has expired.

### Team Sync (Enterprise only)

> Only available in Grafana Enterprise v6.3+

With Team Sync, it's possible to set up synchronization between teams in your authentication provider and Grafana. You can send Grafana values as part of an HTTP header and have Grafana map them to your team structure. This allows you to put users into specific teams automatically.

To support the feature, auth proxy allows optional headers to map additional user attributes. The specific attribute to support team sync is `Groups`.
```bash
# Optionally define more headers to sync other user attributes
headers = "Groups:X-WEBAUTH-GROUPS"
```

You use the `X-WEBAUTH-GROUPS` header to send the team information for each user. Specifically, the set of Grafana's group IDs that the user belongs to.

First, we need to set up the mapping between your authentication provider and Grafana. Follow [these instructions]() to add groups to a team within Grafana. Once that's done, you can verify your mappings by querying the API.

```bash
# First, inspect your teams and obtain the corresponding ID of the team we want to inspect the groups for.
curl -H "X-WEBAUTH-USER: admin" -H "X-WEBAUTH-GROUPS: lokiTeamOnExternalSystem" http://localhost:3000/api/teams/search

{
  "totalCount": 2,
  "teams": [
    {
      "id": 1,
      "orgId": 1,
      "name": "Core",
      "email": "[email protected]",
      "avatarUrl": "/avatar/327a5353552d2dc3966e2e646908f540",
      "memberCount": 1,
      "permission": 0
    },
    {
      "id": 2,
      "orgId": 1,
      "name": "Loki",
      "email": "[email protected]",
      "avatarUrl": "/avatar/102f937d5344d33fdb37b65d430f36ef",
      "memberCount": 0,
      "permission": 0
    }
  ],
  "page": 1,
  "perPage": 1000
}

# Then, query the groups for that particular team. In our case, the Loki team which has an ID of "2".
curl -H "X-WEBAUTH-USER: admin" -H "X-WEBAUTH-GROUPS: lokiTeamOnExternalSystem" http://localhost:3000/api/teams/2/groups

[
  {
    "orgId": 1,
    "teamId": 2,
    "groupId": "lokiTeamOnExternalSystem"
  }
]
```

Finally, whenever Grafana receives a request with a header of `X-WEBAUTH-GROUPS: lokiTeamOnExternalSystem`, the user under authentication will be placed into the specified team. Placement in multiple teams is supported by using comma-separated values, e.g. `lokiTeamOnExternalSystem,CoreTeamOnExternalSystem`.

```bash
curl -H "X-WEBAUTH-USER: leonard" -H "X-WEBAUTH-GROUPS: lokiTeamOnExternalSystem" http://localhost:3000/dashboards/home

{
  "meta": {
    "isHome": true,
    "canSave": false,
    ...
}
```

With this, the user `leonard` will be automatically placed into the Loki team as part of Grafana authentication. An empty `X-WEBAUTH-GROUPS` header or the absence of a groups header will remove the user from all teams.

[Learn more about Team Sync]()

## Login token and session cookie

With `enable_login_token` set to `true`, Grafana will, after successful auth proxy header validation, assign the user a login token and cookie. You only have to configure your auth proxy to provide headers for the `/login` route. Requests via other routes will be authenticated using the cookie.

Use settings `login_maximum_inactive_lifetime_duration` and `login_maximum_lifetime_duration` under `[auth]` to control session lifetime.
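For reference, a minimal sketch of the relevant `grafana.ini` sections when enabling the login token flow is shown below. The lifetime values are illustrative placeholders rather than recommendations; adjust them to your own session policy.

```bash
[auth]
# Illustrative session lifetimes; tune these to your own requirements.
login_maximum_inactive_lifetime_duration = 7d
login_maximum_lifetime_duration = 30d

[auth.proxy]
enabled = true
header_name = X-WEBAUTH-USER
header_property = username
# Assign a login token and session cookie after the proxy headers are validated.
enable_login_token = true
```

With this in place, only requests to `/login` need to carry the auth proxy headers; subsequent requests are authenticated by the session cookie, as described above.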
--- aliases: - ../../../auth/ldap/ - ../../../installation/ldap/ description: Grafana LDAP Authentication Guide labels: products: - cloud - enterprise - oss menuTitle: LDAP title: Configure LDAP authentication weight: 300 --- # Configure LDAP authentication The LDAP integration in Grafana allows your Grafana users to login with their LDAP credentials. You can also specify mappings between LDAP group memberships and Grafana Organization user roles. [Enhanced LDAP authentication]() is available in [Grafana Cloud](/docs/grafana-cloud/) and in [Grafana Enterprise](). Refer to [Role-based access control]() to understand how you can control access with role-based permissions. ## Supported LDAP Servers Grafana uses a [third-party LDAP library](https://github.com/go-ldap/ldap) under the hood that supports basic LDAP v3 functionality. This means that you should be able to configure LDAP integration using any compliant LDAPv3 server, for example [OpenLDAP](#openldap) or [Active Directory](#active-directory) among [others](https://en.wikipedia.org/wiki/Directory_service#LDAP_implementations). ## Enable LDAP In order to use LDAP integration you'll first need to enable LDAP in the [main config file]() as well as specify the path to the LDAP specific configuration file (default: `/etc/grafana/ldap.toml`). After enabling LDAP, the default behavior is for Grafana users to be created automatically upon successful LDAP authentication. If you prefer for only existing Grafana users to be able to sign in, you can change `allow_sign_up` to `false` in the `[auth.ldap]` section. ```ini [auth.ldap] # Set to `true` to enable LDAP integration (default: `false`) enabled = true # Path to the LDAP specific configuration file (default: `/etc/grafana/ldap.toml`) config_file = /etc/grafana/ldap.toml # Allow sign-up should be `true` (default) to allow Grafana to create users on successful LDAP authentication. # If set to `false` only already existing Grafana users will be able to login. allow_sign_up = true ``` ## Disable org role synchronization If you use LDAP to authenticate users but don't use role mapping, and prefer to manually assign organizations and roles, you can use the `skip_org_role_sync` configuration option. ```ini [auth.ldap] # Set to `true` to enable LDAP integration (default: `false`) enabled = true # Path to the LDAP specific configuration file (default: `/etc/grafana/ldap.toml`) config_file = /etc/grafana/ldap.toml # Allow sign-up should be `true` (default) to allow Grafana to create users on successful LDAP authentication. # If set to `false` only already existing Grafana users will be able to login. allow_sign_up = true # Prevent synchronizing ldap users organization roles skip_org_role_sync = true ``` ## Grafana LDAP Configuration Depending on which LDAP server you're using and how that's configured your Grafana LDAP configuration may vary. See [configuration examples](#configuration-examples) for more information. **LDAP specific configuration file (ldap.toml) example:** ```bash [[servers]] # Ldap server host (specify multiple hosts space separated) host = "ldap.my_secure_remote_server.org" # Default port is 389 or 636 if use_ssl = true port = 636 # Set to true if LDAP server should use an encrypted TLS connection (either with STARTTLS or LDAPS) use_ssl = true # If set to true, use LDAP with STARTTLS instead of LDAPS start_tls = false # The value of an accepted TLS cipher. By default, this value is empty. 
Example value: ["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"]) # For a complete list of supported ciphers and TLS versions, refer to: https://go.dev/src/crypto/tls/cipher_suites.go # Starting with Grafana v11.0 only ciphers with ECDHE support are accepted for TLS 1.2 connections. tls_ciphers = [] # This is the minimum TLS version allowed. By default, this value is empty. Accepted values are: TLS1.1 (only for Grafana v10.4 or earlier), TLS1.2, TLS1.3. min_tls_version = "" # set to true if you want to skip SSL cert validation ssl_skip_verify = false # set to the path to your root CA certificate or leave unset to use system defaults # root_ca_cert = "/path/to/certificate.crt" # Authentication against LDAP servers requiring client certificates # client_cert = "/path/to/client.crt" # client_key = "/path/to/client.key" # Search user bind dn bind_dn = "cn=admin,dc=grafana,dc=org" # Search user bind password # If the password contains # or ; you have to wrap it with triple quotes. Ex """#password;""" bind_password = "grafana" # We recommend using variable expansion for the bind_password, for more info https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/#variable-expansion # bind_password = '$__env{LDAP_BIND_PASSWORD}' # Timeout in seconds. Applies to each host specified in the 'host' entry (space separated). timeout = 10 # User search filter, for example "(cn=%s)" or "(sAMAccountName=%s)" or "(uid=%s)" # Allow login from email or username, example "(|(sAMAccountName=%s)(userPrincipalName=%s))" search_filter = "(cn=%s)" # An array of base dns to search through search_base_dns = ["dc=grafana,dc=org"] # group_search_filter = "(&(objectClass=posixGroup)(memberUid=%s))" # group_search_filter_user_attribute = "distinguishedName" # group_search_base_dns = ["ou=groups,dc=grafana,dc=org"] # Specify names of the LDAP attributes your LDAP uses [servers.attributes] member_of = "memberOf" email = "email" ``` Whenever you modify the ldap.toml file, you must restart Grafana in order for the change(s) to take effect. ### Using environment variables You can interpolate variables in the TOML configuration from environment variables. For instance, you could externalize your `bind_password` that way: ```bash bind_password = "${LDAP_ADMIN_PASSWORD}" ``` ## LDAP debug view Grafana has an LDAP debug view built-in which allows you to test your LDAP configuration directly within Grafana. Only Grafana admins can use the LDAP debug view. Within this view, you'll be able to see which LDAP servers are currently reachable and test your current configuration. To use the debug view, complete the following steps: 1. Type the username of a user that exists within any of your LDAP server(s) 1. Then, press "Run" 1. If the user is found within any of your LDAP instances, the mapping information is displayed. Note that this does not work if you are using the single bind configuration outlined below. [Grafana Enterprise]() users with [enhanced LDAP integration]() enabled can also see sync status in the debug view. This requires the `ldap.status:read` permission. ### Bind and bind password By default the configuration expects you to specify a bind DN and bind password. This should be a read only user that can perform LDAP searches. When the user DN is found a second bind is performed with the user provided username and password (in the normal Grafana login form). 
```bash bind_dn = "cn=admin,dc=grafana,dc=org" bind_password = "grafana" ``` #### Single bind example If you can provide a single bind expression that matches all possible users, you can skip the second bind and bind against the user DN directly. This allows you to not specify a bind_password in the configuration file. ```bash bind_dn = "cn=%s,o=users,dc=grafana,dc=org" ``` In this case you skip providing a `bind_password` and instead provide a `bind_dn` value with a `%s` somewhere. This will be replaced with the username entered in on the Grafana login page. The search filter and search bases settings are still needed to perform the LDAP search to retrieve the other LDAP information (like LDAP groups and email). ### POSIX schema If your LDAP server does not support the `memberOf` attribute, add the following options: ```bash ## Group search filter, to retrieve the groups of which the user is a member (only set if memberOf attribute is not available) group_search_filter = "(&(objectClass=posixGroup)(memberUid=%s))" ## An array of the base DNs to search through for groups. Typically uses ou=groups group_search_base_dns = ["ou=groups,dc=grafana,dc=org"] ## the %s in the search filter will be replaced with the attribute defined below group_search_filter_user_attribute = "uid" ``` ### Group mappings In `[[servers.group_mappings]]` you can map an LDAP group to a Grafana organization and role. These will be synced every time the user logs in, with LDAP being the authoritative source. The first group mapping that an LDAP user is matched to will be used for the sync. If you have LDAP users that fit multiple mappings, the topmost mapping in the TOML configuration will be used. **LDAP specific configuration file (ldap.toml) example:** ```bash [[servers]] # other settings omitted for clarity [[servers.group_mappings]] group_dn = "cn=superadmins,dc=grafana,dc=org" org_role = "Admin" grafana_admin = true [[servers.group_mappings]] group_dn = "cn=admins,dc=grafana,dc=org" org_role = "Admin" [[servers.group_mappings]] group_dn = "cn=users,dc=grafana,dc=org" org_role = "Editor" [[servers.group_mappings]] group_dn = "*" org_role = "Viewer" ``` | Setting | Required | Description | Default | | --------------- | -------- | ----------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------- | | `group_dn` | Yes | LDAP distinguished name (DN) of LDAP group. If you want to match all (or no LDAP groups) then you can use wildcard (`"*"`) | | `org_role` | Yes | Assign users of `group_dn` the organization role `Admin`, `Editor`, or `Viewer`. The organization role name is case sensitive. | | `org_id` | No | The Grafana organization database id. Setting this allows for multiple group_dn's to be assigned to the same `org_role` provided the `org_id` differs | `1` (default org id) | | `grafana_admin` | No | When `true` makes user of `group_dn` Grafana server admin. A Grafana server admin has admin access over all organizations and users. | `false` | Commenting out a group mapping requires also commenting out the header of said group or it will fail validation as an empty mapping. 
Example: ```bash [[servers]] # other settings omitted for clarity [[servers.group_mappings]] group_dn = "cn=superadmins,dc=grafana,dc=org" org_role = "Admin" grafana_admin = true # [[servers.group_mappings]] # group_dn = "cn=admins,dc=grafana,dc=org" # org_role = "Admin" [[servers.group_mappings]] group_dn = "cn=users,dc=grafana,dc=org" org_role = "Editor" ``` ### Nested/recursive group membership Users with nested/recursive group membership must have an LDAP server that supports `LDAP_MATCHING_RULE_IN_CHAIN` and configure `group_search_filter` in a way that it returns the groups the submitted username is a member of. To configure `group_search_filter`: - You can set `group_search_base_dns` to specify where the matching groups are defined. - If you do not use `group_search_base_dns`, then the previously defined `search_base_dns` is used. **Active Directory example:** Active Directory groups store the Distinguished Names (DNs) of members, so your filter will need to know the DN for the user based only on the submitted username. Multiple DN templates are searched by combining filters with the LDAP OR-operator. Two examples: ```bash group_search_filter = "(member:1.2.840.113556.1.4.1941:=%s)" group_search_base_dns = ["DC=mycorp,DC=mytld"] group_search_filter_user_attribute = "dn" ``` ```bash group_search_filter = "(member:1.2.840.113556.1.4.1941:=CN=%s,[user container/OU])" group_search_filter = "(|(member:1.2.840.113556.1.4.1941:=CN=%s,[user container/OU])(member:1.2.840.113556.1.4.1941:=CN=%s,[another user container/OU]))" group_search_filter_user_attribute = "cn" ``` For more information on AD searches see [Microsoft's Search Filter Syntax](https://docs.microsoft.com/en-us/windows/desktop/adsi/search-filter-syntax) documentation. For troubleshooting, changing `member_of` in `[servers.attributes]` to "dn" will show you more accurate group memberships when [debug is enabled](#troubleshooting). ## Configuration examples The following examples describe different LDAP configuration options. ### OpenLDAP [OpenLDAP](http://www.openldap.org/) is an open source directory service. **LDAP specific configuration file (ldap.toml):** ```bash [[servers]] host = "127.0.0.1" port = 389 use_ssl = false start_tls = false ssl_skip_verify = false bind_dn = "cn=admin,dc=grafana,dc=org" bind_password = "grafana" search_filter = "(cn=%s)" search_base_dns = ["dc=grafana,dc=org"] [servers.attributes] member_of = "memberOf" email = "email" # [[servers.group_mappings]] omitted for clarity ``` ### Multiple LDAP servers Grafana does support receiving information from multiple LDAP servers. 
**LDAP specific configuration file (ldap.toml):**

```bash
# --- First LDAP Server ---

[[servers]]
host = "10.0.0.1"
port = 389
use_ssl = false
start_tls = false
ssl_skip_verify = false
bind_dn = "cn=admin,dc=grafana,dc=org"
bind_password = "grafana"
search_filter = "(cn=%s)"
search_base_dns = ["ou=users,dc=grafana,dc=org"]

[servers.attributes]
member_of = "memberOf"
email = "email"

[[servers.group_mappings]]
group_dn = "cn=admins,ou=groups,dc=grafana,dc=org"
org_role = "Admin"
grafana_admin = true

# --- Second LDAP Server ---

[[servers]]
host = "10.0.0.2"
port = 389
use_ssl = false
start_tls = false
ssl_skip_verify = false
bind_dn = "cn=admin,dc=grafana,dc=org"
bind_password = "grafana"
search_filter = "(cn=%s)"
search_base_dns = ["ou=users,dc=grafana,dc=org"]

[servers.attributes]
member_of = "memberOf"
email = "email"

[[servers.group_mappings]]
group_dn = "cn=editors,ou=groups,dc=grafana,dc=org"
org_role = "Editor"

[[servers.group_mappings]]
group_dn = "*"
org_role = "Viewer"
```

### Active Directory

[Active Directory](<https://technet.microsoft.com/en-us/library/hh831484(v=ws.11).aspx>) is a directory service which is commonly used in Windows environments.

Assuming the following Active Directory server setup:

- IP address: `10.0.0.1`
- Domain: `CORP`
- DNS name: `corp.local`

**LDAP specific configuration file (ldap.toml):**

```bash
[[servers]]
host = "10.0.0.1"
port = 3269
use_ssl = true
start_tls = false
ssl_skip_verify = true
bind_dn = "CORP\\%s"
search_filter = "(sAMAccountName=%s)"
search_base_dns = ["dc=corp,dc=local"]

[servers.attributes]
member_of = "memberOf"
email = "mail"

# [[servers.group_mappings]] omitted for clarity
```

#### Port requirements

In the above example, SSL is enabled and an encrypted port has been configured. If your Active Directory doesn't support SSL, change `use_ssl = false` and `port = 389`. Please inspect your Active Directory configuration and documentation to find the correct settings.

For more information about Active Directory and port requirements, see [link](<https://technet.microsoft.com/en-us/library/dd772723(v=ws.10)>).

## Troubleshooting

To troubleshoot and get more log info, enable LDAP debug logging in the [main config file]().

```bash
[log]
filters = ldap:debug
```
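If you manage Grafana through environment variables rather than the configuration files (for example, when running the official `grafana/grafana` Docker image), the same settings can usually be applied with Grafana's `GF_<SECTION>_<KEY>` override convention. The following is only a sketch; the variable names assume that convention and should be verified against your deployment.

```bash
# Sketch: enable LDAP and its debug logging via environment variables
# (GF_<SECTION>_<KEY> convention), mounting a local ldap.toml into the container.
docker run -d -p 3000:3000 \
  -e GF_AUTH_LDAP_ENABLED=true \
  -e GF_AUTH_LDAP_CONFIG_FILE=/etc/grafana/ldap.toml \
  -e GF_LOG_FILTERS="ldap:debug" \
  -v $(pwd)/ldap.toml:/etc/grafana/ldap.toml \
  grafana/grafana
```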
---
aliases:
  - ../../../auth/google/
description: Grafana Google OAuth Guide
labels:
  products:
    - cloud
    - enterprise
    - oss
menuTitle: Google OAuth
title: Configure Google OAuth authentication
weight: 1100
---

# Configure Google OAuth authentication

To enable Google OAuth, you must register your application with Google. Google will generate a client ID and secret key for you to use.

If users use the same email address in Google that they use with other authentication providers (such as Grafana.com), you need to do additional configuration to ensure that the users are matched correctly. Please refer to the [Using the same email address to login with different identity providers]() documentation for more information.

## Create Google OAuth keys

First, you need to create a Google OAuth Client:

1. Go to https://console.developers.google.com/apis/credentials.
1. Create a new project if you don't have one already.
1. Enter a project name. The **Organization** and **Location** fields should both be set to your organization's information.
1. In **OAuth consent screen** select the **External** User Type. Click **CREATE**.
1. Fill out the requested information using the URL of your Grafana Cloud instance.
1. Accept the defaults, or customize the consent screen options.
1. Click **Create Credentials**, then click **OAuth Client ID** in the drop-down menu
1. Enter the following:
   - **Application Type**: Web application
   - **Name**: Grafana
   - **Authorized JavaScript origins**: `https://<YOUR_GRAFANA_URL>`
   - **Authorized redirect URIs**: `https://<YOUR_GRAFANA_URL>/login/google`
   - Replace `<YOUR_GRAFANA_URL>` with the URL of your Grafana instance. The URL you enter is the one for your Grafana instance home page, not your Grafana Cloud portal URL.
1. Click **Create**
1. Copy the Client ID and Client Secret from the 'OAuth Client' modal

## Configure Google authentication client using the Grafana UI

Available in Public Preview in Grafana 10.4 behind the `ssoSettingsApi` feature toggle.

As a Grafana Admin, you can configure the Google OAuth client from within Grafana using the Grafana UI. To do this, navigate to the **Administration > Authentication > Google** page and fill in the form. If you have a current configuration in the Grafana configuration file, the form will be pre-populated with those values. Otherwise, the form will contain default values.

After you have filled in the form, click **Save**. If the save was successful, Grafana will apply the new configurations.

If you need to reset changes made in the UI back to the default values, click **Reset**. After you have reset the changes, Grafana will apply the configuration from the Grafana configuration file (if there is any configuration) or the default values.

If you run Grafana in high availability mode, configuration changes may not get applied to all Grafana instances immediately. You may need to wait a few minutes for the configuration to propagate to all Grafana instances.

## Configure Google authentication client using the Terraform provider

Available in Public Preview in Grafana 10.4 behind the `ssoSettingsApi` feature toggle. Supported in the Terraform provider since v2.12.0.
```terraform resource "grafana_sso_settings" "google_sso_settings" { provider_name = "google" oauth2_settings { name = "Google" client_id = "CLIENT_ID" client_secret = "CLIENT_SECRET" allow_sign_up = true auto_login = false scopes = "openid email profile" allowed_domains = "mycompany.com mycompany.org" hosted_domain = "mycompany.com" use_pkce = true } } ``` Go to [Terraform Registry](https://registry.terraform.io/providers/grafana/grafana/latest/docs/resources/sso_settings) for a complete reference on using the `grafana_sso_settings` resource. ## Configure Google authentication client using the Grafana configuration file Ensure that you have access to the [Grafana configuration file](). ### Enable Google OAuth in Grafana Specify the Client ID and Secret in the [Grafana configuration file](). For example: ```bash [auth.google] enabled = true allow_sign_up = true auto_login = false client_id = CLIENT_ID client_secret = CLIENT_SECRET scopes = openid email profile auth_url = https://accounts.google.com/o/oauth2/v2/auth token_url = https://oauth2.googleapis.com/token api_url = https://openidconnect.googleapis.com/v1/userinfo allowed_domains = mycompany.com mycompany.org hosted_domain = mycompany.com use_pkce = true ``` You may have to set the `root_url` option of `[server]` for the callback URL to be correct. For example in case you are serving Grafana behind a proxy. Restart the Grafana back-end. You should now see a Google login button on the login page. You can now login or sign up with your Google accounts. The `allowed_domains` option is optional, and domains were separated by space. You may allow users to sign-up via Google authentication by setting the `allow_sign_up` option to `true`. When this option is set to `true`, any user successfully authenticating via Google authentication will be automatically signed up. You may specify a domain to be passed as `hd` query parameter accepted by Google's OAuth 2.0 authentication API. Refer to Google's OAuth [documentation](https://developers.google.com/identity/openid-connect/openid-connect#hd-param). Since Grafana 10.3.0, the `hd` parameter retrieved from Google ID token is also used to determine the user's hosted domain. The Google Oauth `allowed_domains` configuration option is used to restrict access to users from a specific domain. If the `allowed_domains` configuration option is set, the `hd` parameter from the Google ID token must match the `allowed_domains` configuration option. If the `hd` parameter from the Google ID token does not match the `allowed_domains` configuration option, the user is denied access. When an account does not belong to a google workspace, the hd claim will not be available. This validation is enabled by default. To disable this validation, set the `validate_hd` configuration option to `false`. The `allowed_domains` configuration option will use the email claim to validate the domain. #### PKCE IETF's [RFC 7636](https://datatracker.ietf.org/doc/html/rfc7636) introduces "proof key for code exchange" (PKCE) which provides additional protection against some forms of authorization code interception attacks. PKCE will be required in [OAuth 2.1](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-03). You can disable PKCE in Grafana by setting `use_pkce` to `false` in the`[auth.google]` section. #### Configure refresh token When a user logs in using an OAuth provider, Grafana verifies that the access token has not expired. 
When an access token expires, Grafana uses the provided refresh token (if any exists) to obtain a new access token. Grafana uses a refresh token to obtain a new access token without requiring the user to log in again. If a refresh token doesn't exist, Grafana logs the user out of the system after the access token has expired. By default, Grafana includes the `access_type=offline` parameter in the authorization request to request a refresh token. Refresh token fetching and access token expiration check is enabled by default for the Google provider since Grafana v10.1.0. If you would like to disable access token expiration check then set the `use_refresh_token` configuration value to `false`. The `accessTokenExpirationCheck` feature toggle has been removed in Grafana v10.3.0 and the `use_refresh_token` configuration value will be used instead for configuring refresh token fetching and access token expiration check. #### Configure automatic login Set `auto_login` option to true to attempt login automatically, skipping the login screen. This setting is ignored if multiple auth providers are configured to use auto login. ``` auto_login = true ``` ### Configure group synchronization Available in [Grafana Enterprise](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/introduction/grafana-enterprise) and [Grafana Cloud](/docs/grafana-cloud/). Grafana supports syncing users to teams and roles based on their Google groups. To set up group sync for Google OAuth: 1. Enable the Google Cloud Identity API on your [organization's dashboard](https://console.cloud.google.com/apis/api/cloudidentity.googleapis.com/). 1. Add the `https://www.googleapis.com/auth/cloud-identity.groups.readonly` scope to your Grafana `[auth.google]` configuration: Example: ```ini [auth.google] # .. scopes = openid email profile https://www.googleapis.com/auth/cloud-identity.groups.readonly ``` The external group ID for a Google group is the group's email address, such as `[email protected]`. To learn more about how to configure group synchronization, refer to [Configure team sync]() and [Configure group attribute sync](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-security/configure-group-attribute-sync) documentation. #### Configure allowed groups To limit access to authenticated users that are members of one or more groups, set `allowed_groups` to a comma or space separated list of groups. Google groups are referenced by the group email key. For example, `[email protected]`. Add the `https://www.googleapis.com/auth/cloud-identity.groups.readonly` scope to your Grafana `[auth.google]` scopes configuration to retrieve groups. #### Configure role mapping Unless `skip_org_role_sync` option is enabled, the user's role will be set to the role mapped from Google upon user login. If no mapping is set the default instance role is used. The user's role is retrieved using a [JMESPath](http://jmespath.org/examples.html) expression from the `role_attribute_path` configuration option. To map the server administrator role, use the `allow_assign_grafana_admin` configuration option. If no valid role is found, the user is assigned the role specified by [the `auto_assign_org_role` option](). You can disable this default role assignment by setting `role_attribute_strict = true`. This setting denies user access if no role or an invalid role is returned after evaluating the `role_attribute_path` and the `org_mapping` expressions. 
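For orientation, a minimal `[auth.google]` sketch that enforces strict role mapping is shown below. The JMESPath expression and the group address are placeholders for illustration only; substitute your own values.

```ini
[auth.google]
# Scope required when mapping roles from Google groups (see the group synchronization section above).
scopes = openid email profile https://www.googleapis.com/auth/cloud-identity.groups.readonly
# Placeholder expression: grant Editor to members of a hypothetical group, Viewer otherwise.
role_attribute_path = contains(groups[*], '[email protected]') && 'Editor' || 'Viewer'
# Deny login when no valid role can be derived.
role_attribute_strict = true
skip_org_role_sync = false
```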
To ease configuration of a proper JMESPath expression, go to [JMESPath](http://jmespath.org/) to test and evaluate expressions with custom payloads. By default skip_org_role_sync is enabled. skip_org_role_sync will default to false in Grafana v10.3.0 and later versions. ##### Role mapping examples This section includes examples of JMESPath expressions used for role mapping. ##### Org roles mapping example The Google integration uses the external users' groups in the `org_mapping` configuration to map organizations and roles based on their Google group membership. In this example, the user has been granted the role of a `Viewer` in the `org_foo` organization, and the role of an `Editor` in the `org_bar` and `org_baz` orgs. The external user is part of the following Google groups: `group-1` and `group-2`. Config: ```ini org_mapping = group-1:org_foo:Viewer group-2:org_bar:Editor *:org_baz:Editor ``` ###### Map roles using user information from OAuth token In this example, the user with email `[email protected]` has been granted the `Admin` role. All other users are granted the `Viewer` role. ```ini role_attribute_path = email=='[email protected]' && 'Admin' || 'Viewer' skip_org_role_sync = false ``` ###### Map roles using groups In this example, the user from Google group '[email protected]' have been granted the `Editor` role. All other users are granted the `Viewer` role. ```ini role_attribute_path = contains(groups[*], '[email protected]') && 'Editor' || 'Viewer' skip_org_role_sync = false ``` Add the `https://www.googleapis.com/auth/cloud-identity.groups.readonly` scope to your Grafana `[auth.google]` scopes configuration to retrieve groups. ###### Map server administrator role In this example, the user with email `[email protected]` has been granted the `Admin` organization role as well as the Grafana server admin role. All other users are granted the `Viewer` role. ```ini allow_assign_grafana_admin = true skip_org_role_sync = false role_attribute_path = email=='[email protected]' && 'GrafanaAdmin' || 'Viewer' ``` ###### Map one role to all users In this example, all users will be assigned `Viewer` role regardless of the user information received from the identity provider. ```ini role_attribute_path = "'Viewer'" skip_org_role_sync = false ``` ## Configuration options The following table outlines the various Google OAuth configuration options. You can apply these options as environment variables, similar to any other configuration within Grafana. For more information, refer to [Override configuration with environment variables](). | Setting | Required | Supported on Cloud | Description | Default | | ---------------------------- | -------- | ------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------- | | `enabled` | No | Yes | Enables Google authentication. | `false` | | `name` | No | Yes | Name that refers to the Google authentication from the Grafana user interface. | `Google` | | `icon` | No | Yes | Icon used for the Google authentication in the Grafana user interface. 
| `google` | | `client_id` | Yes | Yes | Client ID of the App. | | | `client_secret` | Yes | Yes | Client secret of the App. | | | `auth_url` | Yes | Yes | Authorization endpoint of the Google OAuth provider. | `https://accounts.google.com/o/oauth2/v2/auth` | | `token_url` | Yes | Yes | Endpoint used to obtain the OAuth2 access token. | `https://oauth2.googleapis.com/token` | | `api_url` | Yes | Yes | Endpoint used to obtain user information compatible with [OpenID UserInfo](https://connect2id.com/products/server/docs/api/userinfo). | `https://openidconnect.googleapis.com/v1/userinfo` | | `auth_style` | No | Yes | Name of the [OAuth2 AuthStyle](https://pkg.go.dev/golang.org/x/oauth2#AuthStyle) to be used when ID token is requested from OAuth2 provider. It determines how `client_id` and `client_secret` are sent to Oauth2 provider. Available values are `AutoDetect`, `InParams` and `InHeader`. | `AutoDetect` | | `scopes` | No | Yes | List of comma- or space-separated OAuth2 scopes. | `openid email profile` | | `allow_sign_up` | No | Yes | Controls Grafana user creation through the Google login. Only existing Grafana users can log in with Google if set to `false`. | `true` | | `auto_login` | No | Yes | Set to `true` to enable users to bypass the login screen and automatically log in. This setting is ignored if you configure multiple auth providers to use auto-login. | `false` | | `hosted_domain` | No | Yes | Specifies the domain to restrict access to users from that domain. This value is appended to the authorization request using the `hd` parameter. | | | `validate_hd` | No | Yes | Set to `false` to disable the validation of the `hd` parameter from the Google ID token. For more informatiion, refer to [Enable Google OAuth in Grafana](). | `true` | | `role_attribute_strict` | No | Yes | Set to `true` to deny user login if the Grafana org role cannot be extracted using `role_attribute_path` or `org_mapping`. For more information on user role mapping, refer to [Configure role mapping](). | `false` | | `org_attribute_path` | No | No | [JMESPath](http://jmespath.org/examples.html) expression to use for Grafana org to role lookup. Grafana will first evaluate the expression using the OAuth2 ID token. If no value is returned, the expression will be evaluated using the user information obtained from the UserInfo endpoint. The result of the evaluation will be mapped to org roles based on `org_mapping`. For more information on org to role mapping, refer to [Org roles mapping example](#org-roles-mapping-example). | | | `org_mapping` | No | No | List of comma- or space-separated `<ExternalOrgName>:<OrgIdOrName>:<Role>` mappings. Value can be `*` meaning "All users". Role is optional and can have the following values: `None`, `Viewer`, `Editor` or `Admin`. For more information on external organization to role mapping, refer to [Org roles mapping example](#org-roles-mapping-example). | | | `allow_assign_grafana_admin` | No | No | Set to `true` to automatically sync the Grafana server administrator role. When enabled, if the Google user's App role is `GrafanaAdmin`, Grafana grants the user server administrator privileges and the organization administrator role. If disabled, the user will only receive the organization administrator role. For more details on user role mapping, refer to [Map roles](). | `false` | | `skip_org_role_sync` | No | Yes | Set to `true` to stop automatically syncing user roles. This will allow you to set organization roles for your users from within Grafana manually. 
| `allowed_groups` | No | Yes | List of comma- or space-separated groups. The user should be a member of at least one group to log in. If you configure `allowed_groups`, you must also configure Google to include the `groups` claim following [Configure allowed groups](). | |
| `allowed_organizations` | No | Yes | List of comma- or space-separated Azure tenant identifiers. The user should be a member of at least one tenant to log in. | |
| `allowed_domains` | No | Yes | List of comma- or space-separated domains. The user should belong to at least one domain to log in. | |
| `tls_skip_verify_insecure` | No | No | If set to `true`, the client accepts any certificate presented by the server and any host name in that certificate. _You should only use this for testing_, because this mode leaves SSL/TLS susceptible to man-in-the-middle attacks. | `false` |
| `tls_client_cert` | No | No | The path to the certificate. | |
| `tls_client_key` | No | No | The path to the key. | |
| `tls_client_ca` | No | No | The path to the trusted certificate authority list. | |
| `use_pkce` | No | Yes | Set to `true` to use [Proof Key for Code Exchange (PKCE)](https://datatracker.ietf.org/doc/html/rfc7636). Grafana uses the SHA256 based `S256` challenge method and a 128 bytes (base64url encoded) code verifier. | `true` |
| `use_refresh_token` | No | Yes | Enables the use of refresh tokens and checks for access token expiration. When enabled, Grafana automatically adds the `prompt=consent` and `access_type=offline` parameters to the authorization request. | `true` |
| `signout_redirect_url` | No | Yes | URL to redirect to after the user logs out. | |
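As noted above, each of these settings can also be supplied as an environment variable. The following is a minimal sketch, assuming the `GF_AUTH_GOOGLE_<OPTION>` naming pattern that matches the `GF_AUTH_AZUREAD_CLIENT_ID` example shown later in this guide; all values are placeholders.

```
GF_AUTH_GOOGLE_ENABLED=true
GF_AUTH_GOOGLE_CLIENT_ID=CLIENT_ID
GF_AUTH_GOOGLE_CLIENT_SECRET=CLIENT_SECRET
GF_AUTH_GOOGLE_ALLOWED_DOMAINS=mycompany.com,mycompany.org
GF_AUTH_GOOGLE_SKIP_ORG_ROLE_SYNC=false
```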
--- aliases: - ../../../auth/azuread/ description: Grafana Azure AD OAuth Guide keywords: - grafana - configuration - documentation - oauth labels: products: - cloud - enterprise - oss menuTitle: Azure AD/Entra ID OAuth title: Configure Azure AD/Entra ID OAuth authentication weight: 800 --- # Configure Azure AD/Entra ID OAuth authentication The Azure AD authentication allows you to use a Microsoft Entra ID (formerly known as Azure Active Directory) tenant as an identity provider for Grafana. You can use Entra ID application roles to assign users and groups to Grafana roles from the Azure Portal. If Users use the same email address in Microsoft Entra ID that they use with other authentication providers (such as Grafana.com), you need to do additional configuration to ensure that the users are matched correctly. Please refer to [Using the same email address to login with different identity providers]() for more information. ## Create the Microsoft Entra ID application To enable the Azure AD/Entra ID OAuth, register your application with Entra ID. 1. Log in to [Azure Portal](https://portal.azure.com), then click **Microsoft Entra ID** in the side menu. 1. If you have access to more than one tenant, select your account in the upper right. Set your session to the Entra ID tenant you wish to use. 1. Under **Manage** in the side menu, click **App Registrations** > **New Registration**. Enter a descriptive name. 1. Under **Redirect URI**, select the app type **Web**. 1. Add the following redirect URLs `https://<grafana domain>/login/azuread` and `https://<grafana domain>` then click **Register**. The app's **Overview** page opens. 1. Note the **Application ID**. This is the OAuth client ID. 1. Click **Endpoints** from the top menu. - Note the **OAuth 2.0 authorization endpoint (v2)** URL. This is the authorization URL. - Note the **OAuth 2.0 token endpoint (v2)**. This is the token URL. 1. Click **Certificates & secrets** in the side menu, then add a new entry under **Client secrets** with the following configuration. - Description: Grafana OAuth - Expires: Select an expiration period 1. Click **Add** then copy the key **Value**. This is the OAuth client secret. Make sure that you copy the string in the **Value** field, rather than the one in the **Secret ID** field. 1. Define the required application roles for Grafana [using the Azure Portal](#configure-application-roles-for-grafana-in-the-azure-portal) or [using the manifest file](#configure-application-roles-for-grafana-in-the-manifest-file). 1. Go to **Microsoft Entra ID** and then to **Enterprise Applications**, under **Manage**. 1. Search for your application and click it. 1. Click **Users and Groups**. 1. Click **Add user/group** to add a user or group to the Grafana roles. When assigning a group to a Grafana role, ensure that users are direct members of the group. Users in nested groups will not have access to Grafana due to limitations within Azure AD/Entra ID side. For more information, see [Microsoft Entra service limits and restrictions](https://learn.microsoft.com/en-us/entra/identity/users/directory-service-limits-restrictions). ### Configure application roles for Grafana in the Azure Portal This section describes setting up basic application roles for Grafana within the Azure Portal. For more information, see [Add app roles to your application and receive them in the token](https://learn.microsoft.com/en-us/entra/identity-platform/howto-add-app-roles-in-apps). 1. 
Go to **App Registrations**, search for your application, and click it. 1. Click **App roles** and then **Create app role**. 1. Define a role corresponding to each Grafana role: Viewer, Editor, and Admin. 1. Choose a **Display name** for the role. For example, "Grafana Editor". 1. Set the **Allowed member types** to **Users/Groups**. 1. Ensure that the **Value** field matches the Grafana role name. For example, "Editor". 1. Choose a **Description** for the role. For example, "Grafana Editor Users". 1. Click **Apply**. ### Configure application roles for Grafana in the manifest file If you prefer to configure the application roles for Grafana in the manifest file, complete the following steps: 1. Go to **App Registrations**, search for your application, and click it. 1. Click **Manifest**. 1. Add a Universally Unique Identifier to each role. Every role requires a [Universally Unique Identifier](https://en.wikipedia.org/wiki/Universally_unique_identifier) which you can generate on Linux with `uuidgen`, and on Windows through Microsoft PowerShell with `New-Guid`. 1. Replace each "SOME_UNIQUE_ID" with the generated ID in the manifest file: ```json "appRoles": [ { "allowedMemberTypes": [ "User" ], "description": "Grafana org admin Users", "displayName": "Grafana Org Admin", "id": "SOME_UNIQUE_ID", "isEnabled": true, "lang": null, "origin": "Application", "value": "Admin" }, { "allowedMemberTypes": [ "User" ], "description": "Grafana read only Users", "displayName": "Grafana Viewer", "id": "SOME_UNIQUE_ID", "isEnabled": true, "lang": null, "origin": "Application", "value": "Viewer" }, { "allowedMemberTypes": [ "User" ], "description": "Grafana Editor Users", "displayName": "Grafana Editor", "id": "SOME_UNIQUE_ID", "isEnabled": true, "lang": null, "origin": "Application", "value": "Editor" } ], ``` 1. Click **Save**. ### Assign server administrator privileges If the application role received by Grafana is `GrafanaAdmin`, Grafana grants the user server administrator privileges. This is useful if you want to grant server administrator privileges to a subset of users. Grafana also assigns the user the `Admin` role of the default organization. The setting `allow_assign_grafana_admin` under `[auth.azuread]` must be set to `true` for this to work. If the setting is set to `false`, the user is assigned the role of `Admin` of the default organization, but not server administrator privileges. ```json { "allowedMemberTypes": ["User"], "description": "Grafana server admin Users", "displayName": "Grafana Server Admin", "id": "SOME_UNIQUE_ID", "isEnabled": true, "lang": null, "origin": "Application", "value": "GrafanaAdmin" } ``` ## Before you begin Ensure that you have followed the steps in [Create the Microsoft Entra ID application](#create-the-microsoft-entra-id-application) before you begin. ## Configure Azure AD authentication client using the Grafana UI Available in Public Preview in Grafana 10.4 behind the `ssoSettingsApi` feature toggle. As a Grafana Admin, you can configure your Azure AD/Entra ID OAuth client from within Grafana using the Grafana UI. To do this, navigate to the **Administration > Authentication > Azure AD** page and fill in the form. If you have a current configuration in the Grafana configuration file, the form will be pre-populated with those values. Otherwise the form will contain default values. After you have filled in the form, click **Save** to save the configuration. If the save was successful, Grafana will apply the new configurations. 
If you need to reset changes you made in the UI back to the default values, click **Reset**. After you have reset the changes, Grafana will apply the configuration from the Grafana configuration file (if there is any configuration) or the default values. If you run Grafana in high availability mode, configuration changes may not get applied to all Grafana instances immediately. You may need to wait a few minutes for the configuration to propagate to all Grafana instances. ## Configure Azure AD authentication client using the Terraform provider Available in Public Preview in Grafana 10.4 behind the `ssoSettingsApi` feature toggle. Supported in the Terraform provider since v2.12.0. ```terraform resource "grafana_sso_settings" "azuread_sso_settings" { provider_name = "azuread" oauth2_settings { name = "Azure AD" auth_url = "https://login.microsoftonline.com/TENANT_ID/oauth2/v2.0/authorize" token_url = "https://login.microsoftonline.com/TENANT_ID/oauth2/v2.0/token" client_id = "APPLICATION_ID" client_secret = "CLIENT_SECRET" allow_sign_up = true auto_login = false scopes = "openid email profile" allowed_organizations = "TENANT_ID" role_attribute_strict = false allow_assign_grafana_admin = false skip_org_role_sync = false use_pkce = true } } ``` Refer to [Terraform Registry](https://registry.terraform.io/providers/grafana/grafana/latest/docs/resources/sso_settings) for a complete reference on using the `grafana_sso_settings` resource. ## Configure Azure AD authentication client using the Grafana configuration file Ensure that you have access to the [Grafana configuration file](). ### Enable Azure AD OAuth in Grafana Add the following to the [Grafana configuration file](): ``` [auth.azuread] name = Azure AD enabled = true allow_sign_up = true auto_login = false client_id = APPLICATION_ID client_secret = CLIENT_SECRET scopes = openid email profile auth_url = https://login.microsoftonline.com/TENANT_ID/oauth2/v2.0/authorize token_url = https://login.microsoftonline.com/TENANT_ID/oauth2/v2.0/token allowed_domains = allowed_groups = allowed_organizations = TENANT_ID role_attribute_strict = false allow_assign_grafana_admin = false skip_org_role_sync = false use_pkce = true ``` You can also use these environment variables to configure **client_id** and **client_secret**: ``` GF_AUTH_AZUREAD_CLIENT_ID GF_AUTH_AZUREAD_CLIENT_SECRET ``` Verify that the Grafana [root_url]() is set in your Azure Application Redirect URLs. ### Configure refresh token When a user logs in using an OAuth provider, Grafana verifies that the access token has not expired. When an access token expires, Grafana uses the provided refresh token (if any exists) to obtain a new access token. Grafana uses a refresh token to obtain a new access token without requiring the user to log in again. If a refresh token doesn't exist, Grafana logs the user out of the system after the access token has expired. Refresh token fetching and access token expiration check is enabled by default for the AzureAD provider since Grafana v10.1.0. If you would like to disable access token expiration check then set the `use_refresh_token` configuration value to `false`. > **Note:** The `accessTokenExpirationCheck` feature toggle has been removed in Grafana v10.3.0 and the `use_refresh_token` configuration value will be used instead for configuring refresh token fetching and access token expiration check. 
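If you want to opt out of this behavior, the following is a minimal sketch showing the setting described above in context, placed under `[auth.azuread]` as in the configuration example earlier in this section:

```ini
[auth.azuread]
# Disable the access token expiration check and refresh token usage
use_refresh_token = false
```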
### Configure allowed tenants To limit access to authenticated users who are members of one or more tenants, set `allowed_organizations` to a comma- or space-separated list of tenant IDs. You can find tenant IDs on the Azure portal under **Microsoft Entra ID -> Overview**. Make sure to include the tenant IDs of all the federated Users' root directory if your Entra ID contains external identities. For example, if you want to only give access to members of the tenant `example` with an ID of `8bab1c86-8fba-33e5-2089-1d1c80ec267d`, then set the following: ``` allowed_organizations = 8bab1c86-8fba-33e5-2089-1d1c80ec267d ``` ### Configure allowed groups Microsoft Entra ID groups can be used to limit user access to Grafana. For more information about managing groups in Entra ID, refer to [Manage Microsoft Entra groups and group membership](https://learn.microsoft.com/en-us/entra/fundamentals/how-to-manage-groups). To limit access to authenticated users who are members of one or more Entra ID groups, set `allowed_groups` to a **comma-** or **space-separated** list of group object IDs. 1. To find object IDs for a specific group on the Azure portal, go to **Microsoft Entra ID > Manage > Groups**. You can find the Object Id of a group by clicking on the group and then clicking on **Properties**. The object ID is listed under **Object ID**. If you want to only give access to members of the group `example` with an Object Id of `8bab1c86-8fba-33e5-2089-1d1c80ec267d`, then set the following: ``` allowed_groups = 8bab1c86-8fba-33e5-2089-1d1c80ec267d ``` 1. You must enable adding the [group attribute](https://learn.microsoft.com/en-us/entra/identity-platform/optional-claims#configure-groups-optional-claims) to the tokens in your Entra ID App registration either [from the Azure Portal](#configure-group-membership-claims-on-the-azure-portal) or [from the manifest file](#configure-group-membership-claim-in-the-manifest-file). #### Configure group membership claims on the Azure Portal To ensure that the `groups` claim is included in the token, add the `groups` claim to the token configuration either through the Azure Portal UI or by editing the manifest file. To configure group membership claims from the Azure Portal UI, complete the following steps: 1. Navigate to the **App Registrations** page and select your application. 1. Under **Manage** in the side menu, select **Token configuration**. 1. Click **Add groups claim** and select the relevant option for your use case (for example, **Security groups** and **Groups assigned to the application**). For more information, see [Configure groups optional claims](https://learn.microsoft.com/en-us/entra/identity-platform/optional-claims#configure-groups-optional-claims). If the user is a member of more than 200 groups, Entra ID does not emit the groups claim in the token and instead emits a group overage claim. To set up a group overage claim, see [Users with over 200 Group assignments](#users-with-over-200-group-assignments). #### Configure group membership claim in the manifest file 1. Go to **App Registrations**, search for your application, and click it. 1. Click **Manifest**. 1. Add the following to the root of the manifest file: ``` "groupMembershipClaims": "ApplicationGroup, SecurityGroup" ``` ### Configure allowed domains The `allowed_domains` option limits access to users who belong to specific domains. Separate domains with space or comma. 
For example,

```
allowed_domains = mycompany.com mycompany.org
```

### PKCE

IETF's [RFC 7636](https://datatracker.ietf.org/doc/html/rfc7636) introduces "proof key for code exchange" (PKCE) which provides additional protection against some forms of authorization code interception attacks.

PKCE will be required in [OAuth 2.1](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-03).

> You can disable PKCE in Grafana by setting `use_pkce` to `false` in the `[auth.azuread]` section.

### Configure automatic login

To bypass the login screen and log in automatically, enable the `auto_login` option. This setting is ignored if multiple auth providers are configured to use auto login.

```
auto_login = true
```

### Group sync (Enterprise only)

With group sync you can map your Entra ID groups to teams and roles in Grafana. This allows users to automatically be added to the correct teams and be granted the correct roles in Grafana.

You can reference Entra ID groups by group object ID, like `8bab1c86-8fba-33e5-2089-1d1c80ec267d`.

To learn more about group synchronization, refer to [Configure team sync](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-security/configure-team-sync) and [Configure group attribute sync](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-security/configure-group-attribute-sync).

## Common troubleshooting

Here are some common issues and particulars you can run into when configuring Azure AD authentication in Grafana.

### Users with over 200 Group assignments

To ensure that the token size doesn't exceed HTTP header size limits, Entra ID limits the number of object IDs that it includes in the groups claim. If a user is a member of more groups than the overage limit (200), then Entra ID does not emit the groups claim in the token and emits a group overage claim instead.

> More information in [Groups overage claim](https://learn.microsoft.com/en-us/entra/identity-platform/id-token-claims-reference#groups-overage-claim)

If Grafana receives a token with a group overage claim instead of a groups claim, Grafana attempts to retrieve the user's group membership by calling the included endpoint.

The 'App registration' must include the `GroupMember.Read.All` API permission for group overage claim calls to succeed. Admin consent might be required for this permission.

#### Configure the required Graph API permissions

1. Navigate to **Microsoft Entra ID > Manage > App registrations** and select your application.
1. Select **API permissions** and then click on **Add a permission**.
1. Select **Microsoft Graph** from the list of APIs.
1. Select **Delegated permissions**.
1. Under the **GroupMember** section, select **GroupMember.Read.All**.
1. Click **Add permissions**.

Admin consent may be required for this permission.

### Force fetching groups from Microsoft Graph API

To force fetching groups from the Microsoft Graph API instead of the `id_token`, use the `force_use_graph_api` config option.

```
force_use_graph_api = true
```

### Map roles

By default, Azure AD authentication will map users to organization roles based on the most privileged application role assigned to the user in Entra ID.

If no application role is found, the user is assigned the role specified by [the `auto_assign_org_role` option]().

You can disable this default role assignment by setting `role_attribute_strict = true`. This setting denies user access if no role or an invalid role is returned and the `org_mapping` expression evaluates to an empty mapping.
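A minimal sketch of enforcing strict role mapping, based on the option described above; combine it with the rest of your `[auth.azuread]` settings:

```ini
[auth.azuread]
# Deny login when no valid role can be derived from app roles or org_mapping
role_attribute_strict = true
```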
You can use the `org_mapping` configuration option to assign the user to multiple organizations and specify their role based on their Entra ID group membership. For more information, refer to [Org roles mapping example](#org-roles-mapping-example). If the org role mapping (`org_mapping`) is specified and Entra ID returns a valid role, then the user will get the highest of the two roles. **On every login** the user organization role will be reset to match Entra ID's application role and their organization membership will be reset to the default organization. #### Org roles mapping example The Entra ID integration uses the external users' groups in the `org_mapping` configuration to map organizations and roles based on their Entra ID group membership. In this example, the user has been granted the role of a `Viewer` in the `org_foo` organization, and the role of an `Editor` in the `org_bar` and `org_baz` orgs. The external user is part of the following Entra ID groups: `032cb8e0-240f-4347-9120-6f33013e817a` and `bce1c492-0679-4989-941b-8de5e6789cb9`. Config: ```ini org_mapping = ["032cb8e0-240f-4347-9120-6f33013e817a:org_foo:Viewer", "bce1c492-0679-4989-941b-8de5e6789cb9:org_bar:Editor", "*:org_baz:Editor"] ``` ## Skip organization role sync If Azure AD authentication is not intended to sync user roles and organization membership and prevent the sync of org roles from Entra ID, set `skip_org_role_sync` to `true`. This is useful if you want to manage the organization roles for your users from within Grafana or that your organization roles are synced from another provider. See [Configure Grafana]() for more details. ```ini [auth.azuread] # .. # prevents the sync of org roles from AzureAD skip_org_role_sync = true ``` ## Configuration options The following table outlines the various Azure AD/Entra ID configuration options. You can apply these options as environment variables, similar to any other configuration within Grafana. For more information, refer to [Override configuration with environment variables](). | Setting | Required | Supported on Cloud | Description | Default | | ---------------------------- | -------- | ------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------- | | `enabled` | No | Yes | Enables Azure AD/Entra ID authentication. | `false` | | `name` | No | Yes | Name that refers to the Azure AD/Entra ID authentication from the Grafana user interface. | `OAuth` | | `icon` | No | Yes | Icon used for the Azure AD/Entra ID authentication in the Grafana user interface. | `signin` | | `client_id` | Yes | Yes | Client ID of the App (`Application (client) ID` on the **App registration** dashboard). | | | `client_secret` | Yes | Yes | Client secret of the App. | | | `auth_url` | Yes | Yes | Authorization endpoint of the Azure AD/Entra ID OAuth2 provider. | | | `token_url` | Yes | Yes | Endpoint used to obtain the OAuth2 access token. | | | `auth_style` | No | Yes | Name of the [OAuth2 AuthStyle](https://pkg.go.dev/golang.org/x/oauth2#AuthStyle) to be used when ID token is requested from OAuth2 provider. 
It determines how `client_id` and `client_secret` are sent to Oauth2 provider. Available values are `AutoDetect`, `InParams` and `InHeader`. | `AutoDetect` | | `scopes` | No | Yes | List of comma- or space-separated OAuth2 scopes. | `openid email profile` | | `allow_sign_up` | No | Yes | Controls Grafana user creation through the Azure AD/Entra ID login. Only existing Grafana users can log in with Azure AD/Entra ID if set to `false`. | `true` | | `auto_login` | No | Yes | Set to `true` to enable users to bypass the login screen and automatically log in. This setting is ignored if you configure multiple auth providers to use auto-login. | `false` | | `role_attribute_strict` | No | Yes | Set to `true` to deny user login if the Grafana org role cannot be extracted using `role_attribute_path` or `org_mapping`. For more information on user role mapping, refer to [Map roles](). | `false` | | `org_attribute_path` | No | No | [JMESPath](http://jmespath.org/examples.html) expression to use for Grafana org to role lookup. Grafana will first evaluate the expression using the OAuth2 ID token. If no value is returned, the expression will be evaluated using the user information obtained from the UserInfo endpoint. The result of the evaluation will be mapped to org roles based on `org_mapping`. For more information on org to role mapping, refer to [Org roles mapping example](#org-roles-mapping-example). | | | `org_mapping` | No | No | List of comma- or space-separated `<ExternalOrgName>:<OrgIdOrName>:<Role>` mappings. Value can be `*` meaning "All users". Role is optional and can have the following values: `None`, `Viewer`, `Editor` or `Admin`. For more information on external organization to role mapping, refer to [Org roles mapping example](#org-roles-mapping-example). | | | `allow_assign_grafana_admin` | No | No | Set to `true` to automatically sync the Grafana server administrator role. When enabled, if the Azure AD/Entra ID user's App role is `GrafanaAdmin`, Grafana grants the user server administrator privileges and the organization administrator role. If disabled, the user will only receive the organization administrator role. For more details on user role mapping, refer to [Map roles](). | `false` | | `skip_org_role_sync` | No | Yes | Set to `true` to stop automatically syncing user roles. This will allow you to set organization roles for your users from within Grafana manually. | `false` | | `allowed_groups` | No | Yes | List of comma- or space-separated groups. The user should be a member of at least one group to log in. If you configure `allowed_groups`, you must also configure Azure AD/Entra ID to include the `groups` claim following [Configure group membership claims on the Azure Portal](). | | | `allowed_organizations` | No | Yes | List of comma- or space-separated Azure tenant identifiers. The user should be a member of at least one tenant to log in. | | | `allowed_domains` | No | Yes | List of comma- or space-separated domains. The user should belong to at least one domain to log in. | | | `tls_skip_verify_insecure` | No | No | If set to `true`, the client accepts any certificate presented by the server and any host name in that certificate. _You should only use this for testing_, because this mode leaves SSL/TLS susceptible to man-in-the-middle attacks. | `false` | | `tls_client_cert` | No | No | The path to the certificate. | | | `tls_client_key` | No | No | The path to the key. | | | `tls_client_ca` | No | No | The path to the trusted certificate authority list. 
| | | `use_pkce` | No | Yes | Set to `true` to use [Proof Key for Code Exchange (PKCE)](https://datatracker.ietf.org/doc/html/rfc7636). Grafana uses the SHA256 based `S256` challenge method and a 128 bytes (base64url encoded) code verifier. | `true` | | `use_refresh_token` | No | Yes | Enables the use of refresh tokens and checks for access token expiration. When enabled, Grafana automatically adds the `offline_access` scope to the list of scopes. | `true` | | `force_use_graph_api` | No | Yes | Set to `true` to always fetch groups from the Microsoft Graph API instead of the `id_token`. If a user belongs to more than 200 groups, the Microsoft Graph API will be used to retrieve the groups regardless of this setting. | `false` | | `signout_redirect_url` | No | Yes | URL to redirect to after the user logs out. | |
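The `signout_redirect_url` option from the table is not shown in context elsewhere in this guide. The following is a hedged sketch, assuming you want Grafana's logout to also end the Entra ID session through the Microsoft identity platform v2.0 logout endpoint; `TENANT_ID` and the `post_logout_redirect_uri` value are placeholders you should verify for your tenant.

```ini
[auth.azuread]
# Send users to the Entra ID logout endpoint after they sign out of Grafana
signout_redirect_url = https://login.microsoftonline.com/TENANT_ID/oauth2/v2.0/logout?post_logout_redirect_uri=https%3A%2F%2Fgrafana.example.com%2Flogin
```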
identifiers The user should be a member of at least one tenant to log in allowed domains No Yes List of comma or space separated domains The user should belong to at least one domain to log in tls skip verify insecure No No If set to true the client accepts any certificate presented by the server and any host name in that certificate You should only use this for testing because this mode leaves SSL TLS susceptible to man in the middle attacks false tls client cert No No The path to the certificate tls client key No No The path to the key tls client ca No No The path to the trusted certificate authority list use pkce No Yes Set to true to use Proof Key for Code Exchange PKCE https datatracker ietf org doc html rfc7636 Grafana uses the SHA256 based S256 challenge method and a 128 bytes base64url encoded code verifier true use refresh token No Yes Enables the use of refresh tokens and checks for access token expiration When enabled Grafana automatically adds the offline access scope to the list of scopes true force use graph api No Yes Set to true to always fetch groups from the Microsoft Graph API instead of the id token If a user belongs to more than 200 groups the Microsoft Graph API will be used to retrieve the groups regardless of this setting false signout redirect url No Yes URL to redirect to after the user logs out
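As the table notes, each of these options can also be supplied as an environment variable. A minimal sketch, assuming Grafana's standard `GF_<SECTION>_<KEY>` naming convention and placeholder credential values:

```bash
# Placeholder values; the variable names follow Grafana's GF_<SECTION>_<KEY>
# convention for the [auth.azuread] section.
export GF_AUTH_AZUREAD_ENABLED=true
export GF_AUTH_AZUREAD_CLIENT_ID=APPLICATION_ID
export GF_AUTH_AZUREAD_CLIENT_SECRET=CLIENT_SECRET
export GF_AUTH_AZUREAD_ALLOWED_ORGANIZATIONS=TENANT_ID
```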
--- aliases: - ../../../auth/enhanced-ldap/ description: Learn about configuring LDAP authentication in Grafana using the Grafana UI. labels: products: - cloud - enterprise - oss menuTitle: LDAP user interface title: Configure LDAP authentication using the Grafana user interface weight: 300 --- # Configure LDAP authentication using the Grafana user interface This page explains how to configure LDAP authentication in Grafana using the Grafana user interface. For more detailed information about configuring LDAP authentication using the configuration file, refer to [LDAP authentication](). Benefits of using the Grafana user interface to configure LDAP authentication include: - There is no need to edit the configuration file manually. - Quickly test the connection to the LDAP server. - There is no need to restart Grafana after making changes. Any configuration changes made through the Grafana user interface (UI) will take precedence over settings specified in the Grafana configuration file or through environment variables. If you modify any configuration settings in the UI, they will override any corresponding settings set via environment variables or defined in the configuration file. ## Before you begin Prerequisites: - Knowledge of LDAP authentication and how it works. - Grafana instance v11.3.0 or later. - Permissions `settings:read` and `settings:write` with `settings:auth.ldap:*` scope. - This feature requires the `ssoSettingsLDAP` feature toggle to be enabled. ## Steps to configure LDAP authentication Sign in to Grafana and navigate to **Administration > Authentication > LDAP**. ### 1. Complete mandatory fields The mandatory fields have an asterisk (**\***) next to them. Complete the following fields: 1. **Server host**: Host name or IP address of the LDAP server. 1. **Search filter**: The LDAP search filter finds entries within the directory. 1. **Search base DNS**: List of base DNs to search through. ### 2. Complete optional fields Complete the optional fields as needed: 1. **Bind DN**: Distinguished name (DN) of the user to bind to. 1. **Bind password**: Password for the server. ### 3. Advanced settings Click the **Edit** button in the **Advanced settings** section to configure the following settings: #### 1. Miscellaneous settings Complementary settings for LDAP authentication. 1. **Allow sign-up**: Allows new users to register upon logging in. 1. **Port**: Port number of the LDAP server. The default is 389. 1. **Timeout**: Time in seconds to wait for a response from the LDAP server. #### 2. Attributes Attributes used to map LDAP user assertion to Grafana user attributes. 1. **Name**: Name of the assertion attribute to map to the Grafana user name. 1. **Surname**: Name of the assertion attribute to map to the Grafana user surname. 1. **Username**: Name of the assertion attribute to map to the Grafana user username. 1. **Member Of**: Name of the assertion attribute to map to the Grafana user membership. 1. **Email**: Name of the assertion attribute to map to the Grafana user email. #### 3. Group mapping Map LDAP groups to Grafana roles. 1. **Skip organization role sync**: This option avoids syncing organization roles. It is useful when you want to manage roles manually. 1. **Group search filter**: The LDAP search filter finds groups within the directory. 1. **Group search base DNS**: List of base DNS to specify the matching groups' locations. 1. **Group name attribute**: Identifies users within group entries. 1. 
**Manage group mappings**: When managing group mappings, the following fields will become available. To add a new group mapping, click the **Add group mapping** button. 1. **Add a group DN mapping**: The name of the key used to extract the ID token. 1. **Add an organization role mapping**: Select the Basic Role mapped to this group. 1. **Add the organization ID membership mapping**: Map the group to an organization ID. 1. **Define Grafana Admin membership**: Enable Grafana Admin privileges to the group. #### 4. Extra security settings Additional security settings options for LDAP authentication. 1. **Enable SSL**: This option will enable SSL to connect to the LDAP server. 1. **Start TLS**: Use StartTLS to secure the connection to the LDAP server. 1. **Min TLS version**: Choose the minimum TLS version to use. TLS1.2 or TLS1.3 1. **TLS ciphers**: List the ciphers to use for the connection. For a complete list of ciphers, refer to the [Cipher Go library](https://go.dev/src/crypto/tls/cipher_suites.go). 1. **Encryption key and certificate provision specification**: This section allows you to specify the key and certificate for the LDAP server. You can provide the key and certificate in two ways: **base-64** encoded or **path to files**. 1. **Base-64 encoded certificate**: All values used in this section must be base-64 encoded. 1. **Root CA certificate content**: List of root CA certificates. 1. **Client certificate content**: Client certificate content. 1. **Client key content**: Client key content. 1. **Path to files**: Path in the file system to the key and certificate files 1. **Root CA certificate path**: Path to the root CA certificate. 1. **Client certificate path**: Path to the client certificate. 1. **Client key path**: Path to the client key. ### 4. Persisting the configuration Once you have configured the LDAP settings, click **Save** to persist the configuration. If you want to delete all the changes made through the UI and revert to the configuration file settings, click the three dots menu icon and click **Reset to default values**.
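The prerequisites above include the `ssoSettingsLDAP` feature toggle. A minimal sketch of enabling it, assuming you manage feature toggles through environment variables (the equivalent in the configuration file is `enable = ssoSettingsLDAP` under `[feature_toggles]`):

```bash
# Minimal sketch: enable the ssoSettingsLDAP feature toggle, then restart Grafana
# so the LDAP page appears under Administration > Authentication.
export GF_FEATURE_TOGGLES_ENABLE=ssoSettingsLDAP
```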
--- aliases: - ../../../auth/github/ description: Configure GitHub OAuth authentication keywords: - grafana - configuration - documentation - oauth labels: products: - cloud - enterprise - oss menuTitle: GitHub OAuth title: Configure GitHub OAuth authentication weight: 900 --- # Configure GitHub OAuth authentication This topic describes how to configure GitHub OAuth authentication. If Users use the same email address in GitHub that they use with other authentication providers (such as Grafana.com), you need to do additional configuration to ensure that the users are matched correctly. Please refer to the [Using the same email address to login with different identity providers]() documentation for more information. ## Before you begin Ensure you know how to create a GitHub OAuth app. Consult GitHub's documentation on [creating an OAuth app](https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/creating-an-oauth-app) for more information. ### Create a GitHub OAuth App 1. Log in to your GitHub account. In **Profile > Settings > Developer settings**, select **OAuth Apps**. 1. Click **New OAuth App**. 1. Fill out the fields, using your Grafana homepage URL when appropriate. In the **Authorization callback URL** field, enter the following: `https://<YOUR-GRAFANA-URL>/login/github` . 1. Note your client ID. 1. Generate, then note, your client secret. ## Configure GitHub authentication client using the Grafana UI Available in Public Preview in Grafana 10.4 behind the `ssoSettingsApi` feature toggle. As a Grafana Admin, you can configure GitHub OAuth client from within Grafana using the GitHub UI. To do this, navigate to **Administration > Authentication > GitHub** page and fill in the form. If you have a current configuration in the Grafana configuration file, the form will be pre-populated with those values. Otherwise the form will contain default values. After you have filled in the form, click **Save** . If the save was successful, Grafana will apply the new configurations. If you need to reset changes you made in the UI back to the default values, click **Reset**. After you have reset the changes, Grafana will apply the configuration from the Grafana configuration file (if there is any configuration) or the default values. If you run Grafana in high availability mode, configuration changes may not get applied to all Grafana instances immediately. You may need to wait a few minutes for the configuration to propagate to all Grafana instances. Refer to [configuration options]() for more information. ## Configure GitHub authentication client using the Terraform provider Available in Public Preview in Grafana 10.4 behind the `ssoSettingsApi` feature toggle. Supported in the Terraform provider since v2.12.0. ```terraform resource "grafana_sso_settings" "github_sso_settings" { provider_name = "github" oauth2_settings { name = "Github" client_id = "YOUR_GITHUB_APP_CLIENT_ID" client_secret = "YOUR_GITHUB_APP_CLIENT_SECRET" allow_sign_up = true auto_login = false scopes = "user:email,read:org" team_ids = "150,300" allowed_organizations = "[\"My Organization\", \"Octocats\"]" allowed_domains = "mycompany.com mycompany.org" role_attribute_path = "[login=='octocat'][0] && 'GrafanaAdmin' || 'Viewer'" } } ``` Go to [Terraform Registry](https://registry.terraform.io/providers/grafana/grafana/latest/docs/resources/sso_settings) for a complete reference on using the `grafana_sso_settings` resource. 
## Configure GitHub authentication client using the Grafana configuration file Ensure that you have access to the [Grafana configuration file](). ### Configure GitHub authentication To configure GitHub authentication with Grafana, follow these steps: 1. Create an OAuth application in GitHub. 1. Set the callback URL for your GitHub OAuth app to `http://<my_grafana_server_name_or_ip>:<grafana_server_port>/login/github`. Ensure that the callback URL is the complete HTTP address that you use to access Grafana via your browser, but with the appended path of `/login/github`. For the callback URL to be correct, it might be necessary to set the `root_url` option in the `[server]`section of the Grafana configuration file. For example, if you are serving Grafana behind a proxy. 1. Refer to the following table to update field values located in the `[auth.github]` section of the Grafana configuration file: | Field | Description | | ---------------------------- | ----------------------------------------------------------------------------------- | | `client_id`, `client_secret` | These values must match the client ID and client secret from your GitHub OAuth app. | | `enabled` | Enables GitHub authentication. Set this value to `true`. | Review the list of other GitHub [configuration options]() and complete them, as necessary. 1. [Configure role mapping](). 1. Optional: [Configure group synchronization](). 1. Restart Grafana. You should now see a GitHub login button on the login page and be able to log in or sign up with your GitHub accounts. ### Configure role mapping Unless `skip_org_role_sync` option is enabled, the user's role will be set to the role retrieved from GitHub upon user login. The user's role is retrieved using a [JMESPath](http://jmespath.org/examples.html) expression from the `role_attribute_path` configuration option. To map the server administrator role, use the `allow_assign_grafana_admin` configuration option. Refer to [configuration options]() for more information. If no valid role is found, the user is assigned the role specified by [the `auto_assign_org_role` option](). You can disable this default role assignment by setting `role_attribute_strict = true`. This setting denies user access if no role or an invalid role is returned after evaluating the `role_attribute_path` and the `org_mapping` expressions. You can use the `org_mapping` configuration options to assign the user to organizations and specify their role based on their GitHub team membership. For more information, refer to [Org roles mapping example](#org-roles-mapping-example). If both org role mapping (`org_mapping`) and the regular role mapping (`role_attribute_path`) are specified, then the user will get the highest of the two mapped roles. To ease configuration of a proper JMESPath expression, go to [JMESPath](http://jmespath.org/) to test and evaluate expressions with custom payloads. #### Role mapping examples This section includes examples of JMESPath expressions used for role mapping. ##### Org roles mapping example The GitHub integration uses the external users' teams in the `org_mapping` configuration to map organizations and roles based on their GitHub team membership. In this example, the user has been granted the role of a `Viewer` in the `org_foo` organization, and the role of an `Editor` in the `org_bar` and `org_baz` orgs. The external user is part of the following GitHub teams: `@my-github-organization/my-github-team-1` and `@my-github-organization/my-github-team-2`. 
Config: ```ini org_mapping = @my-github-organization/my-github-team-1:org_foo:Viewer @my-github-organization/my-github-team-2:org_bar:Editor *:org_baz:Editor ``` ##### Map roles using GitHub user information In this example, the user with login `octocat` has been granted the `Admin` role. All other users are granted the `Viewer` role. ```bash role_attribute_path = [login=='octocat'][0] && 'Admin' || 'Viewer' ``` ##### Map roles using GitHub teams In this example, the user from GitHub team `my-github-team` has been granted the `Editor` role. All other users are granted the `Viewer` role. ```bash role_attribute_path = contains(groups[*], '@my-github-organization/my-github-team') && 'Editor' || 'Viewer' ``` ##### Map roles using multiple GitHub teams In this example, the users from GitHub teams `admins` and `devops` have been granted the `Admin` role, the users from GitHub teams `engineers` and `managers` have been granted the `Editor` role, the users from GitHub team `qa` have been granted the `Viewer` role and all other users are granted the `None` role. ```bash role_attribute_path = contains(groups[*], '@my-github-organization/admins') && 'Admin' || contains(groups[*], '@my-github-organization/devops') && 'Admin' || contains(groups[*], '@my-github-organization/engineers') && 'Editor' || contains(groups[*], '@my-github-organization/managers') && 'Editor' || contains(groups[*], '@my-github-organization/qa') && 'Viewer' || 'None' ``` ##### Map server administrator role In this example, the user with login `octocat` has been granted the `Admin` organization role as well as the Grafana server admin role. All other users are granted the `Viewer` role. ```bash role_attribute_path = [login=='octocat'][0] && 'GrafanaAdmin' || 'Viewer' ``` ##### Map one role to all users In this example, all users will be assigned `Viewer` role regardless of the user information received from the identity provider. ```ini role_attribute_path = "'Viewer'" skip_org_role_sync = false ``` ### Example of GitHub configuration in Grafana This section includes an example of GitHub configuration in the Grafana configuration file. ```bash [auth.github] enabled = true client_id = YOUR_GITHUB_APP_CLIENT_ID client_secret = YOUR_GITHUB_APP_CLIENT_SECRET scopes = user:email,read:org auth_url = https://github.com/login/oauth/authorize token_url = https://github.com/login/oauth/access_token api_url = https://api.github.com/user allow_sign_up = true auto_login = false team_ids = 150,300 allowed_organizations = ["My Organization", "Octocats"] allowed_domains = mycompany.com mycompany.org role_attribute_path = [login=='octocat'][0] && 'GrafanaAdmin' || 'Viewer' ``` ## Configure group synchronization Available in [Grafana Enterprise](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/introduction/grafana-enterprise) and [Grafana Cloud](/docs/grafana-cloud/). Grafana supports synchronization of teams from your GitHub organization with Grafana teams and roles. This allows automatically assigning users to the appropriate teams or granting them the mapped roles. Teams and roles get synchronized when the user logs in. GitHub teams can be referenced in two ways: - `https://github.com/orgs/<org>/teams/<slug>` - `@<org>/<slug>` Examples: `https://github.com/orgs/grafana/teams/developers` or `@grafana/developers`. 
To learn more about group synchronization, refer to [Configure team sync](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-security/configure-team-sync) and [Configure group attribute sync](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-security/configure-group-attribute-sync). ## Configuration options The table below describes all GitHub OAuth configuration options. You can apply these options as environment variables, similar to any other configuration within Grafana. For more information, refer to [Override configuration with environment variables](). If the configuration option requires a JMESPath expression that includes a colon, enclose the entire expression in quotes to prevent parsing errors. For example `role_attribute_path: "role:view"` | Setting | Required | Supported on Cloud | Description | Default | | ---------------------------- | -------- | ------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------- | | `enabled` | No | Yes | Whether GitHub OAuth authentication is allowed. | `false` | | `name` | No | Yes | Name used to refer to the GitHub authentication in the Grafana user interface. | `GitHub` | | `icon` | No | Yes | Icon used for GitHub authentication in the Grafana user interface. | `github` | | `client_id` | Yes | Yes | Client ID provided by your GitHub OAuth app. | | | `client_secret` | Yes | Yes | Client secret provided by your GitHub OAuth app. | | | `auth_url` | Yes | Yes | Authorization endpoint of your GitHub OAuth provider. | `https://github.com/login/oauth/authorize` | | `token_url` | Yes | Yes | Endpoint used to obtain GitHub OAuth access token. | `https://github.com/login/oauth/access_token` | | `api_url` | Yes | Yes | Endpoint used to obtain GitHub user information compatible with [OpenID UserInfo](https://connect2id.com/products/server/docs/api/userinfo). | `https://api.github.com/user` | | `scopes` | No | Yes | List of comma- or space-separated GitHub OAuth scopes. | `user:email,read:org` | | `allow_sign_up` | No | Yes | Whether to allow new Grafana user creation through GitHub login. If set to `false`, then only existing Grafana users can log in with GitHub OAuth. | `true` | | `auto_login` | No | Yes | Set to `true` to enable users to bypass the login screen and automatically log in. This setting is ignored if you configure multiple auth providers to use auto-login. | `false` | | `role_attribute_path` | No | Yes | [JMESPath](http://jmespath.org/examples.html) expression to use for Grafana role lookup. Grafana will first evaluate the expression using the user information obtained from the UserInfo endpoint. 
If no role is found, Grafana creates a JSON data with `groups` key that maps to GitHub teams obtained from GitHub's [`/api/user/teams`](https://docs.github.com/en/rest/teams/teams#list-teams-for-the-authenticated-user) endpoint, and evaluates the expression using this data. The result of the evaluation should be a valid Grafana role (`None`, `Viewer`, `Editor`, `Admin` or `GrafanaAdmin`). For more information on user role mapping, refer to [Configure role mapping](#org-roles-mapping-example). | | | `role_attribute_strict` | No | Yes | Set to `true` to deny user login if the Grafana org role cannot be extracted using `role_attribute_path` or `org_mapping`. For more information on user role mapping, refer to [Configure role mapping](#org-roles-mapping-example). | `false` | | `org_mapping` | No | No | List of comma- or space-separated `<ExternalGitHubTeamName>:<OrgIdOrName>:<Role>` mappings. Value can be `*` meaning "All users". Role is optional and can have the following values: `None`, `Viewer`, `Editor` or `Admin`. For more information on external organization to role mapping, refer to [Org roles mapping example](#org-roles-mapping-example). | | | `skip_org_role_sync` | No | Yes | Set to `true` to stop automatically syncing user roles. | `false` | | `allow_assign_grafana_admin` | No | No | Set to `true` to enable automatic sync of the Grafana server administrator role. If this option is set to `true` and the result of evaluating `role_attribute_path` for a user is `GrafanaAdmin`, Grafana grants the user the server administrator privileges and organization administrator role. If this option is set to `false` and the result of evaluating `role_attribute_path` for a user is `GrafanaAdmin`, Grafana grants the user only organization administrator role. For more information on user role mapping, refer to [Configure role mapping](). | `false` | | `allowed_organizations` | No | Yes | List of comma- or space-separated organizations. User must be a member of at least one organization to log in. | | | `allowed_domains` | No | Yes | List of comma- or space-separated domains. User must belong to at least one domain to log in. | | | `team_ids` | No | Yes | Integer list of team IDs. If set, user has to be a member of one of the given teams to log in. | | | `tls_skip_verify_insecure` | No | No | If set to `true`, the client accepts any certificate presented by the server and any host name in that certificate. _You should only use this for testing_, because this mode leaves SSL/TLS susceptible to man-in-the-middle attacks. | `false` | | `tls_client_cert` | No | No | The path to the certificate. | | | `tls_client_key` | No | No | The path to the key. | | | `tls_client_ca` | No | No | The path to the trusted certificate authority list. | | | `signout_redirect_url` | No | Yes | URL to redirect to after the user logs out. | |
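As with the other providers, these options can be set through environment variables. A brief sketch with placeholder values, which also illustrates the quoting note above for JMESPath expressions:

```bash
# Placeholder values; the variable names follow Grafana's GF_<SECTION>_<KEY>
# convention for the [auth.github] section.
export GF_AUTH_GITHUB_ENABLED=true
export GF_AUTH_GITHUB_CLIENT_ID=YOUR_GITHUB_APP_CLIENT_ID
export GF_AUTH_GITHUB_CLIENT_SECRET=YOUR_GITHUB_APP_CLIENT_SECRET
export GF_AUTH_GITHUB_TEAM_IDS=150,300
# Quote the JMESPath expression so the shell passes it through unmodified.
export GF_AUTH_GITHUB_ROLE_ATTRIBUTE_PATH="[login=='octocat'][0] && 'GrafanaAdmin' || 'Viewer'"
```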
--- aliases: - ../../../auth/generic-oauth/ description: Configure Generic OAuth authentication keywords: - grafana - configuration - documentation - oauth labels: products: - cloud - enterprise - oss menuTitle: Generic OAuth title: Configure Generic OAuth authentication weight: 700 --- # Configure Generic OAuth authentication Grafana provides OAuth2 integrations for the following auth providers: - [Azure AD OAuth]() - [GitHub OAuth]() - [GitLab OAuth]() - [Google OAuth]() - [Grafana Com OAuth]() - [Keycloak OAuth]() - [Okta OAuth]() If your OAuth2 provider is not listed, you can use Generic OAuth authentication. This topic describes how to configure Generic OAuth authentication using different methods and includes [examples of setting up Generic OAuth]() with specific OAuth2 providers. ## Before you begin To follow this guide: - Ensure you know how to create an OAuth2 application with your OAuth2 provider. Consult the documentation of your OAuth2 provider for more information. - Ensure your identity provider returns OpenID UserInfo compatible information such as the `sub` claim. - If you are using refresh tokens, ensure you know how to set them up with your OAuth2 provider. Consult the documentation of your OAuth2 provider for more information. If Users use the same email address in Azure AD that they use with other authentication providers (such as Grafana.com), you need to do additional configuration to ensure that the users are matched correctly. Please refer to the [Using the same email address to login with different identity providers]() documentation for more information. ## Configure generic OAuth authentication client using the Grafana UI Available in Public Preview in Grafana 10.4 behind the `ssoSettingsApi` feature toggle. As a Grafana Admin, you can configure Generic OAuth client from within Grafana using the Generic OAuth UI. To do this, navigate to **Administration > Authentication > Generic OAuth** page and fill in the form. If you have a current configuration in the Grafana configuration file then the form will be pre-populated with those values otherwise the form will contain default values. After you have filled in the form, click **Save** to save the configuration. If the save was successful, Grafana will apply the new configurations. If you need to reset changes you made in the UI back to the default values, click **Reset**. After you have reset the changes, Grafana will apply the configuration from the Grafana configuration file (if there is any configuration) or the default values. If you run Grafana in high availability mode, configuration changes may not get applied to all Grafana instances immediately. You may need to wait a few minutes for the configuration to propagate to all Grafana instances. Refer to [configuration options]() for more information. ## Configure generic OAuth authentication client using the Terraform provider Available in Public Preview in Grafana 10.4 behind the `ssoSettingsApi` feature toggle. Supported in the Terraform provider since v2.12.0. 
```terraform resource "grafana_sso_settings" "generic_sso_settings" { provider_name = "generic_oauth" oauth2_settings { name = "Auth0" auth_url = "https://<domain>/authorize" token_url = "https://<domain>/oauth/token" api_url = "https://<domain>/userinfo" client_id = "<client id>" client_secret = "<client secret>" allow_sign_up = true auto_login = false scopes = "openid profile email offline_access" use_pkce = true use_refresh_token = true } } ``` Refer to [Terraform Registry](https://registry.terraform.io/providers/grafana/grafana/latest/docs/resources/sso_settings) for a complete reference on using the `grafana_sso_settings` resource. ## Configure generic OAuth authentication client using the Grafana configuration file Ensure that you have access to the [Grafana configuration file](). ### Steps To integrate your OAuth2 provider with Grafana using our Generic OAuth authentication, follow these steps: 1. Create an OAuth2 application in your chosen OAuth2 provider. 1. Set the callback URL for your OAuth2 app to `http://<my_grafana_server_name_or_ip>:<grafana_server_port>/login/generic_oauth`. Ensure that the callback URL is the complete HTTP address that you use to access Grafana via your browser, but with the appended path of `/login/generic_oauth`. For the callback URL to be correct, it might be necessary to set the `root_url` option in the `[server]`section of the Grafana configuration file. For example, if you are serving Grafana behind a proxy. 1. Refer to the following table to update field values located in the `[auth.generic_oauth]` section of the Grafana configuration file: | Field | Description | | ---------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `client_id`, `client_secret` | These values must match the client ID and client secret from your OAuth2 app. | | `auth_url` | The authorization endpoint of your OAuth2 provider. | | `api_url` | The user information endpoint of your OAuth2 provider. Information returned by this endpoint must be compatible with [OpenID UserInfo](https://connect2id.com/products/server/docs/api/userinfo). | | `enabled` | Enables Generic OAuth authentication. Set this value to `true`. | Review the list of other Generic OAuth [configuration options]() and complete them, as necessary. 1. Optional: [Configure a refresh token](): a. Extend the `scopes` field of `[auth.generic_oauth]` section in Grafana configuration file with refresh token scope used by your OAuth2 provider. b. Set `use_refresh_token` to `true` in `[auth.generic_oauth]` section in Grafana configuration file. c. Enable the refresh token on the provider if required. 1. [Configure role mapping](). 1. Optional: [Configure group synchronization](). 1. Restart Grafana. You should now see a Generic OAuth login button on the login page and be able to log in or sign up with your OAuth2 provider. ### Configure login Grafana can resolve a user's login from the OAuth2 ID token or user information retrieved from the OAuth2 UserInfo endpoint. Grafana looks at these sources in the order listed until it finds a login. If no login is found, then the user's login is set to user's email address. 
Refer to the following table for information on what to configure based on how your Oauth2 provider returns a user's login: | Source of login | Required configuration | | ------------------------------------------------------------------------------- | ------------------------------------------------ | | `login` or `username` field of the OAuth2 ID token. | N/A | | Another field of the OAuth2 ID token. | Set `login_attribute_path` configuration option. | | `login` or `username` field of the user information from the UserInfo endpoint. | N/A | | Another field of the user information from the UserInfo endpoint. | Set `login_attribute_path` configuration option. | ### Configure display name Grafana can resolve a user's display name from the OAuth2 ID token or user information retrieved from the OAuth2 UserInfo endpoint. Grafana looks at these sources in the order listed until it finds a display name. If no display name is found, then user's login is displayed instead. Refer to the following table for information on what you need to configure depending on how your Oauth2 provider returns a user's name: | Source of display name | Required configuration | | ---------------------------------------------------------------------------------- | ----------------------------------------------- | | `name` or `display_name` field of the OAuth2 ID token. | N/A | | Another field of the OAuth2 ID token. | Set `name_attribute_path` configuration option. | | `name` or `display_name` field of the user information from the UserInfo endpoint. | N/A | | Another field of the user information from the UserInfo endpoint. | Set `name_attribute_path` configuration option. | ### Configure email address Grafana can resolve the user's email address from the OAuth2 ID token, the user information retrieved from the OAuth2 UserInfo endpoint, or the OAuth2 `/emails` endpoint. Grafana looks at these sources in the order listed until an email address is found. If no email is found, then the email address of the user is set to an empty string. Refer to the following table for information on what to configure based on how the Oauth2 provider returns a user's email address: | Source of email address | Required configuration | | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------ | | `email` field of the OAuth2 ID token. | N/A | | `attributes` map of the OAuth2 ID token. | Set `email_attribute_name` configuration option. By default, Grafana searches for email under `email:primary` key. | | `upn` field of the OAuth2 ID token. | N/A | | `email` field of the user information from the UserInfo endpoint. | N/A | | Another field of the user information from the UserInfo endpoint. | Set `email_attribute_path` configuration option. | | Email address marked as primary from the `/emails` endpoint of <br /> the OAuth2 provider (obtained by appending `/emails` to the URL <br /> configured with `api_url`) | N/A | ### Configure a refresh token When a user logs in using an OAuth2 provider, Grafana verifies that the access token has not expired. When an access token expires, Grafana uses the provided refresh token (if any exists) to obtain a new access token. Grafana uses a refresh token to obtain a new access token without requiring the user to log in again. 
If a refresh token doesn't exist, Grafana logs the user out of the system after the access token has expired. To configure Generic OAuth to use a refresh token, set `use_refresh_token` configuration option to `true` and perform one or both of the following steps, if required: 1. Extend the `scopes` field of `[auth.generic_oauth]` section in Grafana configuration file with additional scopes. 1. Enable the refresh token on the provider. > **Note:** The `accessTokenExpirationCheck` feature toggle has been removed in Grafana v10.3.0 and the `use_refresh_token` configuration value will be used instead for configuring refresh token fetching and access token expiration check. ### Configure role mapping Unless `skip_org_role_sync` option is enabled, the user's role will be set to the role retrieved from the auth provider upon user login. The user's role is retrieved using a [JMESPath](http://jmespath.org/examples.html) expression from the `role_attribute_path` configuration option. To map the server administrator role, use the `allow_assign_grafana_admin` configuration option. Refer to [configuration options]() for more information. If no valid role is found, the user is assigned the role specified by [the `auto_assign_org_role` option](). You can disable this default role assignment by setting `role_attribute_strict = true`. This setting denies user access if no role or an invalid role is returned after evaluating the `role_attribute_path` and the `org_mapping` expressions. You can use the `org_attribute_path` and `org_mapping` configuration options to assign the user to organizations and specify their role. For more information, refer to [Org roles mapping example](#org-roles-mapping-example). If both org role mapping (`org_mapping`) and the regular role mapping (`role_attribute_path`) are specified, then the user will get the highest of the two mapped roles. To ease configuration of a proper JMESPath expression, go to [JMESPath](http://jmespath.org/) to test and evaluate expressions with custom payloads. #### Role mapping examples This section includes examples of JMESPath expressions used for role mapping. ##### Map user organization role In this example, the user has been granted the role of an `Editor`. The role assigned is based on the value of the property `role`, which must be a valid Grafana role such as `Admin`, `Editor`, `Viewer` or `None`. Payload: ```json { ... "role": "Editor", ... } ``` Config: ```bash role_attribute_path = role ``` In the following more complex example, the user has been granted the `Admin` role. This is because they are a member of the `admin` group of their OAuth2 provider. If the user was a member of the `editor` group, they would be granted the `Editor` role, otherwise `Viewer`. Payload: ```json { ... "info": { ... "groups": [ "engineer", "admin", ], ... }, ... } ``` Config: ```bash role_attribute_path = contains(info.groups[*], 'admin') && 'Admin' || contains(info.groups[*], 'editor') && 'Editor' || 'Viewer' ``` ##### Map server administrator role In the following example, the user is granted the Grafana server administrator role. Payload: ```json { ... "info": { ... "roles": [ "admin", ], ... }, ... } ``` Config: ```ini role_attribute_path = contains(info.roles[*], 'admin') && 'GrafanaAdmin' || contains(info.roles[*], 'editor') && 'Editor' || 'Viewer' allow_assign_grafana_admin = true ``` ##### Map one role to all users In this example, all users will be assigned `Viewer` role regardless of the user information received from the identity provider. 
Config: ```ini role_attribute_path = "'Viewer'" skip_org_role_sync = false ``` #### Org roles mapping example In this example, the user has been granted the role of a `Viewer` in the `org_foo` org, and the role of an `Editor` in the `org_bar` and `org_baz` orgs. If the user was a member of the `admin` group, they would be granted the Grafana server administrator role. Payload: ```json { ... "info": { ... "roles": [ "org_foo", "org_bar", "another_org" ], ... }, ... } ``` Config: ```ini role_attribute_path = contains(info.roles[*], 'admin') && 'GrafanaAdmin' || 'None' allow_assign_grafana_admin = true org_attribute_path = info.roles org_mapping = org_foo:org_foo:Viewer org_bar:org_bar:Editor *:org_baz:Editor ``` ## Configure group synchronization Available in [Grafana Enterprise](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/introduction/grafana-enterprise) and [Grafana Cloud](/docs/grafana-cloud/). Grafana supports synchronization of OAuth2 groups with Grafana teams and roles. This allows automatically assigning users to the appropriate teams or automatically granting them the mapped roles. Teams and roles get synchronized when the user logs in. Generic OAuth groups can be referenced by group ID, such as `8bab1c86-8fba-33e5-2089-1d1c80ec267d` or `myteam`. For information on configuring OAuth2 groups with Grafana using the `groups_attribute_path` configuration option, refer to [configuration options](). To learn more about group synchronization, refer to [Configure team sync](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-security/configure-team-sync) and [Configure group attribute sync](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-security/configure-group-attribute-sync). #### Group attribute synchronization example Configuration: ```bash groups_attribute_path = info.groups ``` Payload: ```json { ... "info": { ... "groups": [ "engineers", "analysts", ], ... }, ... } ``` ## Configuration options The following table outlines the various Generic OAuth configuration options. You can apply these options as environment variables, similar to any other configuration within Grafana. For more information, refer to [Override configuration with environment variables](). If the configuration option requires a JMESPath expression that includes a colon, enclose the entire expression in quotes to prevent parsing errors. For example `role_attribute_path: "role:view"` | Setting | Required | Supported on Cloud | Description | Default | | ---------------------------- | -------- | ------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------- | | `enabled` | No | Yes | Enables Generic OAuth authentication. | `false` | | `name` | No | Yes | Name that refers to the Generic OAuth authentication from the Grafana user interface. | `OAuth` | | `icon` | No | Yes | Icon used for the Generic OAuth authentication in the Grafana user interface. 
| `signin` | | `client_id` | Yes | Yes | Client ID provided by your OAuth2 app. | | | `client_secret` | Yes | Yes | Client secret provided by your OAuth2 app. | | | `auth_url` | Yes | Yes | Authorization endpoint of your OAuth2 provider. | | | `token_url` | Yes | Yes | Endpoint used to obtain the OAuth2 access token. | | | `api_url` | Yes | Yes | Endpoint used to obtain user information compatible with [OpenID UserInfo](https://connect2id.com/products/server/docs/api/userinfo). | | | `auth_style` | No | Yes | Name of the [OAuth2 AuthStyle](https://pkg.go.dev/golang.org/x/oauth2#AuthStyle) to be used when ID token is requested from OAuth2 provider. It determines how `client_id` and `client_secret` are sent to Oauth2 provider. Available values are `AutoDetect`, `InParams` and `InHeader`. | `AutoDetect` | | `scopes` | No | Yes | List of comma- or space-separated OAuth2 scopes. | `user:email` | | `empty_scopes` | No | Yes | Set to `true` to use an empty scope during authentication. | `false` | | `allow_sign_up` | No | Yes | Controls Grafana user creation through the Generic OAuth login. Only existing Grafana users can log in with Generic OAuth if set to `false`. | `true` | | `auto_login` | No | Yes | Set to `true` to enable users to bypass the login screen and automatically log in. This setting is ignored if you configure multiple auth providers to use auto-login. | `false` | | `id_token_attribute_name` | No | Yes | The name of the key used to extract the ID token from the returned OAuth2 token. | `id_token` | | `login_attribute_path` | No | Yes | [JMESPath](http://jmespath.org/examples.html) expression to use for user login lookup from the user ID token. For more information on how user login is retrieved, refer to [Configure login](). | | | `name_attribute_path` | No | Yes | [JMESPath](http://jmespath.org/examples.html) expression to use for user name lookup from the user ID token. This name will be used as the user's display name. For more information on how user display name is retrieved, refer to [Configure display name](). | | | `email_attribute_path` | No | Yes | [JMESPath](http://jmespath.org/examples.html) expression to use for user email lookup from the user information. For more information on how user email is retrieved, refer to [Configure email address](). | | | `email_attribute_name` | No | Yes | Name of the key to use for user email lookup within the `attributes` map of OAuth2 ID token. For more information on how user email is retrieved, refer to [Configure email address](). | `email:primary` | | `role_attribute_path` | No | Yes | [JMESPath](http://jmespath.org/examples.html) expression to use for Grafana role lookup. Grafana will first evaluate the expression using the OAuth2 ID token. If no role is found, the expression will be evaluated using the user information obtained from the UserInfo endpoint. The result of the evaluation should be a valid Grafana role (`None`, `Viewer`, `Editor`, `Admin` or `GrafanaAdmin`). For more information on user role mapping, refer to [Configure role mapping](). | | | `role_attribute_strict` | No | Yes | Set to `true` to deny user login if the Grafana org role cannot be extracted using `role_attribute_path` or `org_mapping`. For more information on user role mapping, refer to [Configure role mapping](). | `false` | | `skip_org_role_sync` | No | Yes | Set to `true` to stop automatically syncing user roles. This will allow you to set organization roles for your users from within Grafana manually. 
| `false` | | `org_attribute_path` | No | No | [JMESPath](http://jmespath.org/examples.html) expression to use for Grafana org to role lookup. Grafana will first evaluate the expression using the OAuth2 ID token. If no value is returned, the expression will be evaluated using the user information obtained from the UserInfo endpoint. The result of the evaluation will be mapped to org roles based on `org_mapping`. For more information on org to role mapping, refer to [Org roles mapping example](#org-roles-mapping-example). | | | `org_mapping` | No | No | List of comma- or space-separated `<ExternalOrgName>:<OrgIdOrName>:<Role>` mappings. Value can be `*` meaning "All users". Role is optional and can have the following values: `None`, `Viewer`, `Editor` or `Admin`. For more information on external organization to role mapping, refer to [Org roles mapping example](#org-roles-mapping-example). | | | `allow_assign_grafana_admin` | No | No | Set to `true` to enable automatic sync of the Grafana server administrator role. If this option is set to `true` and the result of evaluating `role_attribute_path` for a user is `GrafanaAdmin`, Grafana grants the user the server administrator privileges and organization administrator role. If this option is set to `false` and the result of evaluating `role_attribute_path` for a user is `GrafanaAdmin`, Grafana grants the user only organization administrator role. For more information on user role mapping, refer to [Configure role mapping](). | `false` | | `groups_attribute_path` | No | Yes | [JMESPath](http://jmespath.org/examples.html) expression to use for user group lookup. Grafana will first evaluate the expression using the OAuth2 ID token. If no groups are found, the expression will be evaluated using the user information obtained from the UserInfo endpoint. The result of the evaluation should be a string array of groups. | | | `allowed_groups` | No | Yes | List of comma- or space-separated groups. The user should be a member of at least one group to log in. If you configure `allowed_groups`, you must also configure `groups_attribute_path`. | | | `allowed_organizations` | No | Yes | List of comma- or space-separated organizations. The user should be a member of at least one organization to log in. | | | `allowed_domains` | No | Yes | List of comma- or space-separated domains. The user should belong to at least one domain to log in. | | | `team_ids` | No | Yes | String list of team IDs. If set, the user must be a member of one of the given teams to log in. If you configure `team_ids`, you must also configure `teams_url` and `team_ids_attribute_path`. | | | `team_ids_attribute_path` | No | Yes | The [JMESPath](http://jmespath.org/examples.html) expression to use for Grafana team ID lookup within the results returned by the `teams_url` endpoint. | | | `teams_url` | No | Yes | The URL used to query for team IDs. If not set, the default value is `/teams`. If you configure `teams_url`, you must also configure `team_ids_attribute_path`. | | | `tls_skip_verify_insecure` | No | No | If set to `true`, the client accepts any certificate presented by the server and any host name in that certificate. _You should only use this for testing_, because this mode leaves SSL/TLS susceptible to man-in-the-middle attacks. | `false` | | `tls_client_cert` | No | No | The path to the certificate. | | | `tls_client_key` | No | No | The path to the key. | | | `tls_client_ca` | No | No | The path to the trusted certificate authority list. 
| | | `use_pkce` | No | Yes | Set to `true` to use [Proof Key for Code Exchange (PKCE)](https://datatracker.ietf.org/doc/html/rfc7636). Grafana uses the SHA256 based `S256` challenge method and a 128 bytes (base64url encoded) code verifier. | `false` | | `use_refresh_token` | No | Yes | Set to `true` to use refresh token and check access token expiration. | `false` | | `signout_redirect_url` | No | Yes | URL to redirect to after the user logs out. | | ## Examples of setting up Generic OAuth This section includes examples of setting up Generic OAuth integration. ### Set up OAuth2 with Descope To set up Generic OAuth authentication with Descope, follow these steps: 1. Create a Descope Project [here](https://app.descope.com/gettingStarted), and go through the Getting Started Wizard to configure your authentication. You can skip step if you already have Descope project set up. 1. If you wish to use a flow besides `Sign Up or In`, go to the **IdP Applications** menu in the console, and select your IdP application. Then alter the **Flow Hosting URL** query parameter `?flow=sign-up-or-in` to change which flow id you wish to use. 1. Click **Save**. 1. Update the `[auth.generic_oauth]` section of the Grafana configuration file using the values from the **Settings** tab: You can get your Client ID (Descope Project ID) under [Project Settings](https://app.descope.com/settings/project). Your Client Secret (Descope Access Key) can be generated under [Access Keys](https://app.descope.com/accesskeys). ```bash [auth.generic_oauth] enabled = true allow_sign_up = true auto_login = false team_ids = allowed_organizations = name = Descope client_id = <Descope Project ID> client_secret = <Descope Access Key> scopes = openid profile email descope.claims descope.custom_claims auth_url = https://api.descope.com/oauth2/v1/authorize token_url = https://api.descope.com/oauth2/v1/token api_url = https://api.descope.com/oauth2/v1/userinfo use_pkce = true use_refresh_token = true ``` ### Set up OAuth2 with Auth0 Support for the Auth0 "audience" feature is not currently available in Grafana. For roles and permissions, the available options are described [here](). To set up Generic OAuth authentication with Auth0, follow these steps: 1. Create an Auth0 application using the following parameters: - Name: Grafana - Type: Regular Web Application 1. Go to the **Settings** tab of the application and set **Allowed Callback URLs** to `https://<grafana domain>/login/generic_oauth`. 1. Click **Save Changes**. 1. Update the `[auth.generic_oauth]` section of the Grafana configuration file using the values from the **Settings** tab: ```bash [auth.generic_oauth] enabled = true allow_sign_up = true auto_login = false team_ids = allowed_organizations = name = Auth0 client_id = <client id> client_secret = <client secret> scopes = openid profile email offline_access auth_url = https://<domain>/authorize token_url = https://<domain>/oauth/token api_url = https://<domain>/userinfo use_pkce = true use_refresh_token = true ``` ### Set up OAuth2 with Bitbucket To set up Generic OAuth authentication with Bitbucket, follow these steps: 1. Navigate to **Settings > Workspace setting > OAuth consumers** in BitBucket. 1. Create an application by selecting **Add consumer** and using the following parameters: - Allowed Callback URLs: `https://<grafana domain>/login/generic_oauth` 1. Click **Save**. 1. 
Update the `[auth.generic_oauth]` section of the Grafana configuration file using the values from the `Key` and `Secret` from the consumer description: ```bash [auth.generic_oauth] name = BitBucket enabled = true allow_sign_up = true auto_login = false client_id = <client key> client_secret = <client secret> scopes = account email auth_url = https://bitbucket.org/site/oauth2/authorize token_url = https://bitbucket.org/site/oauth2/access_token api_url = https://api.bitbucket.org/2.0/user teams_url = https://api.bitbucket.org/2.0/user/permissions/workspaces team_ids_attribute_path = values[*].workspace.slug team_ids = allowed_organizations = use_refresh_token = true ``` By default, a refresh token is included in the response for the **Authorization Code Grant**. ### Set up OAuth2 with OneLogin To set up Generic OAuth authentication with OneLogin, follow these steps: 1. Create a new Custom Connector in OneLogin with the following settings: - Name: Grafana - Sign On Method: OpenID Connect - Redirect URI: `https://<grafana domain>/login/generic_oauth` - Signing Algorithm: RS256 - Login URL: `https://<grafana domain>/login/generic_oauth` 1. Add an app to the Grafana Connector: - Display Name: Grafana 1. Update the `[auth.generic_oauth]` section of the Grafana configuration file using the client ID and client secret from the **SSO** tab of the app details page: Your OneLogin Domain will match the URL you use to access OneLogin. ```bash [auth.generic_oauth] name = OneLogin enabled = true allow_sign_up = true auto_login = false client_id = <client id> client_secret = <client secret> scopes = openid email name auth_url = https://<onelogin domain>.onelogin.com/oidc/2/auth token_url = https://<onelogin domain>.onelogin.com/oidc/2/token api_url = https://<onelogin domain>.onelogin.com/oidc/2/me team_ids = allowed_organizations = ``` ### Set up OAuth2 with Dex To set up Generic OAuth authentication with [Dex IdP](https://dexidp.io/), follow these steps: 1. Add Grafana as a client in the Dex config YAML file: ```yaml staticClients: - id: <client id> name: Grafana secret: <client secret> redirectURIs: - 'https://<grafana domain>/login/generic_oauth' ``` Unlike many other OAuth2 providers, Dex doesn't provide `<client secret>`. Instead, a secret can be generated with for example `openssl rand -hex 20`. 2. Update the `[auth.generic_oauth]` section of the Grafana configuration: ```bash [auth.generic_oauth] name = Dex enabled = true client_id = <client id> client_secret = <client secret> scopes = openid email profile groups offline_access auth_url = https://<dex base uri>/auth token_url = https://<dex base uri>/token api_url = https://<dex base uri>/userinfo ``` `<dex base uri>` corresponds to the `issuer: ` configuration in Dex (e.g. the Dex domain possibly including a path such as e.g. `/dex`). The `offline_access` scope is needed when using [refresh tokens]().
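If you plan to use refresh tokens with Dex, also enable them in the same Grafana section, as described in the Configure a refresh token section above. A minimal sketch of the additional setting:

```ini
[auth.generic_oauth]
# In addition to the offline_access scope above, refresh-token fetching
# must be switched on explicitly.
use_refresh_token = true
```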
--- aliases: - ../../../auth/okta/ description: Grafana Okta OIDC Guide labels: products: - cloud - enterprise - oss menuTitle: Okta OIDC title: Configure Okta OIDC authentication weight: 1400 --- # Configure Okta OIDC authentication If Users use the same email address in Okta that they use with other authentication providers (such as Grafana.com), you need to do additional configuration to ensure that the users are matched correctly. Please refer to the [Using the same email address to login with different identity providers]() documentation for more information. ## Before you begin To follow this guide, ensure you have permissions in your Okta workspace to create an OIDC app. ## Create an Okta app 1. From the Okta Admin Console, select **Create App Integration** from the **Applications** menu. 1. For **Sign-in method**, select **OIDC - OpenID Connect**. 1. For **Application type**, select **Web Application** and click **Next**. 1. Configure **New Web App Integration Operations**: - **App integration name**: Choose a name for the app. - **Logo (optional)**: Add a logo. - **Grant type**: Select **Authorization Code** and **Refresh Token**. - **Sign-in redirect URIs**: Replace the default setting with the Grafana Cloud Okta path, replacing <YOUR_ORG> with the name of your Grafana organization: https://<YOUR_ORG>.grafana.net/login/okta. For on-premises installation, use the Grafana server URL: http://<my_grafana_server_name_or_ip>:<grafana_server_port>/login/okta. - **Sign-out redirect URIs (optional)**: Replace the default setting with the Grafana Cloud Okta path, replacing <YOUR_ORG> with the name of your Grafana organization: https://<YOUR_ORG>.grafana.net/logout. For on-premises installation, use the Grafana server URL: http://<my_grafana_server_name_or_ip>:<grafana_server_port>/logout. - **Base URIs (optional)**: Add any base URIs - **Controlled access**: Select whether to assign the app integration to everyone in your organization, or only selected groups. You can assign this option after you create the app. 1. Make a note of the following: - **ClientID** - **Client Secret** - **Auth URL** For example: https://<TENANT_ID>.okta.com/oauth2/v1/authorize - **Token URL** For example: https://<TENANT_ID>.okta.com/oauth2/v1/token - **API URL** For example: https://<TENANT_ID>.okta.com/oauth2/v1/userinfo ### Configure Okta to Grafana role mapping 1. In the **Okta Admin Console**, select **Directory > Profile Editor**. 1. Select the Okta Application Profile you created previously (the default name for this is `<App name> User`). 1. Select **Add Attribute** and fill in the following fields: - **Data Type**: string - **Display Name**: Meaningful name. For example, `Grafana Role`. - **Variable Name**: Meaningful name. For example, `grafana_role`. - **Description (optional)**: A description of the role. - **Enum**: Select **Define enumerated list of values** and add the following: - Display Name: Admin Value: Admin - Display Name: Editor Value: Editor - Display Name: Viewer Value: Viewer The remaining attributes are optional and can be set as needed. 1. Click **Save**. 1. (Optional) You can add the role attribute to the default User profile. To do this, please follow the steps in the [Optional: Add the role attribute to the User (default) Okta profile]() section. ### Configure Groups claim 1. In the **Okta Admin Console**, select **Application > Applications**. 1. Select the OpenID Connect application you created. 1. 
Go to the **Sign On** tab and click **Edit** in the **OpenID Connect ID Token** section.
1. In the **Group claim type** section, select **Filter**.
1. In the **Group claim filter** section, leave the default name `groups` (or add it if the box is empty), then select **Matches regex** and add the following regex: `.*`.
1. Click **Save**.
1. Click the **Back to applications** link at the top of the page.
1. From the **More** button dropdown menu, click **Refresh Application Data**.
1. Include the `groups` scope in the **Scopes** field of the Okta integration in Grafana. For Terraform or the Grafana configuration file, include the `groups` scope in the `scopes` field. If you configure the `groups` claim differently, ensure that the `groups` claim is a string array.

#### Optional: Add the role attribute to the User (default) Okta profile

If you want to configure the role for all users in the Okta directory, you can add the role attribute to the User (default) Okta profile.

1. Return to the **Directory** section and select **Profile Editor**.
1. Select the User (default) Okta profile, and click **Add Attribute**.
1. Set all of the attributes in the same way you did in **Step 3**.
1. Select **Add Mapping** to add your new attributes. For example, **user.grafana_role -> grafana_role**.
1. To add a role to a user, select the user from the **Directory**, and click **Profile -> Edit**.
1. Select an option from your new attribute and click **Save**.
1. Update the Okta integration by setting the `Role attribute path` (`role_attribute_path` in Terraform and the config file) to `<YOUR_ROLE_VARIABLE>`. For example: `role_attribute_path = grafana_role` (when using the configuration file).

## Configure Okta authentication client using the Grafana UI

Available in Public Preview in Grafana 10.4 behind the `ssoSettingsApi` feature toggle.

As a Grafana Admin, you can configure the Okta OAuth2 client from within Grafana using the Okta UI. To do this, navigate to the **Administration > Authentication > Okta** page and fill in the form. If you have a current configuration in the Grafana configuration file, the form will be pre-populated with those values. Otherwise, the form will contain default values. After you have filled in the form, click **Save**. If the save was successful, Grafana will apply the new configuration.

If you need to reset changes you made in the UI back to the default values, click **Reset**. After you have reset the changes, Grafana will apply the configuration from the Grafana configuration file (if there is any configuration) or the default values.

If you run Grafana in high availability mode, configuration changes may not get applied to all Grafana instances immediately. You may need to wait a few minutes for the configuration to propagate to all Grafana instances.

Refer to [configuration options]() for more information.

## Configure Okta authentication client using the Terraform provider

Available in Public Preview in Grafana 10.4 behind the `ssoSettingsApi` feature toggle.

Supported in the Terraform provider since v2.12.0.
```terraform
resource "grafana_sso_settings" "okta_sso_settings" {
  provider_name = "okta"
  oauth2_settings {
    name                  = "Okta"
    auth_url              = "https://<okta tenant id>.okta.com/oauth2/v1/authorize"
    token_url             = "https://<okta tenant id>.okta.com/oauth2/v1/token"
    api_url               = "https://<okta tenant id>.okta.com/oauth2/v1/userinfo"
    client_id             = "CLIENT_ID"
    client_secret         = "CLIENT_SECRET"
    allow_sign_up         = true
    auto_login            = false
    scopes                = "openid profile email offline_access"
    role_attribute_path   = "contains(groups[*], 'Example::DevOps') && 'Admin' || 'None'"
    role_attribute_strict = true
    allowed_groups        = "Example::DevOps,Example::Dev,Example::QA"
  }
}
```

Go to [Terraform Registry](https://registry.terraform.io/providers/grafana/grafana/latest/docs/resources/sso_settings) for a complete reference on using the `grafana_sso_settings` resource.

## Configure Okta authentication client using the Grafana configuration file

Ensure that you have access to the [Grafana configuration file]().

### Steps

To integrate your Okta OIDC provider with Grafana using our Okta OIDC integration, follow these steps:

1. Follow the [Create an Okta app]() steps to create an OIDC app in Okta.
1. Refer to the following table to update field values located in the `[auth.okta]` section of the Grafana configuration file:

   | Field       | Description                                                                                                      |
   | ----------- | ------------------------------------------------------------------------------------------------------------------ |
   | `client_id` | This value must match the client ID from your Okta OIDC app.                                                        |
   | `auth_url`  | The authorization endpoint of your OIDC provider. `https://<okta-tenant-id>.okta.com/oauth2/v1/authorize`           |
   | `token_url` | The token endpoint of your Okta OIDC provider. `https://<okta-tenant-id>.okta.com/oauth2/v1/token`                  |
   | `api_url`   | The user information endpoint of your Okta OIDC provider. `https://<okta-tenant-id>.okta.com/oauth2/v1/userinfo`    |
   | `enabled`   | Enables Okta OIDC authentication. Set this value to `true`.                                                          |

1. Review the list of other Okta OIDC [configuration options]() and complete them as necessary.
1. Optional: [Configure a refresh token]().
1. [Configure role mapping]().
1. Optional: [Configure group synchronization]().
1. Restart Grafana.

   You should now see an Okta OIDC login button on the login page and be able to log in or sign up with your OIDC provider.

The following is an example of a minimally functioning integration when configured with the instructions above:

```ini
[auth.okta]
name = Okta
icon = okta
enabled = true
allow_sign_up = true
client_id = <client id>
scopes = openid profile email offline_access
auth_url = https://<okta tenant id>.okta.com/oauth2/v1/authorize
token_url = https://<okta tenant id>.okta.com/oauth2/v1/token
api_url = https://<okta tenant id>.okta.com/oauth2/v1/userinfo
role_attribute_path = grafana_role
role_attribute_strict = true
allowed_groups = "Example::DevOps" "Example::Dev" "Example::QA"
```

### Configure a refresh token

When a user logs in using an OAuth provider, Grafana verifies that the access token has not expired. When an access token expires, Grafana uses the provided refresh token (if any exists) to obtain a new access token without requiring the user to log in again.

If a refresh token doesn't exist, Grafana logs the user out of the system after the access token has expired.

To enable the `Refresh Token` grant type, head over to the Okta application settings and:

1. Under the `General` tab, find the `General Settings` section.
1. Within the `Grant Type` options, enable the `Refresh Token` checkbox.
In the Grafana configuration file, extend the `scopes` in the `[auth.okta]` section with `offline_access` and set `use_refresh_token` to `true`.

### Configure role mapping

Unless the `skip_org_role_sync` option is enabled, the user's role will be set to the role retrieved from the auth provider upon user login. The user's role is retrieved using a [JMESPath](http://jmespath.org/examples.html) expression from the `role_attribute_path` configuration option against the `api_url` (`/userinfo` OIDC endpoint) payload.

If no valid role is found, the user is assigned the role specified by [the `auto_assign_org_role` option](). You can disable this default role assignment by setting `role_attribute_strict = true`. This setting denies user access if no role or an invalid role is returned after evaluating the `role_attribute_path` and the `org_mapping` expressions.

You can use the `org_attribute_path` and `org_mapping` configuration options to assign the user to organizations and specify their role. For more information, refer to [Org roles mapping example](#org-roles-mapping-example). If both org role mapping (`org_mapping`) and the regular role mapping (`role_attribute_path`) are specified, then the user will get the highest of the two mapped roles.

To allow mapping the Grafana server administrator role, use the `allow_assign_grafana_admin` configuration option. Refer to [configuration options]() for more information.

In [Create an Okta app](), you created a custom attribute in Okta to store the role. You can use this attribute to map the role to a Grafana role by setting the `role_attribute_path` configuration option to the custom attribute name: `role_attribute_path = grafana_role`.

If you want to map the role based on the user's group, you can use the `groups` attribute from the user info endpoint. An example of this is `role_attribute_path = contains(groups[*], 'Example::DevOps') && 'Admin' || 'None'`.

You can find more examples of JMESPath expressions on the Generic OAuth page for [JMESPath examples]().

To learn about adding custom claims to the user info in Okta, refer to [add custom claims](https://developer.okta.com/docs/guides/customize-tokens-returned-from-okta/main/#add-a-custom-claim-to-a-token).

#### Org roles mapping example

Available in on-premise Grafana installations.

In this example, the `org_mapping` uses the `groups` attribute as the source (`org_attribute_path`) to map the current user to different organizations and roles. The user has been granted the role of a `Viewer` in the `org_foo` org if they are a member of the `Group 1` group, the role of an `Editor` in the `org_bar` org if they are a member of the `Group 2` group, and the role of an `Editor` in the `org_baz` (OrgID=3) org.

Config:

```ini
org_attribute_path = groups
org_mapping = ["Group 1:org_foo:Viewer", "Group 2:org_bar:Editor", "*:3:Editor"]
```

### Configure group synchronization (Enterprise only)

Available in [Grafana Enterprise](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/introduction/grafana-enterprise) and [Grafana Cloud](/docs/grafana-cloud/).

By using group synchronization, you can link your Okta groups to teams and roles within Grafana. This allows automatically assigning users to the appropriate teams or granting them the mapped roles. Teams and roles get synchronized when the user logs in.

Okta groups can be referenced by group names, like `Admins` or `Editors`.
To learn more about how to configure group synchronization, refer to [Configure team sync]() and [Configure group attribute sync](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-security/configure-group-attribute-sync) documentation. ## Configuration options The following table outlines the various Okta OIDC configuration options. You can apply these options as environment variables, similar to any other configuration within Grafana. For more information, refer to [Override configuration with environment variables](). If the configuration option requires a JMESPath expression that includes a colon, enclose the entire expression in quotes to prevent parsing errors. For example `role_attribute_path: "role:view"` | Setting | Required | Supported on Cloud | Description | Default | | ----------------------- | -------- | ------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------------------- | | `enabled` | No | Yes | Enables Okta OIDC authentication. | `false` | | `name` | No | Yes | Name that refers to the Okta OIDC authentication from the Grafana user interface. | `Okta` | | `icon` | No | Yes | Icon used for the Okta OIDC authentication in the Grafana user interface. | `okta` | | `client_id` | Yes | Yes | Client ID provided by your Okta OIDC app. | | | `client_secret` | Yes | Yes | Client secret provided by your Okta OIDC app. | | | `auth_url` | Yes | Yes | Authorization endpoint of your Okta OIDC provider. | | | `token_url` | Yes | Yes | Endpoint used to obtain the Okta OIDC access token. | | | `api_url` | Yes | Yes | Endpoint used to obtain user information. | | | `scopes` | No | Yes | List of comma- or space-separated Okta OIDC scopes. | `openid profile email groups` | | `allow_sign_up` | No | Yes | Controls Grafana user creation through the Okta OIDC login. Only existing Grafana users can log in with Okta OIDC if set to `false`. | `true` | | `auto_login` | No | Yes | Set to `true` to enable users to bypass the login screen and automatically log in. This setting is ignored if you configure multiple auth providers to use auto-login. | `false` | | `role_attribute_path` | No | Yes | [JMESPath](http://jmespath.org/examples.html) expression to use for Grafana role lookup. Grafana will first evaluate the expression using the Okta OIDC ID token. If no role is found, the expression will be evaluated using the user information obtained from the UserInfo endpoint. The result of the evaluation should be a valid Grafana role (`None`, `Viewer`, `Editor`, `Admin` or `GrafanaAdmin`). For more information on user role mapping, refer to [Configure role mapping](). | | | `role_attribute_strict` | No | Yes | Set to `true` to deny user login if the Grafana org role cannot be extracted using `role_attribute_path` or `org_mapping`. For more information on user role mapping, refer to [Configure role mapping](). | `false` | | `org_attribute_path` | No | No | [JMESPath](http://jmespath.org/examples.html) expression to use for Grafana org to role lookup. 
The result of the evaluation will be mapped to org roles based on `org_mapping`. For more information on org to role mapping, refer to [Org roles mapping example](#org-roles-mapping-example). | | | `org_mapping` | No | No | List of comma- or space-separated `<ExternalOrgName>:<OrgIdOrName>:<Role>` mappings. Value can be `*` meaning "All users". Role is optional and can have the following values: `None`, `Viewer`, `Editor` or `Admin`. For more information on external organization to role mapping, refer to [Org roles mapping example](#org-roles-mapping-example). | | | `skip_org_role_sync` | No | Yes | Set to `true` to stop automatically syncing user roles. This will allow you to set organization roles for your users from within Grafana manually. | `false` | | `allowed_groups` | No | Yes | List of comma- or space-separated groups. The user should be a member of at least one group to log in. | | | `allowed_domains` | No | Yes | List of comma- or space-separated domains. The user should belong to at least one domain to log in. | | | `use_pkce` | No | Yes | Set to `true` to use [Proof Key for Code Exchange (PKCE)](https://datatracker.ietf.org/doc/html/rfc7636). Grafana uses the SHA256 based `S256` challenge method and a 128 bytes (base64url encoded) code verifier. | `true` | | `use_refresh_token` | No | Yes | Set to `true` to use refresh token and check access token expiration. | `false` | | `signout_redirect_url` | No | Yes | URL to redirect to after the user logs out. | |
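For convenience, the following sketch combines a few of the optional settings from the table above in the `[auth.okta]` section; all values shown are placeholders to adapt to your environment:

```ini
[auth.okta]
# Optional settings from the table above; values are placeholders.
auto_login = false
allowed_domains = example.com
use_pkce = true
use_refresh_token = true
signout_redirect_url = https://example.com/logout
```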
---
aliases:
  - ../../../auth/keycloak/
description: Grafana Keycloak Guide
keywords:
  - grafana
  - keycloak
  - configuration
  - documentation
  - oauth
labels:
  products:
    - cloud
    - enterprise
    - oss
menuTitle: Keycloak OAuth2
title: Configure Keycloak OAuth2 authentication
weight: 1300
---

# Configure Keycloak OAuth2 authentication

Keycloak OAuth2 authentication allows users to log in to Grafana using their Keycloak credentials. This guide explains how to set up Keycloak as an authentication provider in Grafana.

Refer to [Generic OAuth authentication]() for extra configuration options available for this provider.

If users use the same email address in Keycloak that they use with other authentication providers (such as Grafana.com), you need to do additional configuration to ensure that the users are matched correctly. Please refer to the [Using the same email address to login with different identity providers]() documentation for more information.

You may have to set the `root_url` option of `[server]` for the callback URL to be correct, for example, if you are serving Grafana behind a reverse proxy.

Example config:

```ini
[auth.generic_oauth]
enabled = true
name = Keycloak-OAuth
allow_sign_up = true
client_id = YOUR_APP_CLIENT_ID
client_secret = YOUR_APP_CLIENT_SECRET
scopes = openid email profile offline_access roles
email_attribute_path = email
login_attribute_path = username
name_attribute_path = full_name
auth_url = https://<PROVIDER_DOMAIN>/realms/<REALM_NAME>/protocol/openid-connect/auth
token_url = https://<PROVIDER_DOMAIN>/realms/<REALM_NAME>/protocol/openid-connect/token
api_url = https://<PROVIDER_DOMAIN>/realms/<REALM_NAME>/protocol/openid-connect/userinfo
role_attribute_path = contains(roles[*], 'admin') && 'Admin' || contains(roles[*], 'editor') && 'Editor' || 'Viewer'
```

As an example, `<PROVIDER_DOMAIN>` can be `keycloak-demo.grafana.org` and `<REALM_NAME>` can be `grafana`.

To configure the `kc_idp_hint` parameter for Keycloak, you need to change the `auth_url` configuration to include the `kc_idp_hint` parameter. For example, if you want to hint the Google identity provider:

```ini
auth_url = https://<PROVIDER_DOMAIN>/realms/<REALM_NAME>/protocol/openid-connect/auth?kc_idp_hint=google
```

`api_url` is not required if the `id_token` contains all the necessary user information, and calling it can add latency to the login process. It is useful as a fallback, or if the user has more than 150 group memberships.

## Keycloak configuration

1. Create a client in Keycloak with the following settings:

   - Client ID: `grafana-oauth`
   - Enabled: `ON`
   - Client Protocol: `openid-connect`
   - Access Type: `confidential`
   - Standard Flow Enabled: `ON`
   - Implicit Flow Enabled: `OFF`
   - Direct Access Grants Enabled: `ON`
   - Root URL: `<grafana_root_url>`
   - Valid Redirect URIs: `<grafana_root_url>/login/generic_oauth`
   - Web Origins: `<grafana_root_url>`
   - Admin URL: `<grafana_root_url>`
   - Base URL: `<grafana_root_url>`

   As an example, `<grafana_root_url>` can be `https://play.grafana.org`.

   Non-listed configuration options can be left at their default values.

2. In the client scopes configuration, _Assigned Default Client Scopes_ should match:

   ```
   email
   offline_access
   profile
   roles
   ```

   These scopes do not add group claims to the id_token. Without group claims, group synchronization will not work. Group synchronization is covered further down in this document.

3.
For role mapping to work with the example configuration above, you need to create the following roles and assign them to users: ``` admin editor viewer ``` ## Group synchronization Available in [Grafana Enterprise](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/introduction/grafana-enterprise) and [Grafana Cloud](/docs/grafana-cloud/). By using group synchronization, you can link your Keycloak groups to teams and roles within Grafana. This allows automatically assigning users to the appropriate teams or granting them the mapped roles. This is useful if you want to give your users access to specific resources based on their group membership. Teams and roles get synchronized when the user logs in. To enable group synchronization, you need to add a `groups` mapper to the client configuration in Keycloak. This will add the `groups` claim to the id_token. You can then use the `groups` claim to map groups to teams and roles in Grafana. 1. In the client configuration, head to `Mappers` and create a mapper with the following settings: - Name: `Group Mapper` - Mapper Type: `Group Membership` - Token Claim Name: `groups` - Full group path: `OFF` - Add to ID token: `ON` - Add to access token: `OFF` - Add to userinfo: `ON` 2. In Grafana's configuration add the following option: ```ini [auth.generic_oauth] groups_attribute_path = groups ``` If you use nested groups containing special characters such as quotes or colons, the JMESPath parser can perform a harmless reverse function so Grafana can properly evaluate nested groups. The following example shows a parent group named `Global` with nested group `department` that contains a list of groups: ```ini [auth.generic_oauth] groups_attribute_path = reverse("Global:department") ``` To learn more about how to configure group synchronization, refer to [Configure team sync]() and [Configure group attribute sync](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-security/configure-group-attribute-sync) documentation. ## Enable Single Logout To enable Single Logout, you need to add the following option to the configuration of Grafana: ```ini [auth.generic_oauth] signout_redirect_url = https://<PROVIDER_DOMAIN>/auth/realms/<REALM_NAME>/protocol/openid-connect/logout?post_logout_redirect_uri=https%3A%2F%2F<GRAFANA_DOMAIN>%2Flogin ``` As an example, `<PROVIDER_DOMAIN>` can be `keycloak-demo.grafana.org`, `<REALM_NAME>` can be `grafana` and `<GRAFANA_DOMAIN>` can be `play.grafana.org`. Grafana supports ID token hints for single logout. Grafana automatically adds the `id_token_hint` parameter to the logout request if it detects OAuth as the authentication method. ## Allow assigning Grafana Admin If the application role received by Grafana is `GrafanaAdmin` , Grafana grants the user server administrator privileges. This is useful if you want to grant server administrator privileges to a subset of users. Grafana also assigns the user the `Admin` role of the default organization. ```ini role_attribute_path = contains(roles[*], 'grafanaadmin') && 'GrafanaAdmin' || contains(roles[*], 'admin') && 'Admin' || contains(roles[*], 'editor') && 'Editor' || 'Viewer' allow_assign_grafana_admin = true ``` ### Configure refresh token When a user logs in using an OAuth provider, Grafana verifies that the access token has not expired. When an access token expires, Grafana uses the provided refresh token (if any exists) to obtain a new access token. Grafana uses a refresh token to obtain a new access token without requiring the user to log in again. 
If a refresh token doesn't exist, Grafana logs the user out of the system after the access token has expired.

To enable a refresh token for Keycloak, do the following (a combined sketch follows the list):

1. Extend the `scopes` in `[auth.generic_oauth]` with `offline_access`.
1. Add `use_refresh_token = true` to the `[auth.generic_oauth]` configuration.
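A minimal sketch of the corresponding `[auth.generic_oauth]` changes, with the other required options omitted:

```ini
[auth.generic_oauth]
# offline_access is required so Keycloak issues a refresh token
scopes = openid email profile offline_access roles
use_refresh_token = true
```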
---
aliases:
  - ../../../auth/saml/
  - ../../../enterprise/configure-saml/
  - ../../../enterprise/saml/
  - ../../../enterprise/saml/about-saml/
  - ../../../enterprise/saml/configure-saml/
  - ../../../enterprise/saml/enable-saml/
  - ../../../enterprise/saml/set-up-saml-with-okta/
  - ../../../enterprise/saml/troubleshoot-saml/
description: Learn how to configure SAML authentication in Grafana's configuration file.
labels:
  products:
    - cloud
    - enterprise
menuTitle: SAML
title: Configure SAML authentication using the configuration file
weight: 500
---

# Configure SAML authentication using the configuration file

Available in [Grafana Enterprise]() and [Grafana Cloud](/docs/grafana-cloud).

SAML authentication integration allows your Grafana users to log in by using an external SAML 2.0 Identity Provider (IdP). To enable this, Grafana becomes a Service Provider (SP) in the authentication flow, interacting with the IdP to exchange user information.

You can configure SAML authentication in Grafana through one of the following methods:

- the Grafana configuration file
- the API (refer to [SSO Settings API]())
- the user interface (refer to [Configure SAML authentication using the Grafana user interface]())
- the Terraform provider (refer to [Terraform docs](https://registry.terraform.io/providers/grafana/grafana/latest/docs/resources/sso_settings))

The API and Terraform support are available in Public Preview in Grafana v11.1 behind the `ssoSettingsSAML` feature toggle. You must also enable the `ssoSettingsApi` flag.

All methods offer the same configuration options, but you might prefer using the Grafana configuration file or the Terraform provider if you want to keep all of Grafana's authentication settings in one place. Grafana Cloud users do not have access to the Grafana configuration file, so they should configure SAML through the other methods.

Configuration in the API takes precedence over the configuration in the Grafana configuration file. SAML settings from the API will override any SAML configuration set in the Grafana configuration file.

## Supported SAML

Grafana supports the following SAML 2.0 bindings:

- From the Service Provider (SP) to the Identity Provider (IdP):
  - `HTTP-POST` binding
  - `HTTP-Redirect` binding
- From the Identity Provider (IdP) to the Service Provider (SP):
  - `HTTP-POST` binding

In terms of security:

- Grafana supports signed and encrypted assertions.
- Grafana does not support signed or encrypted requests.

In terms of initiation, Grafana supports:

- SP-initiated requests
- IdP-initiated requests

By default, SP-initiated requests are enabled. For instructions on how to enable IdP-initiated logins, see [IdP-initiated Single Sign-On (SSO)]().

It is possible to set up Grafana with SAML authentication using Azure AD. However, if an Azure AD user belongs to more than 150 groups, a link to a Graph API endpoint is provided instead of the groups themselves. Grafana versions 11.1 and below do not support fetching the groups from the Graph API endpoint. As a result, users with more than 150 groups will not be able to retrieve their groups; in that case, it is recommended that you use OIDC/OAuth workflows instead. As of Grafana 11.2, the SAML integration offers a mechanism to retrieve user groups from the Graph API.

Related links:

- [Azure AD SAML limitations](https://learn.microsoft.com/en-us/entra/identity-platform/id-token-claims-reference#groups-overage-claim)
- [Set up SAML with Azure AD]()
- [Configure a Graph API application in Azure AD]()

### Edit SAML options in the Grafana config file
1. In the `[auth.saml]` section in the Grafana configuration file, set [`enabled`]() to `true`.
1. Configure the [certificate and private key]().
1. On the Okta application page where you were redirected after the application was created, navigate to the **Sign On** tab and find the **Identity Provider metadata** link in the **Settings** section.
1. Set the [`idp_metadata_url`]() to the URL obtained from the previous step. The URL should look like `https://<your-org-id>.okta.com/app/<application-id>/sso/saml/metadata`.
1. Set the following options to the attribute names configured at **step 10** of the SAML integration setup. You can find these attributes on the **General** tab of the application page (**ATTRIBUTE STATEMENTS** and **GROUP ATTRIBUTE STATEMENTS** in the **SAML Settings** section).
   - [`assertion_attribute_login`]()
   - [`assertion_attribute_email`]()
   - [`assertion_attribute_name`]()
   - [`assertion_attribute_groups`]()
1. (Optional) Set the `name` parameter in the `[auth.saml]` section in the Grafana configuration file. This parameter replaces SAML in the Grafana user interface in locations such as the sign-in button.
1. Save the configuration file and then restart the Grafana server.

When you are finished, the Grafana configuration might look like this example:

```bash
[server]
root_url = https://grafana.example.com

[auth.saml]
enabled = true
name = My IdP
auto_login = false
private_key_path = "/path/to/private_key.pem"
certificate_path = "/path/to/certificate.cert"
idp_metadata_url = "https://my-org.okta.com/app/my-application/sso/saml/metadata"
assertion_attribute_name = DisplayName
assertion_attribute_login = Login
assertion_attribute_email = Email
assertion_attribute_groups = Group
```

## Enable SAML authentication in Grafana

To use the SAML integration, in the `auth.saml` section of the Grafana custom configuration file, set `enabled` to `true`.

Refer to [Configuration]() for more information about configuring Grafana.

## Additional configuration for HTTP-Post binding

If multiple bindings are supported for SAML Single Sign-On (SSO) by the Identity Provider (IdP), Grafana will use the `HTTP-Redirect` binding by default. If the IdP only supports the `HTTP-POST` binding, then updating the `content_security_policy_template` (in case `content_security_policy = true`) and `content_security_policy_report_only_template` (in case `content_security_policy_report_only = true`) might be required to allow Grafana to initiate a POST request to the IdP. These settings are used to define the [Content Security Policy (CSP)](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy) headers that are sent by Grafana.

To allow Grafana to initiate a POST request to the IdP, update the `content_security_policy_template` and `content_security_policy_report_only_template` settings in the Grafana configuration file and add the IdP's domain to the `form-action` directive. By default, the `form-action` directive is set to `self`, which only allows POST requests to the same domain as Grafana. To allow POST requests to the IdP's domain, update the `form-action` directive to include the IdP's domain, for example: `form-action 'self' https://idp.example.com` (see the sketch at the end of this section).

For Grafana Cloud instances, please contact Grafana Support to update the `content_security_policy_template` and `content_security_policy_report_only_template` settings of your Grafana instance. Please provide the metadata URL/file of your IdP.
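For self-hosted instances, the following is an illustrative sketch only of where the IdP domain is appended; it is not a complete CSP template (keep the directives from your existing template), and `https://idp.example.com` is a placeholder:

```ini
[security]
content_security_policy = true
# Sketch only: in practice, keep all directives from your existing CSP
# template and append the IdP domain to the form-action directive.
content_security_policy_template = """form-action 'self' https://idp.example.com"""
```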
## Certificate and private key

The SAML SSO standard uses asymmetric encryption to exchange information between the SP (Grafana) and the IdP. To perform such encryption, you need a public part and a private part. In this case, the X.509 certificate provides the public part, while the private key provides the private part. The private key needs to be issued in a [PKCS#8](https://en.wikipedia.org/wiki/PKCS_8) format.

Grafana supports two ways of specifying both the `certificate` and `private_key`:

- Without a suffix (`certificate` or `private_key`), the configuration assumes you've supplied the base64-encoded file contents.
- With the `_path` suffix (`certificate_path` or `private_key_path`), Grafana treats the value entered as a file path and attempts to read the file from the file system.

You can only use one form of each configuration option. Using multiple forms, such as both `certificate` and `certificate_path`, results in an error.

---

### Generate a private key for SAML authentication

An example of how to generate a self-signed certificate and private key that's valid for one year:

```sh
$ openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
```

The generated `key.pem` and `cert.pem` files are then used for the `private_key` and `certificate` configuration options.

The key you provide should look like:

```
-----BEGIN PRIVATE KEY-----
...
...
-----END PRIVATE KEY-----
```

## Set up SAML with Azure AD

Grafana supports user authentication through Azure AD, which is useful when you want users to access Grafana using single sign-on. This topic shows you how to configure SAML authentication in Grafana with [Azure AD](https://azure.microsoft.com/en-us/services/active-directory/).

**Before you begin:**

- Ensure you have permission to administer SAML authentication. For more information about roles and permissions in Grafana, refer to:
  - [Roles and permissions]()
- Learn the limitations of the Azure AD SAML integration:
  - [Azure AD SAML limitations](https://learn.microsoft.com/en-us/entra/identity-platform/id-token-claims-reference#groups-overage-claim)
- To configure SAML integration with Azure AD, create an app integration inside the Azure AD organization first:
  - [Add app integration in Azure AD](https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/add-application-portal-configure)
- If you have users that belong to more than 150 groups, you need to configure a registered application to provide an Azure Graph API to retrieve the groups:
  - [Setup Azure AD Graph API applications]()

### Generate self-signed certificates

Azure AD requires a certificate to sign the SAML requests. You can generate a self-signed certificate using the following command:

```sh
$ openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
```

This will generate a `key.pem` and `cert.pem` file that you can use for the `private_key_path` and `certificate_path` configuration options.

### Add Microsoft Entra SAML Toolkit from the gallery

> Taken from https://learn.microsoft.com/en-us/entra/identity/saas-apps/saml-toolkit-tutorial#add-microsoft-entra-saml-toolkit-from-the-gallery

1. Go to the [Azure portal](https://portal.azure.com/#home) and sign in with your Azure AD account.
1. Search for **Enterprise Applications**.
1. In the **Enterprise applications** pane, select **New application**.
1. In the search box, enter **SAML Toolkit**, and then select the **Microsoft Entra SAML Toolkit** from the results panel.
1. Add a descriptive name and select **Create**.
### Configure the SAML Toolkit application endpoints In order to validate Azure AD users with Grafana, you need to configure the SAML Toolkit application endpoints by creating a new SAML integration in the Azure AD organization. > For the following configuration, we will use `https://localhost` as the Grafana URL. Replace it with your Grafana URL. 1. In the **SAML Toolkit application**, select **Set up single sign-on**. 1. In the **Single sign-on** pane, select **SAML**. 1. In the Set up **Single Sign-On with SAML** pane, select the pencil icon for **Basic SAML Configuration** to edit the settings. 1. In the **Basic SAML Configuration** pane, click on the **Edit** button and update the following fields: - In the **Identifier (Entity ID)** field, enter `https://localhost/saml/metadata`. - In the **Reply URL (Assertion Consumer Service URL)** field, enter `https://localhost/saml/acs`. - In the **Sign on URL** field, enter `https://localhost`. - In the **Relay State** field, enter `https://localhost`. - In the **Logout URL** field, enter `https://localhost/saml/slo`. 1. Select **Save**. 1. At the **SAML Certificate** section, copy the **App Federation Metadata Url**. - Use this URL in the `idp_metadata_url` field in the `custom.ini` file. ### Configure a Graph API application in Azure AD While an Azure AD tenant can be configured in Grafana via SAML, some additional information is only accessible via the Graph API. To retrieve this information, create a new application in Azure AD and grant it the necessary permissions. > [Azure AD SAML limitations](https://learn.microsoft.com/en-us/entra/identity-platform/id-token-claims-reference#groups-overage-claim) > For the following configuration, the URL `https://localhost` will be used as the Grafana URL. Replace it with your Grafana instance URL. #### Create a new Application registration This app registration will be used as a Service Account to retrieve more information about the user from the Azure AD. 1. Go to the [Azure portal](https://portal.azure.com/#home) and sign in with your Azure AD account. 1. In the left-hand navigation pane, select the Azure Active Directory service, and then select **App registrations**. 1. Click the **New registration** button. 1. In the **Register an application** pane, enter a name for the application. 1. In the **Supported account types** section, select the account types that can use the application. 1. In the **Redirect URI** section, select Web and enter `https://localhost/login/azuread`. 1. Click the **Register** button. #### Set up permissions for the application 1. In the overview pane, look for **API permissions** section and select **Add a permission**. 1. In the **Request API permissions** pane, select **Microsoft Graph**, and click **Application permissions**. 1. In the **Select permissions** pane, under the **GroupMember** section, select **GroupMember.Read.All**. 1. In the **Select permissions** pane, under the **User** section, select **User.Read.All**. 1. Click the **Add permissions** button at the bottom of the page. 1. In the **Request API permissions** pane, select **Microsoft Graph**, and click **Delegated permissions**. 1. In the **Select permissions** pane, under the **User** section, select **User.Read**. 1. Click the **Add permissions** button at the bottom of the page. 1. In the **API permissions** section, select **Grant admin consent for <your-organization>**. 
The following table shows what the permissions look like from the Azure AD portal:

| Permissions name | Type        | Admin consent required | Status  |
| ---------------- | ----------- | ---------------------- | ------- |
| `Group.Read.All` | Application | Yes                    | Granted |
| `User.Read`      | Delegated   | No                     | Granted |
| `User.Read.All`  | Application | Yes                    | Granted |

#### Generate a client secret

1. In the **Overview** pane, select **Certificates & secrets**.
1. Select **New client secret**.
1. In the **Add a client secret** pane, enter a description for the secret.
1. Set the expiration date for the secret.
1. Select **Add**.
1. Copy the value of the secret. This value is used in the `client_secret` field in the `custom.ini` file.

## Set up SAML with Okta

Grafana supports user authentication through Okta, which is useful when you want your users to access Grafana using single sign-on. This guide walks you through the steps of configuring SAML authentication in Grafana with [Okta](https://okta.com/).

You need to be an admin in your Okta organization to access the Admin Console and create a SAML integration. You also need permissions to edit the Grafana configuration file and restart the Grafana server.

**Before you begin:**

- To configure SAML integration with Okta, create an app integration inside the Okta organization first: [Add app integration in Okta](https://help.okta.com/en/prod/Content/Topics/Apps/apps-overview-add-apps.htm)
- Ensure you have permission to administer SAML authentication. For more information about roles and permissions in Grafana, refer to [Roles and permissions]().

**To set up SAML with Okta:**

1. Log in to the [Okta portal](https://login.okta.com/).
1. Go to the Admin Console in your Okta organization by clicking **Admin** in the upper-right corner. If you are in the Developer Console, then click **Developer Console** in the upper-left corner and then click **Classic UI** to switch over to the Admin Console.
1. In the Admin Console, navigate to **Applications** > **Applications**.
1. Click **Create App Integration** to start the Application Integration Wizard.
1. Choose **SAML 2.0** as the **Sign-in method**.
1. Click **Create**.
1. On the **General Settings** tab, enter a name for your Grafana integration. You can also upload a logo.
1. On the **Configure SAML** tab, enter the SAML information related to your Grafana instance:

   - In the **Single sign on URL** field, use the `/saml/acs` endpoint URL of your Grafana instance, for example, `https://grafana.example.com/saml/acs`.
   - In the **Audience URI (SP Entity ID)** field, use the `/saml/metadata` endpoint URL, by default it is the `/saml/metadata` endpoint of your Grafana instance (for example `https://example.grafana.com/saml/metadata`). This could be configured differently, but the value here must match the `entity_id` setting of the SAML settings of Grafana.
   - Leave the default values for **Name ID format** and **Application username**. If you plan to enable SAML Single Logout, consider setting the **Name ID format** to `EmailAddress` or `Persistent`. This must match the `name_id_format` setting of the Grafana instance.
   - In the **ATTRIBUTE STATEMENTS (OPTIONAL)** section, enter the SAML attributes to be shared with Grafana.
   The attribute names in Okta need to match exactly what is defined within Grafana, for example:

   | Attribute name (in Grafana) | Name and value (in Okta profile)                     | Grafana configuration (under `auth.saml`) |
   | --------------------------- | ---------------------------------------------------- | ----------------------------------------- |
   | Login                       | Login - `user.login`                                 | `assertion_attribute_login = Login`       |
   | Email                       | Email - `user.email`                                 | `assertion_attribute_email = Email`       |
   | DisplayName                 | DisplayName - `user.firstName + " " + user.lastName` | `assertion_attribute_name = DisplayName`  |

   - In the **GROUP ATTRIBUTE STATEMENTS (OPTIONAL)** section, enter a group attribute name (for example, `Group`; ensure it matches the `assertion_attribute_groups` setting in Grafana) and set the filter to `Matches regex .*` to return all user groups.

1. Click **Next**.
1. On the final **Feedback** tab, fill out the form and then click **Finish**.

### Signature algorithm

The SAML standard recommends using a digital signature for some types of messages, like authentication or logout requests. If the `signature_algorithm` option is configured, Grafana will put a digital signature into SAML requests. Supported signature types are `rsa-sha1`, `rsa-sha256`, and `rsa-sha512`. This option should match your IdP configuration, otherwise signature validation will fail. Grafana uses the key and certificate configured with the `private_key` and `certificate` options for signing SAML requests.

### Specify user's Name ID

The `name_id_format` configuration field specifies the format of the NameID element in the SAML assertion.

By default, this is set to `urn:oasis:names:tc:SAML:2.0:nameid-format:transient` and does not need to be specified in the configuration file.

The following list includes valid configuration field values:

| `name_id_format` value in the configuration file or Terraform | `Name identifier format` on the UI |
| -------------------------------------------------------------- | ---------------------------------- |
| `urn:oasis:names:tc:SAML:2.0:nameid-format:transient`          | Default                            |
| `urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified`        | Unspecified                        |
| `urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress`       | Email address                      |
| `urn:oasis:names:tc:SAML:2.0:nameid-format:persistent`         | Persistent                         |
| `urn:oasis:names:tc:SAML:2.0:nameid-format:transient`          | Transient                          |

### IdP metadata

You also need to define the public part of the IdP for message verification. The SAML IdP metadata XML defines where and how Grafana exchanges user information.

Grafana supports three ways of specifying the IdP metadata:

- Without a suffix (`idp_metadata`), Grafana assumes base64-encoded XML file contents.
- With the `_path` suffix, Grafana assumes a file path and attempts to read the file from the file system.
- With the `_url` suffix, Grafana assumes a URL and attempts to load the metadata from the given location.

### Maximum issue delay

Prevents SAML response replay attacks and internal clock skews between the SP (Grafana) and the IdP. You can set a maximum amount of time between the IdP issuing a response and the SP (Grafana) processing it.

The configuration option is specified as a duration, such as `max_issue_delay = 90s` or `max_issue_delay = 1h`.

### Metadata valid duration

SP metadata is likely to expire at some point, perhaps due to a certificate rotation or change of location binding. Grafana allows you to specify for how long the metadata should be valid.
Leveraging the `validUntil` field, you can tell consumers until when your metadata is going to be valid. The duration is computed by adding the duration to the current time.

The configuration option is specified as a duration, such as `metadata_valid_duration = 48h`.

### Identity provider (IdP) registration

For the SAML integration to work correctly, you need to make the IdP aware of the SP.

The integration provides two key endpoints as part of Grafana:

- The `/saml/metadata` endpoint, which contains the SP metadata. You can either download and upload it manually, or you can make the IdP request it directly from the endpoint. Some providers name it Identifier or Entity ID.
- The `/saml/acs` endpoint, which is intended to receive the ACS (Assertion Consumer Service) callback. Some providers name it SSO URL or Reply URL.

### IdP-initiated Single Sign-On (SSO)

By default, Grafana allows only service provider (SP) initiated logins (when the user logs in with SAML via Grafana's login page). If you want users to log in to Grafana directly from your identity provider (IdP), set the `allow_idp_initiated` configuration option to `true` and configure `relay_state` with the same value specified in the IdP configuration.

IdP-initiated SSO has some security risks, so make sure you understand the risks before enabling this feature. When using IdP-initiated SSO, Grafana receives unsolicited SAML requests and can't verify that the login flow was started by the user. This makes it hard to detect whether the SAML message has been stolen or replaced. Because of this, IdP-initiated SSO is vulnerable to login cross-site request forgery (CSRF) and man-in-the-middle (MITM) attacks. We do not recommend using IdP-initiated SSO and suggest keeping it disabled whenever possible.

### Single logout

SAML's single logout feature allows users to log out from all applications associated with the current IdP session established via SAML SSO. If the `single_logout` option is set to `true` and a user logs out, Grafana requests the IdP to end the user session, which in turn triggers logout from all other applications the user is logged into using the same IdP session (the applications should support single logout). Conversely, if another application connected to the same IdP logs out using single logout, Grafana receives a logout request from the IdP and ends the user session.

`HTTP-Redirect` and `HTTP-POST` bindings are supported for single logout. When using `HTTP-Redirect` bindings, the query should include a request signature.

### Assertion mapping

During the SAML SSO authentication flow, Grafana receives the ACS callback. The callback contains all the relevant information of the user under authentication embedded in the SAML response. Grafana parses the response to create (or update) the user within its internal database.

For Grafana to map the user information, it looks at the individual attributes within the assertion. You can think of these attributes as key/value pairs (although they contain more information than that).

Grafana provides configuration options that let you modify which keys to look at for these values. The data we need to create the user in Grafana is Name, Login handle, and email.

#### The `assertion_attribute_name` option

`assertion_attribute_name` is a special assertion mapping that can either be a simple key, indicating a mapping to a single assertion attribute on the SAML response, or a complex template with variables using the `$__saml{<attribute>}` syntax.
If this property is misconfigured, Grafana will log an error message on startup and disallow SAML sign-ins. Grafana will also log errors after a login attempt if a variable in the template is missing from the SAML response.

**Examples**

```ini
# plain string mapping
assertion_attribute_name = displayName
```

```ini
# template mapping
assertion_attribute_name = $__saml{firstName} $__saml{lastName}
```

### Allow new user signups

By default, new Grafana users using SAML authentication will have an account created for them automatically. To decouple authentication and account creation and ensure only users with existing accounts can log in with SAML, set the `allow_sign_up` option to `false`.

### Configure automatic login

Set the `auto_login` option to `true` to attempt login automatically, skipping the login screen. This setting is ignored if multiple auth providers are configured to use auto login.

```ini
auto_login = true
```

### Configure group synchronization

Group synchronization allows you to map user groups from an identity provider to Grafana teams and roles.

To use SAML group synchronization, set [`assertion_attribute_groups`]() to the attribute name where you store user groups. Grafana will then use attribute values extracted from the SAML assertion to add the user to Grafana teams and grant them roles.

Team sync allows you to sync users from SAML to Grafana teams. It does not automatically create teams in Grafana. You need to create teams in Grafana before you can use this feature.

Given the following partial SAML assertion:

```xml
<saml2:Attribute
    Name="groups"
    NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
  <saml2:AttributeValue
      xmlns:xs="http://www.w3.org/2001/XMLSchema"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:type="xs:string">admins_group
  </saml2:AttributeValue>
  <saml2:AttributeValue
      xmlns:xs="http://www.w3.org/2001/XMLSchema"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:type="xs:string">division_1
  </saml2:AttributeValue>
</saml2:Attribute>
```

The configuration would look like this:

```ini
[auth.saml]
# ...
assertion_attribute_groups = groups
```

The following `External Group ID`s would be valid for configuring team sync or role sync in Grafana:

- `admins_group`
- `division_1`

To learn more about how to configure group synchronization, refer to the [Configure team sync]() and [Configure group attribute sync](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-security/configure-group-attribute-sync) documentation.

### Configure role sync

Role sync allows you to map user roles from an identity provider to Grafana. To enable role sync, configure the role attribute and possible values for the Editor, Admin, and Grafana Admin roles. For more information about user roles, refer to [Roles and permissions]().

1. In the configuration file, set the [`assertion_attribute_role`]() option to the attribute name where the role information will be extracted from.
1. Set the [`role_values_none`]() option to the values mapped to the `None` role.
1. Set the [`role_values_viewer`]() option to the values mapped to the `Viewer` role.
1. Set the [`role_values_editor`]() option to the values mapped to the `Editor` role.
1. Set the [`role_values_admin`]() option to the values mapped to the organization `Admin` role.
1. Set the [`role_values_grafana_admin`]() option to the values mapped to the `Grafana Admin` role.

If a user role doesn't match any of the configured values, then the role specified by the `auto_assign_org_role` config option will be assigned.
If the `auto_assign_org_role` field is not set, the user role will default to `Viewer`. For more information about roles and permissions in Grafana, refer to [Roles and permissions]().

Example configuration:

```ini
[auth.saml]
assertion_attribute_role = role
role_values_none = none
role_values_viewer = external
role_values_editor = editor, developer
role_values_admin = admin, operator
role_values_grafana_admin = superadmin
```

**Important**: When role sync is configured, any changes to user roles and organization membership made manually in Grafana will be overwritten on the next user login. Assign user organizations and roles in the IdP instead.

If you don't want user organizations and roles to be synchronized with the IdP, you can use the `skip_org_role_sync` configuration option.

Example configuration:

```ini
[auth.saml]
skip_org_role_sync = true
```

### Configure organization mapping

Organization mapping allows you to assign users to a particular organization in Grafana depending on an attribute value obtained from the identity provider.

1. In the configuration file, set [`assertion_attribute_org`]() to the attribute name you store organization info in. This attribute can be an array if you want a user to be in multiple organizations.
1. Set the [`org_mapping`]() option to a comma-separated list of `Organization:OrgId` pairs to map an organization from the IdP to the Grafana organization specified by ID. If you want users to have different roles in multiple organizations, you can set this option to a comma-separated list of `Organization:OrgId:Role` mappings.

For example, use the following configuration to assign users from the `Engineering` organization to the Grafana organization with ID `2` as Editor, and users from `Sales` to the organization with ID `3` as Admin, based on the `Org` assertion attribute value:

```bash
[auth.saml]
assertion_attribute_org = Org
org_mapping = Engineering:2:Editor, Sales:3:Admin
```

You can specify multiple organizations both for the IdP and Grafana:

- `org_mapping = Engineering:2, Sales:2` to map users from `Engineering` and `Sales` to `2` in Grafana.
- `org_mapping = Engineering:2, Engineering:3` to assign `Engineering` to both `2` and `3` in Grafana.

You can use `*` as the SAML Organization if you want all your users to be in a Grafana organization with a default role:

- `org_mapping = *:2:Editor` to map all users to `2` in Grafana as Editors.

You can use `*` as the Grafana organization in the mapping if you want all users from a given SAML Organization to be added to all existing Grafana organizations.

- `org_mapping = Engineering:*` to map users from `Engineering` to all existing Grafana organizations.
- `org_mapping = Administration:*:Admin` to map users from `Administration` to all existing Grafana organizations as Admins.

### Configure allowed organizations

With the [`allowed_organizations`]() option you can specify a list of organizations; the user must be a member of at least one of them to be able to log in to Grafana.
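For example, a plain comma- or space-separated list works for organization names without spaces. A minimal sketch, where the organization names are placeholders for whatever your IdP sends in the organization attribute:

```ini
[auth.saml]
# hypothetical organization names; only members of at least one can log in
allowed_organizations = Engineering, Sales
```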
To put values containing spaces in the list, use the following JSON syntax:

```ini
allowed_organizations = ["org 1", "second org"]
```

### Example SAML configuration

```bash
[auth.saml]
enabled = true
auto_login = false
certificate_path = "/path/to/certificate.cert"
private_key_path = "/path/to/private_key.pem"
idp_metadata_path = "/my/metadata.xml"
max_issue_delay = 90s
metadata_valid_duration = 48h
assertion_attribute_name = displayName
assertion_attribute_login = mail
assertion_attribute_email = mail
assertion_attribute_groups = Group
assertion_attribute_role = Role
assertion_attribute_org = Org
role_values_viewer = external
role_values_editor = editor, developer
role_values_admin = admin, operator
role_values_grafana_admin = superadmin
org_mapping = Engineering:2:Editor, Engineering:3:Viewer, Sales:3:Editor, *:1:Editor
allowed_organizations = Engineering, Sales
```

### Example SAML configuration in Terraform

Available in Public Preview in Grafana v11.1 behind the `ssoSettingsSAML` feature toggle. Supported in the Terraform provider since v2.17.0.

```terraform
resource "grafana_sso_settings" "saml_sso_settings" {
  provider_name = "saml"
  saml_settings {
    name                       = "SAML"
    auto_login                 = false
    certificate_path           = "/path/to/certificate.cert"
    private_key_path           = "/path/to/private_key.pem"
    idp_metadata_path          = "/my/metadata.xml"
    max_issue_delay            = "90s"
    metadata_valid_duration    = "48h"
    assertion_attribute_name   = "displayName"
    assertion_attribute_login  = "mail"
    assertion_attribute_email  = "mail"
    assertion_attribute_groups = "Group"
    assertion_attribute_role   = "Role"
    assertion_attribute_org    = "Org"
    role_values_editor         = "editor, developer"
    role_values_admin          = "admin, operator"
    role_values_grafana_admin  = "superadmin"
    org_mapping                = "Engineering:2:Editor, Engineering:3:Viewer, Sales:3:Editor, *:1:Editor"
    allowed_organizations      = "Engineering, Sales"
  }
}
```

Go to [Terraform Registry](https://registry.terraform.io/providers/grafana/grafana/latest/docs/resources/sso_settings) for a complete reference on using the `grafana_sso_settings` resource.

## Troubleshoot SAML authentication in Grafana

To troubleshoot and get more log information, enable SAML debug logging in the configuration file. Refer to [Configuration]() for more information.

```bash
[log]
filters = saml.auth:debug
```

## Troubleshooting

The following are common issues found in configuring SAML authentication in Grafana and how to resolve them.

### Infinite redirect loop / User gets redirected to the login page after successful login on the IdP side

If you experience an infinite redirect loop when `auto_login = true`, or are redirected to the login page after a successful login, it is likely that the `grafana_session` cookie's SameSite setting is set to `Strict`. This setting prevents the `grafana_session` cookie from being sent to Grafana during cross-site requests. To resolve this issue, set the `security.cookie_samesite` option to `Lax` in the Grafana configuration file.

### SAML authentication fails with error:

- `asn1: structure error: tags don't match`

We only support one private key format: PKCS#8.

The keys may be in a different format (PKCS#1 or PKCS#12); in that case, it may be necessary to convert the private key format.

The following command creates a PKCS#8 key file.

```bash
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
```

#### Convert the private key format to base64

The following command converts keys to base64 format.
Base64-encode the `cert.pem` and `key.pem` files (the `-w0` switch is not needed on macOS, only on Linux):

```sh
$ base64 -w0 key.pem > key.pem.base64
$ base64 -w0 cert.pem > cert.pem.base64
```

The base64-encoded values (the `key.pem.base64` and `cert.pem.base64` files) are then used for `certificate` and `private_key`.

The keys you provide should look like:

```
-----BEGIN PRIVATE KEY-----
...
...
-----END PRIVATE KEY-----
```

### SAML login attempts fail with request response "origin not allowed"

When the user logs in using SAML and is presented with "origin not allowed", the login might be initiated from the IdP (identity provider) side, or the user might be behind a reverse proxy. This potentially happens because Grafana's CSRF checks deem the requests to be invalid. For more information, refer to [CSRF](https://owasp.org/www-community/attacks/csrf).

To solve this issue, you can configure either the [`csrf_trusted_origins`]() or [`csrf_additional_headers`]() option in the SAML configuration.

Example of a configuration file:

```bash
# config.ini
...
[security]
csrf_trusted_origins = https://grafana.example.com
csrf_additional_headers = X-Forwarded-Host
...
```

### SAML login attempts fail with request response "login session has expired"

Accessing the Grafana login page from a URL that is not the root URL of the Grafana server can cause the instance to return the following error: "login session has expired".

If you are accessing Grafana through a proxy server, ensure that cookies are correctly rewritten to the root URL of Grafana. Cookies must be set on the same URL as the `root_url` of Grafana. This is normally the reverse proxy's domain/address.

Review the cookie settings in your proxy server configuration to ensure that cookies are not being discarded.

Review the following settings in your Grafana configuration:

```ini
[security]
cookie_samesite = none
```

This setting should be set to `none` to allow Grafana session cookies to work correctly with redirects.

```ini
[security]
cookie_secure = true
```

Ensure `cookie_secure` is set to `true` so that cookies are only sent over HTTPS.

## Configure SAML authentication in Grafana

The table below describes all SAML configuration options. Continue reading below for details on specific options. Like any other Grafana configuration, you can apply these options as [environment variables]().

| Setting                                                     | Required | Description                                                                                                                                                                                                    | Default                                                |
| ----------------------------------------------------------- | -------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------ |
| `enabled`                                                    | No       | Whether SAML authentication is allowed.                                                                                                                                                                          | `false`                                                |
| `name`                                                       | No       | Name used to refer to the SAML authentication in the Grafana user interface.                                                                                                                                     | `SAML`                                                 |
| `entity_id`                                                  | No       | The entity ID of the service provider. This is the unique identifier of the service provider.                                                                                                                    | `https://{Grafana URL}/saml/metadata`                  |
| `single_logout`                                              | No       | Whether SAML Single Logout is enabled.                                                                                                                                                                           | `false`                                                |
| `allow_sign_up`                                              | No       | Whether to allow new Grafana user creation through SAML login. If set to `false`, then only existing Grafana users can log in with SAML.                                                                         | `true`                                                 |
| `auto_login`                                                 | No       | Whether SAML auto login is enabled.                                                                                                                                                                              | `false`                                                |
| `allow_idp_initiated`                                        | No       | Whether SAML IdP-initiated login is allowed.                                                                                                                                                                     | `false`                                                |
| `certificate` or `certificate_path`                          | Yes      | Base64-encoded string or path for the SP X.509 certificate.                                                                                                                                                      |                                                        |
| `private_key` or `private_key_path`                          | Yes      | Base64-encoded string or path for the SP private key.                                                                                                                                                            |                                                        |
| `signature_algorithm`                                        | No       | Signature algorithm used for signing requests to the IdP. Supported values are rsa-sha1, rsa-sha256, rsa-sha512.                                                                                                 |                                                        |
| `idp_metadata`, `idp_metadata_path`, or `idp_metadata_url`   | Yes      | Base64-encoded string, path, or URL for the IdP SAML metadata XML.                                                                                                                                               |                                                        |
| `max_issue_delay`                                            | No       | Maximum time allowed between the issuance of an AuthnRequest by the SP and the processing of the Response.                                                                                                       | `90s`                                                  |
| `metadata_valid_duration`                                    | No       | Duration for which the SP metadata remains valid.                                                                                                                                                                | `48h`                                                  |
| `relay_state`                                                | No       | Relay state for IdP-initiated login. This should match the relay state configured in the IdP.                                                                                                                    |                                                        |
| `assertion_attribute_name`                                   | No       | Friendly name or name of the attribute within the SAML assertion to use as the user name. Alternatively, this can be a template with variables that match the names of attributes within the SAML assertion.    | `displayName`                                          |
| `assertion_attribute_login`                                  | No       | Friendly name or name of the attribute within the SAML assertion to use as the user login handle.                                                                                                                | `mail`                                                 |
| `assertion_attribute_email`                                  | No       | Friendly name or name of the attribute within the SAML assertion to use as the user email.                                                                                                                       | `mail`                                                 |
| `assertion_attribute_groups`                                 | No       | Friendly name or name of the attribute within the SAML assertion to use as the user groups.                                                                                                                      |                                                        |
| `assertion_attribute_role`                                   | No       | Friendly name or name of the attribute within the SAML assertion to use as the user roles.                                                                                                                       |                                                        |
| `assertion_attribute_org`                                    | No       | Friendly name or name of the attribute within the SAML assertion to use as the user organization.                                                                                                                |                                                        |
| `allowed_organizations`                                      | No       | List of comma- or space-separated organizations. User should be a member of at least one organization to log in.                                                                                                 |                                                        |
| `org_mapping`                                                | No       | List of comma- or space-separated Organization:OrgId:Role mappings. Organization can be `*` meaning "All users". Role is optional and can have the following values: `None`, `Viewer`, `Editor` or `Admin`.      |                                                        |
| `role_values_none`                                           | No       | List of comma- or space-separated roles which will be mapped into the None role.                                                                                                                                 |                                                        |
| `role_values_viewer`                                         | No       | List of comma- or space-separated roles which will be mapped into the Viewer role.                                                                                                                               |                                                        |
| `role_values_editor`                                         | No       | List of comma- or space-separated roles which will be mapped into the Editor role.                                                                                                                               |                                                        |
| `role_values_admin`                                          | No       | List of comma- or space-separated roles which will be mapped into the Admin role.                                                                                                                                |                                                        |
| `role_values_grafana_admin`                                  | No       | List of comma- or space-separated roles which will be mapped into the Grafana Admin (Super Admin) role.                                                                                                          |                                                        |
| `skip_org_role_sync`                                         | No       | Whether to skip organization role synchronization.                                                                                                                                                               | `false`                                                |
| `name_id_format`                                             | No       | Specifies the format of the requested NameID element in the SAML AuthnRequest.                                                                                                                                   | `urn:oasis:names:tc:SAML:2.0:nameid-format:transient`  |
| `client_id`                                                  | No       | Client ID of the IdP service application used to retrieve more information about the user from the IdP. (Microsoft Entra ID only)                                                                                |                                                        |
| `client_secret`                                              | No       | Client secret of the IdP service application used to retrieve more information about the user from the IdP. (Microsoft Entra ID only)                                                                            |                                                        |
| `token_url`                                                  | No       | URL to retrieve the access token from the IdP. (Microsoft Entra ID only)                                                                                                                                         |                                                        |
| `force_use_graph_api`                                        | No       | Whether to use the IdP service application to retrieve more information about the user from the IdP. (Microsoft Entra ID only)                                                                                   | `false`                                                |
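For the Microsoft Entra ID-only options at the end of the table, a minimal sketch might look like the following. All values are placeholders; the tenant ID, client ID, and secret come from your own Entra ID app registration:

```ini
[auth.saml]
# hypothetical values from an Entra ID app registration
client_id = 00000000-0000-0000-0000-000000000000
client_secret = $__env{ENTRA_CLIENT_SECRET}
token_url = https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token
# use the service application to retrieve additional user information (for example, groups)
force_use_graph_api = true
```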
---
aliases:
  - ../administration/configuration/
  - ../installation/configuration/
description: Configuration documentation
labels:
  products:
    - enterprise
    - oss
title: Configure Grafana
weight: 200
---

# Configure Grafana

Grafana has default and custom configuration files. You can customize your Grafana instance by modifying the custom configuration file or by using environment variables. To see the list of settings for a Grafana instance, refer to [View server settings]().

After you add custom options, [uncomment](#remove-comments-in-the-ini-files) the relevant sections of the configuration file. Restart Grafana for your changes to take effect.

## Configuration file location

The default settings for a Grafana instance are stored in the `$WORKING_DIR/conf/defaults.ini` file. _Do not_ change this file.

Depending on your OS, your custom configuration file is either the `$WORKING_DIR/conf/custom.ini` file or the `/usr/local/etc/grafana/grafana.ini` file. The custom configuration file path can be overridden using the `--config` parameter.

### Linux

If you installed Grafana using the `deb` or `rpm` packages, then your configuration file is located at `/etc/grafana/grafana.ini` and a separate `custom.ini` is not used. This path is specified in the Grafana init.d script using the `--config` file parameter.

### Docker

Refer to [Configure a Grafana Docker image]() for information about environmental variables, persistent storage, and building custom Docker images.

### Windows

On Windows, the `sample.ini` file is located in the same directory as the `defaults.ini` file. It contains all the settings commented out. Copy `sample.ini` and name it `custom.ini`.

### macOS

By default, the configuration file is located at `/opt/homebrew/etc/grafana/grafana.ini` or `/usr/local/etc/grafana/grafana.ini`. For a Grafana instance installed using Homebrew, edit the `grafana.ini` file directly. Otherwise, add a configuration file named `custom.ini` to the `conf` folder to override the settings defined in `conf/defaults.ini`.

## Remove comments in the .ini files

Grafana uses semicolons (the `;` char) to comment out lines in a `.ini` file. You must uncomment each line in the `custom.ini` or the `grafana.ini` file that you are modifying by removing `;` from the beginning of that line. Otherwise your changes will be ignored.

For example:

```
# The HTTP port to use
;http_port = 3000
```

## Override configuration with environment variables

Do not use environment variables to _add_ new configuration settings. Instead, use environment variables to _override_ existing options.

To override an option:

```bash
GF_<SectionName>_<KeyName>
```

Where the section name is the text within the brackets. Everything should be uppercase, and `.` and `-` should be replaced by `_`.
For example, if you have these configuration settings: ```bash # default section instance_name = ${HOSTNAME} [security] admin_user = admin [auth.google] client_secret = 0ldS3cretKey [plugin.grafana-image-renderer] rendering_ignore_https_errors = true [feature_toggles] enable = newNavigation ``` You can override variables on Linux machines with: ```bash export GF_DEFAULT_INSTANCE_NAME=my-instance export GF_SECURITY_ADMIN_USER=owner export GF_AUTH_GOOGLE_CLIENT_SECRET=newS3cretKey export GF_PLUGIN_GRAFANA_IMAGE_RENDERER_RENDERING_IGNORE_HTTPS_ERRORS=true export GF_FEATURE_TOGGLES_ENABLE=newNavigation ``` ## Variable expansion If any of your options contains the expression `$__<provider>{<argument>}` or `${<environment variable>}`, then they will be processed by Grafana's variable expander. The expander runs the provider with the provided argument to get the final value of the option. There are three providers: `env`, `file`, and `vault`. ### Env provider The `env` provider can be used to expand an environment variable. If you set an option to `$__env{PORT}` the `PORT` environment variable will be used in its place. For environment variables you can also use the short-hand syntax `${PORT}`. Grafana's log directory would be set to the `grafana` directory in the directory behind the `LOGDIR` environment variable in the following example. ```ini [paths] logs = $__env{LOGDIR}/grafana ``` ### File provider `file` reads a file from the filesystem. It trims whitespace from the beginning and the end of files. The database password in the following example would be replaced by the content of the `/etc/secrets/gf_sql_password` file: ```ini [database] password = $__file{/etc/secrets/gf_sql_password} ``` ### Vault provider The `vault` provider allows you to manage your secrets with [Hashicorp Vault](https://www.hashicorp.com/products/vault). > Vault provider is only available in Grafana Enterprise v7.1+. For more information, refer to [Vault integration]() in [Grafana Enterprise](). <hr /> ## app_mode Options are `production` and `development`. Default is `production`. _Do not_ change this option unless you are working on Grafana development. ## instance_name Set the name of the grafana-server instance. Used in logging, internal metrics, and clustering info. Defaults to: `${HOSTNAME}`, which will be replaced with environment variable `HOSTNAME`, if that is empty or does not exist Grafana will try to use system calls to get the machine name. <hr /> ## [paths] ### data Path to where Grafana stores the sqlite3 database (if used), file-based sessions (if used), and other data. This path is usually specified via command line in the init.d script or the systemd service file. **macOS:** The default SQLite database is located at `/usr/local/var/lib/grafana` ### temp_data_lifetime How long temporary images in `data` directory should be kept. Defaults to: `24h`. Supported modifiers: `h` (hours), `m` (minutes), for example: `168h`, `30m`, `10h30m`. Use `0` to never clean up temporary files. ### logs Path to where Grafana stores logs. This path is usually specified via command line in the init.d script or the systemd service file. You can override it in the configuration file or in the default environment variable file. However, please note that by overriding this the default log path will be used temporarily until Grafana has fully initialized/started. 
Override log path using the command line argument `cfg:default.paths.logs`: ```bash ./grafana-server --config /custom/config.ini --homepath /custom/homepath cfg:default.paths.logs=/custom/path ``` **macOS:** By default, the log file should be located at `/usr/local/var/log/grafana/grafana.log`. ### plugins Directory where Grafana automatically scans and looks for plugins. For information about manually or automatically installing plugins, refer to [Install Grafana plugins](). **macOS:** By default, the Mac plugin location is: `/usr/local/var/lib/grafana/plugins`. ### provisioning Folder that contains [provisioning]() config files that Grafana will apply on startup. Dashboards will be reloaded when the json files changes. <hr /> ## [server] ### protocol `http`,`https`,`h2` or `socket` ### min_tls_version The TLS Handshake requires a minimum TLS version. The available options are TLS1.2 and TLS1.3. If you do not specify a version, the system uses TLS1.2. ### http_addr The host for the server to listen on. If your machine has more than one network interface, you can use this setting to expose the Grafana service on only one network interface and not have it available on others, such as the loopback interface. An empty value is equivalent to setting the value to `0.0.0.0`, which means the Grafana service binds to all interfaces. In environments where network address translation (NAT) is used, ensure you use the network interface address and not a final public address; otherwise, you might see errors such as `bind: cannot assign requested address` in the logs. ### http_port The port to bind to, defaults to `3000`. To use port 80 you need to either give the Grafana binary permission for example: ```bash $ sudo setcap 'cap_net_bind_service=+ep' /usr/sbin/grafana-server ``` Or redirect port 80 to the Grafana port using: ```bash $ sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3000 ``` Another way is to put a web server like Nginx or Apache in front of Grafana and have them proxy requests to Grafana. ### domain This setting is only used in as a part of the `root_url` setting (see below). Important if you use GitHub or Google OAuth. ### enforce_domain Redirect to correct domain if the host header does not match the domain. Prevents DNS rebinding attacks. Default is `false`. ### root_url This is the full URL used to access Grafana from a web browser. This is important if you use Google or GitHub OAuth authentication (for the callback URL to be correct). This setting is also important if you have a reverse proxy in front of Grafana that exposes it through a subpath. In that case add the subpath to the end of this URL setting. ### serve_from_sub_path Serve Grafana from subpath specified in `root_url` setting. By default it is set to `false` for compatibility reasons. By enabling this setting and using a subpath in `root_url` above, e.g.`root_url = http://localhost:3000/grafana`, Grafana is accessible on `http://localhost:3000/grafana`. If accessed without subpath Grafana will redirect to an URL with the subpath. ### router_logging Set to `true` for Grafana to log all HTTP requests (not just errors). These are logged as Info level events to the Grafana log. ### static_root_path The path to the directory where the front end files (HTML, JS, and CSS files). Defaults to `public` which is why the Grafana binary needs to be executed with working directory set to the installation path. 
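As an illustration of the `root_url` and `serve_from_sub_path` settings described above, here is a minimal sketch of a `[server]` block for running Grafana behind a reverse proxy under a subpath; the domain and URL are placeholders:

```ini
; Sketch: serve Grafana at https://example.com/grafana/ behind a reverse proxy.
; The domain and root_url values are placeholders, not real endpoints.
[server]
protocol = http
http_port = 3000
domain = example.com
root_url = https://example.com/grafana/
serve_from_sub_path = true
```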
### enable_gzip Set this option to `true` to enable HTTP compression, this can improve transfer speed and bandwidth utilization. It is recommended that most users set it to `true`. By default it is set to `false` for compatibility reasons. ### cert_file Path to the certificate file (if `protocol` is set to `https` or `h2`). ### cert_key Path to the certificate key file (if `protocol` is set to `https` or `h2`). ### certs_watch_interval Controls whether `cert_key` and `cert_file` are periodically watched for changes. Disabled, by default. When enabled, `cert_key` and `cert_file` are watched for changes. If there is change, the new certificates are loaded automatically. After the new certificates are loaded, connections with old certificates will not work. You must reload the connections to the old certs for them to work. ### socket_gid GID where the socket should be set when `protocol=socket`. Make sure that the target group is in the group of Grafana process and that Grafana process is the file owner before you change this setting. It is recommended to set the gid as http server user gid. Not set when the value is -1. ### socket_mode Mode where the socket should be set when `protocol=socket`. Make sure that Grafana process is the file owner before you change this setting. ### socket Path where the socket should be created when `protocol=socket`. Make sure Grafana has appropriate permissions for that path before you change this setting. ### cdn_url Specify a full HTTP URL address to the root of your Grafana CDN assets. Grafana will add edition and version paths. For example, given a cdn url like `https://cdn.myserver.com` grafana will try to load a javascript file from `http://cdn.myserver.com/grafana-oss/7.4.0/public/build/app.<hash>.js`. ### read_timeout Sets the maximum time using a duration format (5s/5m/5ms) before timing out read of an incoming request and closing idle connections. `0` means there is no timeout for reading the request. <hr /> ## [server.custom_response_headers] This setting enables you to specify additional headers that the server adds to HTTP(S) responses. ``` exampleHeader1 = exampleValue1 exampleHeader2 = exampleValue2 ``` <hr /> ## [database] Grafana needs a database to store users and dashboards (and other things). By default it is configured to use [`sqlite3`](https://www.sqlite.org/index.html) which is an embedded database (included in the main Grafana binary). ### type Either `mysql`, `postgres` or `sqlite3`, it's your choice. ### host Only applicable to MySQL or Postgres. Includes IP or hostname and port or in case of Unix sockets the path to it. For example, for MySQL running on the same host as Grafana: `host = 127.0.0.1:3306` or with Unix sockets: `host = /var/run/mysqld/mysqld.sock` ### name The name of the Grafana database. Leave it set to `grafana` or some other name. ### user The database user (not applicable for `sqlite3`). ### password The database user's password (not applicable for `sqlite3`). If the password contains `#` or `;` you have to wrap it with triple quotes. For example `"""#password;"""` ### url Use either URL or the other fields below to configure the database Example: `mysql://user:secret@host:port/database` ### max_idle_conn The maximum number of connections in the idle connection pool. ### max_open_conn The maximum number of open connections to the database. For MYSQL, configure this setting on both Grafana and the database. 
For more information, refer to [`sysvar_max_connections`](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_max_connections). ### conn_max_lifetime Sets the maximum amount of time a connection may be reused. The default is 14400 (which means 14400 seconds or 4 hours). For MySQL, this setting should be shorter than the [`wait_timeout`](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_wait_timeout) variable. ### migration_locking Set to `false` to disable database locking during the migrations. Default is true. ### locking_attempt_timeout_sec For "mysql" and "postgres" only. Specify the time (in seconds) to wait before failing to lock the database for the migrations. Default is 0. ### log_queries Set to `true` to log the sql calls and execution times. ### ssl_mode For Postgres, use use any [valid libpq `sslmode`](https://www.postgresql.org/docs/current/libpq-ssl.html#LIBPQ-SSL-SSLMODE-STATEMENTS), e.g.`disable`, `require`, `verify-full`, etc. For MySQL, use either `true`, `false`, or `skip-verify`. ### ssl_sni For Postgres, set to `0` to disable [Server Name Indication](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNECT-SSLSNI). This is enabled by default on SSL-enabled connections. ### isolation_level Only the MySQL driver supports isolation levels in Grafana. In case the value is empty, the driver's default isolation level is applied. Available options are "READ-UNCOMMITTED", "READ-COMMITTED", "REPEATABLE-READ" or "SERIALIZABLE". ### ca_cert_path The path to the CA certificate to use. On many Linux systems, certs can be found in `/etc/ssl/certs`. ### client_key_path The path to the client key. Only if server requires client authentication. ### client_cert_path The path to the client cert. Only if server requires client authentication. ### server_cert_name The common name field of the certificate used by the `mysql` or `postgres` server. Not necessary if `ssl_mode` is set to `skip-verify`. ### path Only applicable for `sqlite3` database. The file path where the database will be stored. ### cache_mode For "sqlite3" only. [Shared cache](https://www.sqlite.org/sharedcache.html) setting used for connecting to the database. (private, shared) Defaults to `private`. ### wal For "sqlite3" only. Setting to enable/disable [Write-Ahead Logging](https://sqlite.org/wal.html). The default value is `false` (disabled). ### query_retries This setting applies to `sqlite` only and controls the number of times the system retries a query when the database is locked. The default value is `0` (disabled). ### transaction_retries This setting applies to `sqlite` only and controls the number of times the system retries a transaction when the database is locked. The default value is `5`. ### instrument_queries Set to `true` to add metrics and tracing for database queries. The default value is `false`. <hr /> ## [remote_cache] Caches authentication details and session information in the configured database, Redis or Memcached. This setting does not configure [Query Caching in Grafana Enterprise](). ### type Either `redis`, `memcached`, or `database`. Defaults to `database` ### connstr The remote cache connection string. The format depends on the `type` of the remote cache. Options are `database`, `redis`, and `memcache`. #### database Leave empty when using `database` since it will use the primary database. #### redis Example connstr: `addr=127.0.0.1:6379,pool_size=100,db=0,ssl=false` - `addr` is the host `:` port of the redis server. 
- `pool_size` (optional) is the number of underlying connections that can be made to redis. - `db` (optional) is the number identifier of the redis database you want to use. - `ssl` (optional) is if SSL should be used to connect to redis server. The value may be `true`, `false`, or `insecure`. Setting the value to `insecure` skips verification of the certificate chain and hostname when making the connection. #### memcache Example connstr: `127.0.0.1:11211` <hr /> ## [dataproxy] ### logging This enables data proxy logging, default is `false`. ### timeout How long the data proxy should wait before timing out. Default is 30 seconds. This setting also applies to core backend HTTP data sources where query requests use an HTTP client with timeout set. ### keep_alive_seconds Interval between keep-alive probes. Default is `30` seconds. For more details check the [Dialer.KeepAlive](https://golang.org/pkg/net/#Dialer.KeepAlive) documentation. ### tls_handshake_timeout_seconds The length of time that Grafana will wait for a successful TLS handshake with the datasource. Default is `10` seconds. For more details check the [Transport.TLSHandshakeTimeout](https://golang.org/pkg/net/http/#Transport.TLSHandshakeTimeout) documentation. ### expect_continue_timeout_seconds The length of time that Grafana will wait for a datasource’s first response headers after fully writing the request headers, if the request has an β€œExpect: 100-continue” header. A value of `0` will result in the body being sent immediately. Default is `1` second. For more details check the [Transport.ExpectContinueTimeout](https://golang.org/pkg/net/http/#Transport.ExpectContinueTimeout) documentation. ### max_conns_per_host Optionally limits the total number of connections per host, including connections in the dialing, active, and idle states. On limit violation, dials are blocked. A value of `0` means that there are no limits. Default is `0`. For more details check the [Transport.MaxConnsPerHost](https://golang.org/pkg/net/http/#Transport.MaxConnsPerHost) documentation. ### max_idle_connections The maximum number of idle connections that Grafana will maintain. Default is `100`. For more details check the [Transport.MaxIdleConns](https://golang.org/pkg/net/http/#Transport.MaxIdleConns) documentation. ### idle_conn_timeout_seconds The length of time that Grafana maintains idle connections before closing them. Default is `90` seconds. For more details check the [Transport.IdleConnTimeout](https://golang.org/pkg/net/http/#Transport.IdleConnTimeout) documentation. ### send_user_header If enabled and user is not anonymous, data proxy will add X-Grafana-User header with username into the request. Default is `false`. ### response_limit Limits the amount of bytes that will be read/accepted from responses of outgoing HTTP requests. Default is `0` which means disabled. ### row_limit Limits the number of rows that Grafana will process from SQL (relational) data sources. Default is `1000000`. ### user_agent Sets a custom value for the `User-Agent` header for outgoing data proxy requests. If empty, the default value is `Grafana/<BuildVersion>` (for example `Grafana/9.0.0`). <hr /> ## [analytics] ### enabled This option is also known as _usage analytics_. When `false`, this option disables the writers that write to the Grafana database and the associated features, such as dashboard and data source insights, presence indicators, and advanced dashboard search. The default value is `true`. 
### reporting_enabled When enabled Grafana will send anonymous usage statistics to `stats.grafana.org`. No IP addresses are being tracked, only simple counters to track running instances, versions, dashboard and error counts. It is very helpful to us, so please leave this enabled. Counters are sent every 24 hours. Default value is `true`. ### check_for_updates Set to false, disables checking for new versions of Grafana from Grafana's GitHub repository. When enabled, the check for a new version runs every 10 minutes. It will notify, via the UI, when a new version is available. The check itself will not prompt any auto-updates of the Grafana software, nor will it send any sensitive information. ### check_for_plugin_updates Set to false disables checking for new versions of installed plugins from https://grafana.com. When enabled, the check for a new plugin runs every 10 minutes. It will notify, via the UI, when a new plugin update exists. The check itself will not prompt any auto-updates of the plugin, nor will it send any sensitive information. ### google_analytics_ua_id If you want to track Grafana usage via Google analytics specify _your_ Universal Analytics ID here. By default this feature is disabled. ### google_analytics_4_id If you want to track Grafana usage via Google Analytics 4 specify _your_ GA4 ID here. By default this feature is disabled. ### google_tag_manager_id Google Tag Manager ID, only enabled if you enter an ID here. ### rudderstack_write_key If you want to track Grafana usage via Rudderstack specify _your_ Rudderstack Write Key here. The `rudderstack_data_plane_url` must also be provided for this feature to be enabled. By default this feature is disabled. ### rudderstack_data_plane_url Rudderstack data plane url that will receive Rudderstack events. The `rudderstack_write_key` must also be provided for this feature to be enabled. ### rudderstack_sdk_url Optional. If tracking with Rudderstack is enabled, you can provide a custom URL to load the Rudderstack SDK. ### rudderstack_config_url Optional. If tracking with Rudderstack is enabled, you can provide a custom URL to load the Rudderstack config. ### rudderstack_integrations_url Optional. If tracking with Rudderstack is enabled, you can provide a custom URL to load the SDK for destinations running in device mode. This setting is only valid for Rudderstack version 1.1 and higher. ### application_insights_connection_string If you want to track Grafana usage via Azure Application Insights, then specify _your_ Application Insights connection string. Since the connection string contains semicolons, you need to wrap it in backticks (`). By default, tracking usage is disabled. ### application_insights_endpoint_url Optionally, use this option to override the default endpoint address for Application Insights data collecting. For details, refer to the [Azure documentation](https://docs.microsoft.com/en-us/azure/azure-monitor/app/custom-endpoints?tabs=js). <hr /> ### feedback_links_enabled Set to `false` to remove all feedback links from the UI. Default is `true`. ## [security] ### disable_initial_admin_creation Disable creation of admin user on first start of Grafana. Default is `false`. ### admin_user The name of the default Grafana Admin user, who has full permissions. Default is `admin`. ### admin_password The password of the default Grafana Admin. Set once on first-run. Default is `admin`. ### admin_email The email of the default Grafana Admin, created on startup. Default is `admin@localhost`. 
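A minimal sketch combining the admin bootstrap settings above; the credentials shown are placeholders and should be replaced:

```ini
; Sketch: initial admin account settings. All values are placeholders.
[security]
disable_initial_admin_creation = false
admin_user = admin
admin_password = change-me-on-first-login
admin_email = admin@example.com
```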
### secret_key Used for signing some data source settings like secrets and passwords, the encryption format used is AES-256 in CFB mode. Cannot be changed without requiring an update to data source settings to re-encode them. ### disable_gravatar Set to `true` to disable the use of Gravatar for user profile images. Default is `false`. ### data_source_proxy_whitelist Define a whitelist of allowed IP addresses or domains, with ports, to be used in data source URLs with the Grafana data source proxy. Format: `ip_or_domain:port` separated by spaces. PostgreSQL, MySQL, and MSSQL data sources do not use the proxy and are therefore unaffected by this setting. ### disable_brute_force_login_protection Set to `true` to disable [brute force login protection](https://cheatsheetseries.owasp.org/cheatsheets/Authentication_Cheat_Sheet.html#account-lockout). Default is `false`. An existing user's account will be unable to login for 5 minutes if all login attempts are spent within a 5 minute window. ### brute_force_login_protection_max_attempts Configure how many login attempts a user have within a 5 minute window before the account will be locked. Default is `5`. ### cookie_secure Set to `true` if you host Grafana behind HTTPS. Default is `false`. ### cookie_samesite Sets the `SameSite` cookie attribute and prevents the browser from sending this cookie along with cross-site requests. The main goal is to mitigate the risk of cross-origin information leakage. This setting also provides some protection against cross-site request forgery attacks (CSRF), [read more about SameSite here](https://owasp.org/www-community/SameSite). Valid values are `lax`, `strict`, `none`, and `disabled`. Default is `lax`. Using value `disabled` does not add any `SameSite` attribute to cookies. ### allow_embedding When `false`, the HTTP header `X-Frame-Options: deny` will be set in Grafana HTTP responses which will instruct browsers to not allow rendering Grafana in a `<frame>`, `<iframe>`, `<embed>` or `<object>`. The main goal is to mitigate the risk of [Clickjacking](https://owasp.org/www-community/attacks/Clickjacking). Default is `false`. ### strict_transport_security Set to `true` if you want to enable HTTP `Strict-Transport-Security` (HSTS) response header. Only use this when HTTPS is enabled in your configuration, or when there is another upstream system that ensures your application does HTTPS (like a frontend load balancer). HSTS tells browsers that the site should only be accessed using HTTPS. ### strict_transport_security_max_age_seconds Sets how long a browser should cache HSTS in seconds. Only applied if strict_transport_security is enabled. The default value is `86400`. ### strict_transport_security_preload Set to `true` to enable HSTS `preloading` option. Only applied if strict_transport_security is enabled. The default value is `false`. ### strict_transport_security_subdomains Set to `true` to enable the HSTS includeSubDomains option. Only applied if strict_transport_security is enabled. The default value is `false`. ### x_content_type_options Set to `false` to disable the X-Content-Type-Options response header. The X-Content-Type-Options response HTTP header is a marker used by the server to indicate that the MIME types advertised in the Content-Type headers should not be changed and be followed. The default value is `true`. ### x_xss_protection Set to `false` to disable the X-XSS-Protection header, which tells browsers to stop pages from loading when they detect reflected cross-site scripting (XSS) attacks. 
The default value is `true`. ### content_security_policy Set to `true` to add the Content-Security-Policy header to your requests. CSP allows to control resources that the user agent can load and helps prevent XSS attacks. ### content_security_policy_template Set the policy template that will be used when adding the `Content-Security-Policy` header to your requests. `$NONCE` in the template includes a random nonce. ### content_security_policy_report_only Set to `true` to add the `Content-Security-Policy-Report-Only` header to your requests. CSP in Report Only mode enables you to experiment with policies by monitoring their effects without enforcing them. You can enable both policies simultaneously. ### content_security_policy_template Set the policy template that will be used when adding the `Content-Security-Policy-Report-Only` header to your requests. `$NONCE` in the template includes a random nonce. ### actions_allow_post_url Sets API paths to be accessible between plugins using the POST verb. If the value is empty, you can only pass remote requests through the proxy. If the value is set, you can also send authenticated POST requests to the local server. You typically use this to enable backend communication between plugins. This is a comma-separated list which uses glob matching. This will allow access to all plugins that have a backend: `actions_allow_post_url=/api/plugins/*` This will limit access to the backend of a single plugin: `actions_allow_post_url=/api/plugins/grafana-special-app` <hr /> ### angular_support_enabled This is set to false by default, meaning that the angular framework and support components will not be loaded. This means that all [plugins]() and core features that depend on angular support will stop working. The core features that depend on angular are: - Old graph panel - Old table panel These features each have supported alternatives, and we recommend using them. ### csrf_trusted_origins List of additional allowed URLs to pass by the CSRF check. Suggested when authentication comes from an IdP. ### csrf_additional_headers List of allowed headers to be set by the user. Suggested to use for if authentication lives behind reverse proxies. ### csrf_always_check Set to `true` to execute the CSRF check even if the login cookie is not in a request (default `false`). ### enable_frontend_sandbox_for_plugins Comma-separated list of plugins ids that will be loaded inside the frontend sandbox. ## [snapshots] ### enabled Set to `false` to disable the snapshot feature (default `true`). ### external_enabled Set to `false` to disable external snapshot publish endpoint (default `true`). ### external_snapshot_url Set root URL to a Grafana instance where you want to publish external snapshots (defaults to https://snapshots.raintank.io). ### external_snapshot_name Set name for external snapshot button. Defaults to `Publish to snapshots.raintank.io`. ### public_mode Set to true to enable this Grafana instance to act as an external snapshot server and allow unauthenticated requests for creating and deleting snapshots. Default is `false`. <hr /> ## [dashboards] ### versions_to_keep Number dashboard versions to keep (per dashboard). Default: `20`, Minimum: `1`. ### min_refresh_interval This feature prevents users from setting the dashboard refresh interval to a lower value than a given interval value. The default interval value is 5 seconds. The interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. `30s` or `1m`. 
This also limits the refresh interval options in Explore. ### default_home_dashboard_path Path to the default home dashboard. If this value is empty, then Grafana uses StaticRootPath + "dashboards/home.json". On Linux, Grafana uses `/usr/share/grafana/public/dashboards/home.json` as the default home dashboard location. <hr /> ## [sql_datasources] ### max_open_conns_default For SQL data sources (MySql, Postgres, MSSQL) you can override the default maximum number of open connections (default: 100). The value configured in data source settings will be preferred over the default value. ### max_idle_conns_default For SQL data sources (MySql, Postgres, MSSQL) you can override the default allowed number of idle connections (default: 100). The value configured in data source settings will be preferred over the default value. ### max_conn_lifetime_default For SQL data sources (MySql, Postgres, MSSQL) you can override the default maximum connection lifetime specified in seconds (default: 14400). The value configured in data source settings will be preferred over the default value. <hr/> ## [users] ### allow_sign_up Set to `false` to prohibit users from being able to sign up / create user accounts. Default is `false`. The admin user can still create users. For more information about creating a user, refer to [Add a user](). ### allow_org_create Set to `false` to prohibit users from creating new organizations. Default is `false`. ### auto_assign_org Set to `true` to automatically add new users to the main organization (id 1). When set to `false`, new users automatically cause a new organization to be created for that new user. The organization will be created even if the `allow_org_create` setting is set to `false`. Default is `true`. ### auto_assign_org_id Set this value to automatically add new users to the provided org. This requires `auto_assign_org` to be set to `true`. Please make sure that this organization already exists. Default is 1. ### auto_assign_org_role The `auto_assign_org_role` setting determines the default role assigned to new users in the main organization if `auto_assign_org` setting is set to `true`. You can set this to one of the following roles: (`Viewer` (default), `Admin`, `Editor`, and `None`). For example: `auto_assign_org_role = Viewer` ### verify_email_enabled Require email validation before sign up completes or when updating a user email address. Default is `false`. ### login_default_org_id Set the default organization for users when they sign in. The default is `-1`. ### login_hint Text used as placeholder text on login page for login/username input. ### password_hint Text used as placeholder text on login page for password input. ### default_theme Sets the default UI theme: `dark`, `light`, or `system`. The default theme is `dark`. `system` matches the user's system theme. ### default_language This option will set the default UI language if a supported IETF language tag like `en-US` is available. If set to `detect`, the default UI language will be determined by browser preference. The default is `en-US`. ### home_page Path to a custom home page. Users are only redirected to this if the default home dashboard is used. It should match a frontend route and contain a leading slash. ### External user management If you manage users externally you can replace the user invite button for organizations with a link to an external site together with a description. 
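To illustrate the organization auto-assignment settings above, here is a sketch of a `[users]` block that adds new users to an existing organization as Editors; the organization ID is an assumption:

```ini
; Sketch: auto-assign new users to organization 2 with the Editor role.
; The organization with ID 2 is assumed to already exist.
[users]
allow_sign_up = false
auto_assign_org = true
auto_assign_org_id = 2
auto_assign_org_role = Editor
default_theme = system
```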
### viewers_can_edit Viewers can access and use [Explore]() and perform temporary edits on panels in dashboards they have access to. They cannot save their changes. Default is `false`. ### editors_can_admin Editors can administrate dashboards, folders and teams they create. Default is `false`. ### user_invite_max_lifetime_duration The duration in time a user invitation remains valid before expiring. This setting should be expressed as a duration. Examples: 6h (hours), 2d (days), 1w (week). Default is `24h` (24 hours). The minimum supported duration is `15m` (15 minutes). ### verification_email_max_lifetime_duration The duration in time a verification email, used to update the email address of a user, remains valid before expiring. This setting should be expressed as a duration. Examples: 6h (hours), 2d (days), 1w (week). Default is 1h (1 hour). ### last_seen_update_interval The frequency of updating a user's last seen time. This setting should be expressed as a duration. Examples: 1h (hour), 15m (minutes) Default is `15m` (15 minutes). The minimum supported duration is `5m` (5 minutes). The maximum supported duration is `1h` (1 hour). ### hidden_users This is a comma-separated list of usernames. Users specified here are hidden in the Grafana UI. They are still visible to Grafana administrators and to themselves. <hr> ## [auth] Grafana provides many ways to authenticate users. Refer to the Grafana [Authentication overview]() and other authentication documentation for detailed instructions on how to set up and configure authentication. ### login_cookie_name The cookie name for storing the auth token. Default is `grafana_session`. ### login_maximum_inactive_lifetime_duration The maximum lifetime (duration) an authenticated user can be inactive before being required to login at next visit. Default is 7 days (7d). This setting should be expressed as a duration, e.g. 5m (minutes), 6h (hours), 10d (days), 2w (weeks), 1M (month). The lifetime resets at each successful token rotation (token_rotation_interval_minutes). ### login_maximum_lifetime_duration The maximum lifetime (duration) an authenticated user can be logged in since login time before being required to login. Default is 30 days (30d). This setting should be expressed as a duration, e.g. 5m (minutes), 6h (hours), 10d (days), 2w (weeks), 1M (month). ### token_rotation_interval_minutes How often auth tokens are rotated for authenticated users when the user is active. The default is each 10 minutes. ### disable_login_form Set to true to disable (hide) the login form, useful if you use OAuth. Default is false. ### disable_signout_menu Set to `true` to disable the signout link in the side menu. This is useful if you use auth.proxy. Default is `false`. ### signout_redirect_url The URL the user is redirected to upon signing out. To support [OpenID Connect RP-Initiated Logout](https://openid.net/specs/openid-connect-rpinitiated-1_0.html), the user must add `post_logout_redirect_uri` to the `signout_redirect_url`. Example: signout_redirect_url = http://localhost:8087/realms/grafana/protocol/openid-connect/logout?post_logout_redirect_uri=http%3A%2F%2Flocalhost%3A3000%2Flogin ### oauth_auto_login This option is deprecated - use `auto_login` option for specific OAuth provider instead. Set to `true` to attempt login with OAuth automatically, skipping the login screen. This setting is ignored if multiple OAuth providers are configured. Default is `false`. 
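As a sketch of the session lifetime and token rotation settings above; the durations shown are illustrative, not recommendations:

```ini
; Sketch: session lifetime and token rotation settings. Durations are illustrative.
[auth]
login_maximum_inactive_lifetime_duration = 7d
login_maximum_lifetime_duration = 30d
token_rotation_interval_minutes = 10
disable_login_form = false
```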
### oauth_state_cookie_max_age How many seconds the OAuth state cookie lives before being deleted. Default is `600` (seconds) Administrators can increase this if they experience OAuth login state mismatch errors. ### oauth_login_error_message A custom error message for when users are unauthorized. Default is a key for an internationalized phrase in the frontend, `Login provider denied login request`. ### oauth_refresh_token_server_lock_min_wait_ms Minimum wait time in milliseconds for the server lock retry mechanism. Default is `1000` (milliseconds). The server lock retry mechanism is used to prevent multiple Grafana instances from simultaneously refreshing OAuth tokens. This mechanism waits at least this amount of time before retrying to acquire the server lock. There are five retries in total, so with the default value, the total wait time (for acquiring the lock) is at least 5 seconds (the wait time between retries is calculated as random(n, n + 500)), which means that the maximum token refresh duration must be less than 5-6 seconds. If you experience issues with the OAuth token refresh mechanism, you can increase this value to allow more time for the token refresh to complete. ### oauth_skip_org_role_update_sync This option is removed from G11 in favor of OAuth provider specific `skip_org_role_sync` settings. The following sections explain settings for each provider. If you want to change the `oauth_skip_org_role_update_sync` setting from `true` to `false`, then each provider you have set up, use the `skip_org_role_sync` setting to specify whether you want to skip the synchronization. Currently if no organization role mapping is found for a user, Grafana doesn't update the user's organization role. With Grafana 10, if `oauth_skip_org_role_update_sync` option is set to `false`, users with no mapping will be reset to the default organization role on every login. [See `auto_assign_org_role` option](). ### skip_org_role_sync `skip_org_role_sync` prevents the synchronization of organization roles for a specific OAuth integration, while the deprecated setting `oauth_skip_org_role_update_sync` affects all configured OAuth providers. The default value for `skip_org_role_sync` is `false`. With `skip_org_role_sync` set to `false`, the users' organization and role is reset on every new login, based on the external provider's role. See your provider in the tables below. With `skip_org_role_sync` set to `true`, when a user logs in for the first time, Grafana sets the organization role based on the value specified in `auto_assign_org_role` and forces the organization to `auto_assign_org_id` when specified, otherwise it falls back to OrgID `1`. > **Note**: Enabling `skip_org_role_sync` also disables the synchronization of Grafana Admins from the external provider, as such `allow_assign_grafana_admin` is ignored. Use this setting when you want to manage the organization roles of your users from within Grafana and be able to manually assign them to multiple organizations, or to prevent synchronization conflicts when they can be synchronized from another provider. 
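For example, to manage roles in Grafana for users who sign in through a single provider, you can set the provider-specific flag, shown here for Google as a sketch:

```ini
; Sketch: skip organization role synchronization only for the Google OAuth integration.
[auth.google]
skip_org_role_sync = true
```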
The behavior of `oauth_skip_org_role_update_sync` and `skip_org_role_sync`, can be seen in the tables below: **[auth.grafana_com]** | `oauth_skip_org_role_update_sync` | `skip_org_role_sync` | **Resulting Org Role** | Modifiable | |-----------------------------------|----------------------|-------------------------------------------------------------------------------------------------------------------------------------|---------------------------| | false | false | Synchronize user organization role with Grafana.com role. If no role is provided, `auto_assign_org_role` is set. | false | | true | false | Skips organization role synchronization for all OAuth providers' users. Role is set to `auto_assign_org_role`. | true | | false | true | Skips organization role synchronization for Grafana.com users. Role is set to `auto_assign_org_role`. | true | | true | true | Skips organization role synchronization for Grafana.com users and all other OAuth providers. Role is set to `auto_assign_org_role`. | true | **[auth.azuread]** | `oauth_skip_org_role_update_sync` | `skip_org_role_sync` | **Resulting Org Role** | Modifiable | |-----------------------------------|----------------------|---------------------------------------------------------------------------------------------------------------------------------|---------------------------| | false | false | Synchronize user organization role with AzureAD role. If no role is provided, `auto_assign_org_role` is set. | false | | true | false | Skips organization role synchronization for all OAuth providers' users. Role is set to `auto_assign_org_role`. | true | | false | true | Skips organization role synchronization for AzureAD users. Role is set to `auto_assign_org_role`. | true | | true | true | Skips organization role synchronization for AzureAD users and all other OAuth providers. Role is set to `auto_assign_org_role`. | true | **[auth.google]** | `oauth_skip_org_role_update_sync` | `skip_org_role_sync` | **Resulting Org Role** | Modifiable | |-----------------------------------|----------------------|----------------------------------------------------------------------------------------|---------------------------| | false | false | User organization role is set to `auto_assign_org_role` and cannot be changed. | false | | true | false | User organization role is set to `auto_assign_org_role` and can be changed in Grafana. | true | | false | true | User organization role is set to `auto_assign_org_role` and can be changed in Grafana. | true | | true | true | User organization role is set to `auto_assign_org_role` and can be changed in Grafana. | true | For GitLab, GitHub, Okta, Generic OAuth providers, Grafana synchronizes organization roles and sets Grafana Admins. The `allow_assign_grafana_admin` setting is also accounted for, to allow or not setting the Grafana Admin role from the external provider. **[auth.github]** | `oauth_skip_org_role_update_sync` | `skip_org_role_sync` | **Resulting Org Role** | Modifiable | |-----------------------------------|----------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------| | false | false | Synchronize user organization role with GitHub role. If no role is provided, `auto_assign_org_role` is set. | false | | true | false | Skips organization role synchronization for all OAuth providers' users. Role is set to `auto_assign_org_role`. 
| true | | false | true | Skips organization role and Grafana Admin synchronization for GitHub users. Role is set to `auto_assign_org_role`. | true | | true | true | Skips organization role synchronization for all OAuth providers and skips Grafana Admin synchronization for GitHub users. Role is set to `auto_assign_org_role`. | true | **[auth.gitlab]** | `oauth_skip_org_role_update_sync` | `skip_org_role_sync` | **Resulting Org Role** | Modifiable | |-----------------------------------|----------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------| | false | false | Synchronize user organization role with Gitlab role. If no role is provided, `auto_assign_org_role` is set. | false | | true | false | Skips organization role synchronization for all OAuth providers' users. Role is set to `auto_assign_org_role`. | true | | false | true | Skips organization role and Grafana Admin synchronization for Gitlab users. Role is set to `auto_assign_org_role`. | true | | true | true | Skips organization role synchronization for all OAuth providers and skips Grafana Admin synchronization for Gitlab users. Role is set to `auto_assign_org_role`. | true | **[auth.generic_oauth]** | `oauth_skip_org_role_update_sync` | `skip_org_role_sync` | **Resulting Org Role** | Modifiable | |-----------------------------------|----------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------| | false | false | Synchronize user organization role with the provider's role. If no role is provided, `auto_assign_org_role` is set. | false | | true | false | Skips organization role synchronization for all OAuth providers' users. Role is set to `auto_assign_org_role`. | true | | false | true | Skips organization role and Grafana Admin synchronization for the provider's users. Role is set to `auto_assign_org_role`. | true | | true | true | Skips organization role synchronization for all OAuth providers and skips Grafana Admin synchronization for the provider's users. Role is set to `auto_assign_org_role`. | true | **[auth.okta]** | `oauth_skip_org_role_update_sync` | `skip_org_role_sync` | **Resulting Org Role** | Modifiable | |-----------------------------------|----------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------| | false | false | Synchronize user organization role with Okta role. If no role is provided, `auto_assign_org_role` is set. | false | | true | false | Skips organization role synchronization for all OAuth providers' users. Role is set to `auto_assign_org_role`. | true | | false | true | Skips organization role and Grafana Admin synchronization for Okta users. Role is set to `auto_assign_org_role`. | true | | true | true | Skips organization role synchronization for all OAuth providers and skips Grafana Admin synchronization for Okta users. Role is set to `auto_assign_org_role`. 
| true | #### Example skip_org_role_sync [auth.google] | `oauth_skip_org_role_update_sync` | `skip_org_role_sync` | **Resulting Org Role** | **Example Scenario** | |-----------------------------------|----------------------|-----------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | false | false | Synchronized with Google Auth organization roles | A user logs in to Grafana using their Google account and their organization role is automatically set based on their role in Google. | | true | false | Skipped synchronization of organization roles from all OAuth providers | A user logs in to Grafana using their Google account and their organization role is **not** set based on their role. But Grafana Administrators can modify the role from the UI. | | false | true | Skipped synchronization of organization roles Google | A user logs in to Grafana using their Google account and their organization role is **not** set based on their role in Google. But Grafana Administrators can modify the role from the UI. | | true | true | Skipped synchronization of organization roles from all OAuth providers including Google | A user logs in to Grafana using their Google account and their organization role is **not** set based on their role in Google. But Grafana Administrators can modify the role from the UI. | ### api_key_max_seconds_to_live Limit of API key seconds to live before expiration. Default is -1 (unlimited). ### sigv4_auth_enabled Set to `true` to enable the AWS Signature Version 4 Authentication option for HTTP-based datasources. Default is `false`. ### sigv4_verbose_logging Set to `true` to enable verbose request signature logging when AWS Signature Version 4 Authentication is enabled. Default is `false`. <hr /> ### managed_service_accounts_enabled > Only available in Grafana 11.3+. Set to `true` to enable the use of managed service accounts for plugin authentication. Default is `false`. > **Limitations:** > This feature currently **only supports single-organization deployments**. > The plugin's service account is automatically created in the default organization. This means the plugin can only access data and resources within that specific organization. ## [auth.anonymous] Refer to [Anonymous authentication]() for detailed instructions. <hr /> ## [auth.github] Refer to [GitHub OAuth2 authentication]() for detailed instructions. <hr /> ## [auth.gitlab] Refer to [Gitlab OAuth2 authentication]() for detailed instructions. <hr /> ## [auth.google] Refer to [Google OAuth2 authentication]() for detailed instructions. <hr /> ## [auth.grafananet] Legacy key names, still in the config file so they work in env variables. <hr /> ## [auth.grafana_com] Legacy key names, still in the config file so they work in env variables. <hr /> ## [auth.azuread] Refer to [Azure AD OAuth2 authentication]() for detailed instructions. <hr /> ## [auth.okta] Refer to [Okta OAuth2 authentication]() for detailed instructions. <hr /> ## [auth.generic_oauth] Refer to [Generic OAuth authentication]() for detailed instructions. <hr /> ## [auth.basic] Refer to [Basic authentication]() for detailed instructions. <hr /> ## [auth.proxy] Refer to [Auth proxy authentication]() for detailed instructions. <hr /> ## [auth.ldap] Refer to [LDAP authentication]() for detailed instructions. 
## [aws] You can configure core and external AWS plugins. ### allowed_auth_providers Specify what authentication providers the AWS plugins allow. For a list of allowed providers, refer to the data-source configuration page for a given plugin. If you configure a plugin by provisioning, only providers that are specified in `allowed_auth_providers` are allowed. Options: `default` (AWS SDK default), `keys` (Access and secret key), `credentials` (Credentials file), `ec2_iam_role` (EC2 IAM role) ### assume_role_enabled Set to `false` to disable AWS authentication from using an assumed role with temporary security credentials. For details about assume roles, refer to the AWS API reference documentation about the [AssumeRole](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html) operation. If this option is disabled, the **Assume Role** and the **External Id** field are removed from the AWS data source configuration page. If the plugin is configured using provisioning, it is possible to use an assumed role as long as `assume_role_enabled` is set to `true`. ### list_metrics_page_limit Use the [List Metrics API](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_ListMetrics.html) option to load metrics for custom namespaces in the CloudWatch data source. By default, the page limit is 500. <hr /> ## [azure] Grafana supports additional integration with Azure services when hosted in the Azure Cloud. ### cloud Azure cloud environment where Grafana is hosted: | Azure Cloud | Value | | ------------------------------------------------ | ---------------------- | | Microsoft Azure public cloud | AzureCloud (_default_) | | Microsoft Chinese national cloud | AzureChinaCloud | | US Government cloud | AzureUSGovernment | | Microsoft German national cloud ("Black Forest") | AzureGermanCloud | ### clouds_config The JSON config defines a list of Azure clouds and their associated properties when hosted in custom Azure environments. For example: ```ini clouds_config = `[ { "name":"CustomCloud1", "displayName":"Custom Cloud 1", "aadAuthority":"https://login.cloud1.contoso.com/", "properties":{ "azureDataExplorerSuffix": ".kusto.windows.cloud1.contoso.com", "logAnalytics": "https://api.loganalytics.cloud1.contoso.com", "portal": "https://portal.azure.cloud1.contoso.com", "prometheusResourceId": "https://prometheus.monitor.azure.cloud1.contoso.com", "resourceManager": "https://management.azure.cloud1.contoso.com" } }]` ``` ### managed_identity_enabled Specifies whether Grafana hosted in Azure service with Managed Identity configured (e.g. Azure Virtual Machines instance). Disabled by default, needs to be explicitly enabled. ### managed_identity_client_id The client ID to use for user-assigned managed identity. Should be set for user-assigned identity and should be empty for system-assigned identity. ### workload_identity_enabled Specifies whether Azure AD Workload Identity authentication should be enabled in datasources that support it. For more documentation on Azure AD Workload Identity, review [Azure AD Workload Identity](https://azure.github.io/azure-workload-identity/docs/) documentation. Disabled by default, needs to be explicitly enabled. ### workload_identity_tenant_id Tenant ID of the Azure AD Workload Identity. Allows to override default tenant ID of the Azure AD identity associated with the Kubernetes service account. ### workload_identity_client_id Client ID of the Azure AD Workload Identity. 
Allows to override default client ID of the Azure AD identity associated with the Kubernetes service account. ### workload_identity_token_file Custom path to token file for the Azure AD Workload Identity. Allows to set a custom path to the projected service account token file. ### user_identity_enabled Specifies whether user identity authentication (on behalf of currently signed-in user) should be enabled in datasources that support it (requires AAD authentication). Disabled by default, needs to be explicitly enabled. ### user_identity_fallback_credentials_enabled Specifies whether user identity authentication fallback credentials should be enabled in data sources. Enabling this allows data source creators to provide fallback credentials for backend-initiated requests, such as alerting, recorded queries, and so on. It is by default and needs to be explicitly disabled. It will not have any effect if user identity authentication is disabled. ### user_identity_token_url Override token URL for Azure Active Directory. By default is the same as token URL configured for AAD authentication settings. ### user_identity_client_id Override ADD application ID which would be used to exchange users token to an access token for the datasource. By default is the same as used in AAD authentication or can be set to another application (for OBO flow). ### user_identity_client_secret Override the AAD application client secret. By default is the same as used in AAD authentication or can be set to another application (for OBO flow). ### forward_settings_to_plugins Set plugins that will receive Azure settings via plugin context. By default, this will include all Grafana Labs owned Azure plugins or those that use Azure settings (Azure Monitor, Azure Data Explorer, Prometheus, MSSQL). ### azure_entra_password_credentials_enabled Specifies whether Entra password auth can be used for the MSSQL data source. This authentication is not recommended and consideration should be taken before enabling this. Disabled by default, needs to be explicitly enabled. ## [auth.jwt] Refer to [JWT authentication]() for more information. <hr /> ## [smtp] Email server settings. ### enabled Enable this to allow Grafana to send email. Default is `false`. ### host Default is `localhost:25`. Use port 465 for implicit TLS. ### user In case of SMTP auth, default is `empty`. ### password In case of SMTP auth, default is `empty`. If the password contains `#` or `;`, then you have to wrap it with triple quotes. Example: """#password;""" ### cert_file File path to a cert file, default is `empty`. ### key_file File path to a key file, default is `empty`. ### skip_verify Verify SSL for SMTP server, default is `false`. ### from_address Address used when sending out emails, default is `[email protected]`. ### from_name Name to be used when sending out emails, default is `Grafana`. ### ehlo_identity Name to be used as client identity for EHLO in SMTP dialog, default is `<instance_name>`. ### startTLS_policy Either "OpportunisticStartTLS", "MandatoryStartTLS", "NoStartTLS". Default is `empty`. ### enable_tracing Enable trace propagation in e-mail headers, using the `traceparent`, `tracestate` and (optionally) `baggage` fields. Default is `false`. To enable, you must first configure tracing in one of the `tracing.opentelemetry.*` sections. <hr> ## [smtp.static_headers] Enter key-value pairs on their own lines to be included as headers on outgoing emails. All keys must be in canonical mail header format. Examples: `Foo=bar`, `Foo-Header=bar`. 
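Putting the settings above together, here is a sketch of an `[smtp]` configuration that sends mail through an external relay, plus one static header; the host, credentials, and addresses are placeholders:

```ini
; Sketch: SMTP relay configuration. Host, credentials, and addresses are placeholders.
[smtp]
enabled = true
host = smtp.example.com:587
user = grafana@example.com
password = """#secret;"""
from_address = grafana@example.com
from_name = Grafana
startTLS_policy = MandatoryStartTLS

[smtp.static_headers]
Foo-Header = bar
```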
<hr> ## [emails] ### welcome_email_on_sign_up Default is `false`. ### templates_pattern Enter a comma separated list of template patterns. Default is `emails/*.html, emails/*.txt`. ### content_types Enter a comma-separated list of content types that should be included in the emails that are sent. List the content types according descending preference, e.g. `text/html, text/plain` for HTML as the most preferred. The order of the parts is significant as the mail clients will use the content type that is supported and most preferred by the sender. Supported content types are `text/html` and `text/plain`. Default is `text/html`. <hr> ## [log] Grafana logging options. ### mode Options are "console", "file", and "syslog". Default is "console" and "file". Use spaces to separate multiple modes, e.g. `console file`. ### level Options are "debug", "info", "warn", "error", and "critical". Default is `info`. ### filters Optional settings to set different levels for specific loggers. For example: `filters = sqlstore:debug` ### user_facing_default_error Use this configuration option to set the default error message shown to users. This message is displayed instead of sensitive backend errors, which should be obfuscated. The default message is `Please inspect the Grafana server log for details.`. <hr> ## [log.console] Only applicable when "console" is used in `[log]` mode. ### level Options are "debug", "info", "warn", "error", and "critical". Default is inherited from `[log]` level. ### format Log line format, valid options are text, console and json. Default is `console`. <hr> ## [log.file] Only applicable when "file" used in `[log]` mode. ### level Options are "debug", "info", "warn", "error", and "critical". Default is inherited from `[log]` level. ### format Log line format, valid options are text, console and json. Default is `text`. ### log_rotate Enable automated log rotation, valid options are `false` or `true`. Default is `true`. When enabled use the `max_lines`, `max_size_shift`, `daily_rotate` and `max_days` to configure the behavior of the log rotation. ### max_lines Maximum lines per file before rotating it. Default is `1000000`. ### max_size_shift Maximum size of file before rotating it. Default is `28`, which means `1 << 28`, `256MB`. ### daily_rotate Enable daily rotation of files, valid options are `false` or `true`. Default is `true`. ### max_days Maximum number of days to keep log files. Default is `7`. <hr> ## [log.syslog] Only applicable when "syslog" used in `[log]` mode. ### level Options are "debug", "info", "warn", "error", and "critical". Default is inherited from `[log]` level. ### format Log line format, valid options are text, console, and json. Default is `text`. ### network and address Syslog network type and address. This can be UDP, TCP, or UNIX. If left blank, then the default UNIX endpoints are used. ### facility Syslog facility. Valid options are user, daemon or local0 through local7. Default is empty. ### tag Syslog tag. By default, the process's `argv[0]` is used. <hr> ## [log.frontend] ### enabled Faro javascript agent is initialized. Default is `false`. ### custom_endpoint Custom HTTP endpoint to send events captured by the Faro agent to. Default, `/log-grafana-javascript-agent`, will log the events to stdout. ### log_endpoint_requests_per_second_limit Requests per second limit enforced per an extended period, for Grafana backend log ingestion endpoint, `/log-grafana-javascript-agent`. Default is `3`. 
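For example, a sketch of a `[log.frontend]` block that enables the Faro agent and raises the ingestion rate limit; the values are illustrative:

```ini
; Sketch: enable the Faro frontend agent and raise the log ingestion rate limit.
[log.frontend]
enabled = true
custom_endpoint = /log-grafana-javascript-agent
log_endpoint_requests_per_second_limit = 10
```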
### log_endpoint_burst_limit

Maximum requests accepted per short interval of time for the Grafana backend log ingestion endpoint, `/log-grafana-javascript-agent`. Default is `15`.

### instrumentations_all_enabled

Enables all Faro default instrumentation by using `getWebInstrumentations`. Overrides other instrumentation flags.

### instrumentations_errors_enabled

Turn on error instrumentation. Only affects Grafana Javascript Agent.

### instrumentations_console_enabled

Turn on console instrumentation. Only affects Grafana Javascript Agent.

### instrumentations_webvitals_enabled

Turn on webvitals instrumentation. Only affects Grafana Javascript Agent.

### instrumentations_tracing_enabled

Turns on tracing instrumentation. Only affects Grafana Javascript Agent.

### api_key

If `custom_endpoint` requires authentication, you can set the API key here. Only relevant for the Grafana Javascript Agent provider.

<hr>

## [quota]

Set quotas to `-1` to make unlimited.

### enabled

Enable usage quotas. Default is `false`.

### org_user

Limit the number of users allowed per organization. Default is 10.

### org_dashboard

Limit the number of dashboards allowed per organization. Default is 100.

### org_data_source

Limit the number of data sources allowed per organization. Default is 10.

### org_api_key

Limit the number of API keys that can be entered per organization. Default is 10.

### org_alert_rule

Limit the number of alert rules that can be entered per organization. Default is 100.

### user_org

Limit the number of organizations a user can create. Default is 10.

### global_user

Sets a global limit of users. Default is -1 (unlimited).

### global_org

Sets a global limit on the number of organizations that can be created. Default is -1 (unlimited).

### global_dashboard

Sets a global limit on the number of dashboards that can be created. Default is -1 (unlimited).

### global_api_key

Sets a global limit of API keys that can be entered. Default is -1 (unlimited).

### global_session

Sets a global limit on the number of users that can be logged in at one time. Default is -1 (unlimited).

### global_alert_rule

Sets a global limit on the number of alert rules that can be created. Default is -1 (unlimited).

### global_correlations

Sets a global limit on the number of correlations that can be created. Default is -1 (unlimited).

### alerting_rule_evaluation_results

Limit the number of query evaluation results per alert rule. If the condition query of an alert rule produces more results than this limit, the evaluation results in an error. Default is -1 (unlimited).

<hr>

## [unified_alerting]

For more information about Grafana alerts, refer to [Grafana Alerting]().

### enabled

Enable or disable Grafana Alerting. The default value is `true`.

Alerting rules migrated from dashboards and panels will include a link back via the `annotations`.

### disabled_orgs

Comma-separated list of organization IDs for which to disable Grafana 8 Unified Alerting.

### admin_config_poll_interval

Specify the frequency of polling for admin config changes. The default value is `60s`.

The interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m.

### alertmanager_config_poll_interval

Specify the frequency of polling for Alertmanager config changes. The default value is `60s`.

The interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m.

### ha_redis_address

The Redis server address that should be connected to.
For more information on Redis, refer to [Enable alerting high availability using Redis](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/alerting/set-up/configure-high-availability/#enable-alerting-high-availability-using-redis). ### ha_redis_username The username that should be used to authenticate with the Redis server. ### ha_redis_password The password that should be used to authenticate with the Redis server. ### ha_redis_db The Redis database. The default value is `0`. ### ha_redis_prefix A prefix that is used for every key or channel that is created on the Redis server as part of HA for alerting. ### ha_redis_peer_name The name of the cluster peer that will be used as an identifier. If none is provided, a random one will be generated. ### ha_redis_max_conns The maximum number of simultaneous Redis connections. ### ha_listen_address Listen IP address and port to receive unified alerting messages for other Grafana instances. The port is used for both TCP and UDP. It is assumed other Grafana instances are also running on the same port. The default value is `0.0.0.0:9094`. ### ha_advertise_address Explicit IP address and port to advertise other Grafana instances. The port is used for both TCP and UDP. ### ha_peers Comma-separated list of initial instances (in a format of host:port) that will form the HA cluster. Configuring this setting will enable High Availability mode for alerting. ### ha_peer_timeout Time to wait for an instance to send a notification via the Alertmanager. In HA, each Grafana instance will be assigned a position (e.g. 0, 1). We then multiply this position with the timeout to indicate how long should each instance wait before sending the notification to take into account replication lag. The default value is `15s`. The interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m. ### ha_label The label is an optional string to include on each packet and stream. It uniquely identifies the cluster and prevents cross-communication issues when sending gossip messages in an environment with multiple clusters. ### ha_gossip_interval The interval between sending gossip messages. By lowering this value (more frequent) gossip messages are propagated across cluster more quickly at the expense of increased bandwidth usage. The default value is `200ms`. The interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m. ### ha_reconnect_timeout Length of time to attempt to reconnect to a lost peer. When running Grafana in a Kubernetes cluster, set this duration to less than `15m`. The string is a possibly signed sequence of decimal numbers followed by a unit suffix (ms, s, m, h, d), such as `30s` or `1m`. ### ha_push_pull_interval The interval between gossip full state syncs. Setting this interval lower (more frequent) will increase convergence speeds across larger clusters at the expense of increased bandwidth usage. The default value is `60s`. The interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m. ### execute_alerts Enable or disable alerting rule execution. The default value is `true`. The alerting UI remains visible. ### evaluation_timeout Sets the alert evaluation timeout when fetching data from the data source. The default value is `30s`. The timeout string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m. 
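As a sketch of the high-availability clustering options described above, a three-instance setup might look like this (the host names and advertise address are placeholders for your own instances):

```ini
[unified_alerting]
ha_listen_address = 0.0.0.0:9094
ha_advertise_address = 10.0.0.5:9094
ha_peers = grafana-a:9094,grafana-b:9094,grafana-c:9094
ha_peer_timeout = 15s
```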
### max_attempts Sets a maximum number of times we'll attempt to evaluate an alert rule before giving up on that evaluation. The default value is `1`. ### min_interval Sets the minimum interval to enforce between rule evaluations. The default value is `10s` which equals the scheduler interval. Rules will be adjusted if they are less than this value or if they are not multiple of the scheduler interval (10s). Higher values can help with resource management as we'll schedule fewer evaluations over time. The interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m. > **Note.** This setting has precedence over each individual rule frequency. If a rule frequency is lower than this value, then this value is enforced. <hr> ## [unified_alerting.screenshots] For more information about screenshots, refer to [Images in notifications](). ### capture Enable screenshots in notifications. This option requires a remote HTTP image rendering service. Please see `[rendering]` for further configuration options. ### capture_timeout The timeout for capturing screenshots. If a screenshot cannot be captured within the timeout then the notification is sent without a screenshot. The maximum duration is 30 seconds. This timeout should be less than the minimum Interval of all Evaluation Groups to avoid back pressure on alert rule evaluation. ### max_concurrent_screenshots The maximum number of screenshots that can be taken at the same time. This option is different from `concurrent_render_request_limit` as `max_concurrent_screenshots` sets the number of concurrent screenshots that can be taken at the same time for all firing alerts where as concurrent_render_request_limit sets the total number of concurrent screenshots across all Grafana services. ### upload_external_image_storage Uploads screenshots to the local Grafana server or remote storage such as Azure, S3 and GCS. Please see `[external_image_storage]` for further configuration options. If this option is false then screenshots will be persisted to disk for up to `temp_data_lifetime`. <hr> ## [unified_alerting.reserved_labels] For more information about Grafana Reserved Labels, refer to [Labels in Grafana Alerting](/docs/grafana/next/alerting/fundamentals/annotation-label/how-to-use-labels/) ### disabled_labels Comma-separated list of reserved labels added by the Grafana Alerting engine that should be disabled. For example: `disabled_labels=grafana_folder` <hr> ## [unified_alerting.state_history.annotations] This section controls retention of annotations automatically created while evaluating alert rules when alerting state history backend is configured to be annotations (see setting [unified_alerting.state_history].backend) ### max_age Configures for how long alert annotations are stored. Default is 0, which keeps them forever. This setting should be expressed as an duration. Ex 6h (hours), 10d (days), 2w (weeks), 1M (month). ### max_annotations_to_keep Configures max number of alert annotations that Grafana stores. Default value is 0, which keeps all alert annotations. <hr> ## [annotations] ### cleanupjob_batchsize Configures the batch size for the annotation clean-up job. This setting is used for dashboard, API, and alert annotations. ### tags_length Enforces the maximum allowed length of the tags for any newly introduced annotations. It can be between 500 and 4096 (inclusive). Default value is 500. Setting it to a higher value would impact performance therefore is not recommended. 
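For example, a hypothetical retention policy for alert state history annotations, combined with the clean-up options above, might look like this (the values are illustrative only):

```ini
[unified_alerting.state_history.annotations]
max_age = 30d
max_annotations_to_keep = 500000

[annotations]
cleanupjob_batchsize = 100
tags_length = 500
```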
## [annotations.dashboard] Dashboard annotations means that annotations are associated with the dashboard they are created on. ### max_age Configures how long dashboard annotations are stored. Default is 0, which keeps them forever. This setting should be expressed as a duration. Examples: 6h (hours), 10d (days), 2w (weeks), 1M (month). ### max_annotations_to_keep Configures max number of dashboard annotations that Grafana stores. Default value is 0, which keeps all dashboard annotations. ## [annotations.api] API annotations means that the annotations have been created using the API without any association with a dashboard. ### max_age Configures how long Grafana stores API annotations. Default is 0, which keeps them forever. This setting should be expressed as a duration. Examples: 6h (hours), 10d (days), 2w (weeks), 1M (month). ### max_annotations_to_keep Configures max number of API annotations that Grafana keeps. Default value is 0, which keeps all API annotations. <hr> ## [explore] For more information about this feature, refer to [Explore](). ### enabled Enable or disable the Explore section. Default is `enabled`. ### defaultTimeOffset Set a default time offset from now on the time picker. Default is 1 hour. This setting should be expressed as a duration. Examples: 1h (hour), 1d (day), 1w (week), 1M (month). ## [help] Configures the help section. ### enabled Enable or disable the Help section. Default is `enabled`. ## [profile] Configures the Profile section. ### enabled Enable or disable the Profile section. Default is `enabled`. ## [news] ### news_feed_enabled Enables the news feed section. Default is `true` <hr> ## [query] ### concurrent_query_limit Set the number of queries that can be executed concurrently in a mixed data source panel. Default is the number of CPUs. ## [query_history] Configures Query history in Explore. ### enabled Enable or disable the Query history. Default is `enabled`. <hr> ## [short_links] Configures settings around the short link feature. ### expire_time Short links that are never accessed are considered expired or stale and will be deleted as cleanup. Set the expiration time in days. The default is `7` days. The maximum is `365` days, and setting above the maximum will have `365` set instead. Setting `0` means the short links will be cleaned up approximately every 10 minutes. A negative value such as `-1` will disable expiry. Short links without an expiration increase the size of the database and can’t be deleted. <hr> ## [metrics] For detailed instructions, refer to [Internal Grafana metrics](). ### enabled Enable metrics reporting. defaults true. Available via HTTP API `<URL>/metrics`. ### interval_seconds Flush/write interval when sending metrics to external TSDB. Defaults to `10`. ### disable_total_stats If set to `true`, then total stats generation (`stat_totals_*` metrics) is disabled. Default is `false`. ### total_stats_collector_interval_seconds Sets the total stats collector interval. The default is 1800 seconds (30 minutes). ### basic_auth_username and basic_auth_password If both are set, then basic authentication is required to access the metrics endpoint. <hr> ## [metrics.environment_info] Adds dimensions to the `grafana_environment_info` metric, which can expose more information about the Grafana instance. ``` ; exampleLabel1 = exampleValue1 ; exampleLabel2 = exampleValue2 ``` ## [metrics.graphite] Use these options if you want to send internal Grafana metrics to Graphite. ### address Enable by setting the address. 
Format is `<Hostname or ip>`:port. ### prefix Graphite metric prefix. Defaults to `prod.grafana.%(instance_name)s.` <hr> ## [grafana_net] Refer to [grafana_com] config as that is the new and preferred config name. The grafana_net config is still accepted and parsed to grafana_com config. <hr> ## [grafana_com] ### url Default is https://grafana.com. The default authentication identity provider for Grafana Cloud. <hr> ## [tracing.jaeger] [Deprecated - use tracing.opentelemetry.jaeger or tracing.opentelemetry.otlp instead] Configure Grafana's Jaeger client for distributed tracing. You can also use the standard `JAEGER_*` environment variables to configure Jaeger. See the table at the end of https://www.jaegertracing.io/docs/1.16/client-features/ for the full list. Environment variables will override any settings provided here. ### address The host:port destination for reporting spans. (ex: `localhost:6831`) Can be set with the environment variables `JAEGER_AGENT_HOST` and `JAEGER_AGENT_PORT`. ### always_included_tag Comma-separated list of tags to include in all new spans, such as `tag1:value1,tag2:value2`. Can be set with the environment variable `JAEGER_TAGS` (use `=` instead of `:` with the environment variable). ### sampler_type Default value is `const`. Specifies the type of sampler: `const`, `probabilistic`, `ratelimiting`, or `remote`. Refer to https://www.jaegertracing.io/docs/1.16/sampling/#client-sampling-configuration for details on the different tracing types. Can be set with the environment variable `JAEGER_SAMPLER_TYPE`. _To override this setting, enter `sampler_type` in the `tracing.opentelemetry` section._ ### sampler_param Default value is `1`. This is the sampler configuration parameter. Depending on the value of `sampler_type`, it can be `0`, `1`, or a decimal value in between. - For `const` sampler, `0` or `1` for always `false`/`true` respectively - For `probabilistic` sampler, a probability between `0` and `1.0` - For `rateLimiting` sampler, the number of spans per second - For `remote` sampler, param is the same as for `probabilistic` and indicates the initial sampling rate before the actual one is received from the mothership May be set with the environment variable `JAEGER_SAMPLER_PARAM`. _Setting `sampler_param` in the `tracing.opentelemetry` section will override this setting._ ### sampling_server_url sampling_server_url is the URL of a sampling manager providing a sampling strategy. _Setting `sampling_server_url` in the `tracing.opentelemetry` section will override this setting._ ### zipkin_propagation Default value is `false`. Controls whether or not to use Zipkin's span propagation format (with `x-b3-` HTTP headers). By default, Jaeger's format is used. Can be set with the environment variable and value `JAEGER_PROPAGATION=b3`. ### disable_shared_zipkin_spans Default value is `false`. Setting this to `true` turns off shared RPC spans. Leaving this available is the most common setting when using Zipkin elsewhere in your infrastructure. <hr> ## [tracing.opentelemetry] Configure general parameters shared between OpenTelemetry providers. ### custom_attributes Comma-separated list of attributes to include in all new spans, such as `key1:value1,key2:value2`. Can be set or overridden with the environment variable `OTEL_RESOURCE_ATTRIBUTES` (use `=` instead of `:` with the environment variable). The service name can be set or overridden using attributes or with the environment variable `OTEL_SERVICE_NAME`. ### sampler_type Default value is `const`. 
Specifies the type of sampler: `const`, `probabilistic`, `ratelimiting`, or `remote`. ### sampler_param Default value is `1`. Depending on the value of `sampler_type`, the sampler configuration parameter can be `0`, `1`, or any decimal value between `0` and `1`. - For the `const` sampler, use `0` to never sample or `1` to always sample - For the `probabilistic` sampler, you can use a decimal value between `0.0` and `1.0` - For the `rateLimiting` sampler, enter the number of spans per second - For the `remote` sampler, use a decimal value between `0.0` and `1.0` to specify the initial sampling rate used before the first update is received from the sampling server ### sampling_server_url When `sampler_type` is `remote`, this specifies the URL of the sampling server. This can be used by all tracing providers. Use a sampling server that supports the Jaeger remote sampling API, such as jaeger-agent, jaeger-collector, opentelemetry-collector-contrib, or [Grafana Alloy](https://grafana.com/oss/alloy-opentelemetry-collector/). <hr> ## [tracing.opentelemetry.jaeger] Configure Grafana's Jaeger client for distributed tracing. ### address The host:port destination for reporting spans. (ex: `localhost:14268/api/traces`) ### propagation The propagation specifies the text map propagation format. The values `jaeger` and `w3c` are supported. Add a comma (`,`) between values to specify multiple formats (for example, `"jaeger,w3c"`). The default value is `w3c`. <hr> ## [tracing.opentelemetry.otlp] Configure Grafana's otlp client for distributed tracing. ### address The host:port destination for reporting spans. (ex: `localhost:4317`) ### propagation The propagation specifies the text map propagation format. The values `jaeger` and `w3c` are supported. Add a comma (`,`) between values to specify multiple formats (for example, `"jaeger,w3c"`). The default value is `w3c`. <hr> ## [external_image_storage] These options control how images should be made public so they can be shared on services like Slack or email message. ### provider Options are s3, webdav, gcs, azure_blob, local). If left empty, then Grafana ignores the upload action. <hr> ## [external_image_storage.s3] ### endpoint Optional endpoint URL (hostname or fully qualified URI) to override the default generated S3 endpoint. If you want to keep the default, just leave this empty. You must still provide a `region` value if you specify an endpoint. ### path_style_access Set this to true to force path-style addressing in S3 requests, i.e., `http://s3.amazonaws.com/BUCKET/KEY`, instead of the default, which is virtual hosted bucket addressing when possible (`http://BUCKET.s3.amazonaws.com/KEY`). This option is specific to the Amazon S3 service. ### bucket_url (for backward compatibility, only works when no bucket or region are configured) Bucket URL for S3. AWS region can be specified within URL or defaults to 'us-east-1', e.g. - http://grafana.s3.amazonaws.com/ - https://grafana.s3-ap-southeast-2.amazonaws.com/ ### bucket Bucket name for S3. e.g. grafana.snapshot. ### region Region name for S3. e.g. 'us-east-1', 'cn-north-1', etc. ### path Optional extra path inside bucket, useful to apply expiration policies. ### access_key Access key, e.g. AAAAAAAAAAAAAAAAAAAA. Access key requires permissions to the S3 bucket for the 's3:PutObject' and 's3:PutObjectAcl' actions. ### secret_key Secret key, e.g. AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA. <hr> ## [external_image_storage.webdav] ### url URL where Grafana sends PUT request with images. 
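For instance, to publish rendered images through a WebDAV server, you might combine the `provider` option above with the WebDAV settings in this section (the URL is a placeholder; the remaining WebDAV options are described below):

```ini
[external_image_storage]
provider = webdav

[external_image_storage.webdav]
url = https://files.example.com/grafana-images/
```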
### username

Basic auth username.

### password

Basic auth password.

### public_url

Optional URL to send to users in notifications. If the string contains the sequence `${file}`, it is replaced with the uploaded filename. Otherwise, the file name is appended to the path part of the URL, leaving any query string unchanged.

<hr>

## [external_image_storage.gcs]

### key_file

Optional path to JSON key file associated with a Google service account to authenticate and authorize. If no value is provided it tries to use the [application default credentials](https://cloud.google.com/docs/authentication/production#finding_credentials_automatically). Service Account keys can be created and downloaded from https://console.developers.google.com/permissions/serviceaccounts.

Service Account should have "Storage Object Writer" role. The access control model of the bucket needs to be "Set object-level and bucket-level permissions". Grafana itself will make the images publicly readable when signed URLs are not enabled.

### bucket

Bucket name on Google Cloud Storage.

### path

Optional extra path inside bucket.

### enable_signed_urls

If set to true, Grafana creates a [signed URL](https://cloud.google.com/storage/docs/access-control/signed-urls) for the image uploaded to Google Cloud Storage.

### signed_url_expiration

Sets the signed URL expiration, which defaults to seven days.

## [external_image_storage.azure_blob]

### account_name

Storage account name.

### account_key

Storage account key.

### container_name

Container name where "Blob" images with random names are stored. Creating the blob container beforehand is required. Only public containers are supported.

### sas_token_expiration_days

Number of days for SAS token validity. If specified, the SAS token is attached to the image URL. This allows storing images in private containers.

<hr>

## [external_image_storage.local]

This option does not require any configuration.

<hr>

## [rendering]

Options to configure a remote HTTP image rendering service, e.g. using https://github.com/grafana/grafana-image-renderer.

### renderer_token

An auth token will be sent to and verified by the renderer. The renderer will deny any request without an auth token matching the one configured on the renderer.

### server_url

URL to a remote HTTP image renderer service, e.g. http://localhost:8081/render, will enable Grafana to render panels and dashboards to PNG-images using HTTP requests to an external service.

### callback_url

If the remote HTTP image renderer service runs on a different server than the Grafana server you may have to configure this to a URL where Grafana is reachable, e.g. http://grafana.domain/.

### concurrent_render_request_limit

Concurrent render request limit affects when the /render HTTP endpoint is used. Rendering many images at the same time can overload the server, which this setting can help protect against by only allowing a certain number of concurrent requests. Default is `30`.

### default_image_width

Configures the width of the rendered image. The default width is `1000`.

### default_image_height

Configures the height of the rendered image. The default height is `500`.

### default_image_scale

Configures the scale of the rendered image. The default scale is `1`.

## [panels]

### enable_alpha

Set to `true` if you want to test alpha panels that are not yet ready for general usage. Default is `false`.

### disable_sanitize_html

This configuration is not available in Grafana Cloud instances.

If set to true Grafana will allow script tags in text panels.
Not recommended as it enables XSS vulnerabilities. Default is false.

## [plugins]

### enable_alpha

Set to `true` if you want to test alpha plugins that are not yet ready for general usage. Default is `false`.

### allow_loading_unsigned_plugins

Enter a comma-separated list of plugin identifiers to identify plugins to load even if they are unsigned. Plugins with modified signatures are never loaded.

We do _not_ recommend using this option. For more information, refer to [Plugin signatures]().

### plugin_admin_enabled

Available to Grafana administrators only, enables installing / uninstalling / updating plugins directly from the Grafana UI. Set to `true` by default. Setting it to `false` will hide the install / uninstall / update controls. For more information, refer to [Plugin catalog]().

### plugin_admin_external_manage_enabled

Set to `true` if you want to enable external management of plugins. Default is `false`. This is only applicable to Grafana Cloud users.

### plugin_catalog_url

Custom install/learn more URL for enterprise plugins. Defaults to https://grafana.com/grafana/plugins/.

### plugin_catalog_hidden_plugins

Enter a comma-separated list of plugin identifiers to hide in the plugin catalog.

### public_key_retrieval_disabled

Disable download of the public key for verifying plugin signature. The default is `false`. If disabled, it will use the hardcoded public key.

### public_key_retrieval_on_startup

Force download of the public key for verifying plugin signature on startup. The default is `false`. If disabled, the public key will be retrieved every 10 days. Requires `public_key_retrieval_disabled` to be false to have any effect.

### disable_plugins

Enter a comma-separated list of plugin identifiers to avoid loading (including core plugins). These plugins will be hidden in the catalog.

### preinstall

Enter a comma-separated list of plugin identifiers to preinstall. These plugins will be installed on startup, using the Grafana catalog as the source. Preinstalled plugins cannot be uninstalled from the Grafana user interface; they need to be removed from this list first.

To pin plugins to a specific version, use the format `plugin_id@version`, for example, `grafana-piechart-panel@1.6.0`. If no version is specified, the latest version is installed. _The plugin is automatically updated_ to the latest version when a new version is available in the Grafana plugin catalog on startup (except for new major versions).

To use a custom URL to download a plugin, use the format `plugin_id@version@url`, for example, `grafana-piechart-panel@1.6.0@https://example.com/grafana-piechart-panel-1.6.0.zip`.

By default, Grafana preinstalls some suggested plugins. Check the default configuration file for the list of plugins.

### preinstall_async

By default, plugins are preinstalled asynchronously, as a background process. This means that Grafana will start up faster, but the plugins may not be available immediately. If you need a plugin to be installed for provisioning, set this option to `false`. This causes Grafana to wait for the plugins to be installed before starting up (and fail if a plugin can't be installed).

### preinstall_disabled

This option disables all preinstalled plugins. The default is `false`. To disable a specific plugin from being preinstalled, use the `disable_plugins` option.

<hr>

## [live]

### max_connections

The `max_connections` option specifies the maximum number of connections to the Grafana Live WebSocket endpoint per Grafana server instance. Default is `100`.
Refer to [Grafana Live configuration documentation]() if you specify a number higher than default since this can require some operating system and infrastructure tuning.

0 disables Grafana Live, -1 means unlimited connections.

### allowed_origins

The `allowed_origins` option is a comma-separated list of additional origins (`Origin` header of HTTP Upgrade request during WebSocket connection establishment) that will be accepted by Grafana Live.

If not set (default), then the origin is matched over [root_url]() which should be sufficient for most scenarios.

Origin patterns support wildcard symbol "\*".

For example:

```ini
[live]
allowed_origins = "https://*.example.com"
```

### ha_engine

**Experimental**

The high availability (HA) engine name for Grafana Live. By default, it's not set. The only possible value is "redis".

For more information, refer to the [Configure Grafana Live HA setup]().

### ha_engine_address

**Experimental**

Address string of the selected high availability (HA) Live engine. For Redis, it's a `host:port` string. Example:

```ini
[live]
ha_engine = redis
ha_engine_address = 127.0.0.1:6379
```

<hr>

## [plugin.plugin_id]

This section can be used to configure plugin-specific settings. Replace the `plugin_id` attribute with the plugin ID present in `plugin.json`.

Properties described in this section are available for all plugins, but you must set them individually for each plugin.

### tracing

[OpenTelemetry must be configured as well](#tracingopentelemetry).

If `true`, propagate the tracing context to the plugin backend and enable tracing (if the backend supports it).

### as_external

Load an external version of a core plugin if it has been installed.

Experimental. Requires the feature toggle `externalCorePlugins` to be enabled.

<hr>

## [plugin.grafana-image-renderer]

For more information, refer to [Image rendering]().

### rendering_timezone

Instruct headless browser instance to use a default timezone when not provided by Grafana, e.g. when rendering panel image of alert. See [ICUs metaZones.txt](https://cs.chromium.org/chromium/src/third_party/icu/source/data/misc/metaZones.txt) for a list of supported timezone IDs. Falls back to the TZ environment variable if not set.

### rendering_language

Instruct headless browser instance to use a default language when not provided by Grafana, e.g. when rendering panel image of alert. Refer to the HTTP header Accept-Language to understand how to format this value, e.g. 'fr-CH, fr;q=0.9, en;q=0.8, de;q=0.7, \*;q=0.5'.

### rendering_viewport_device_scale_factor

Instruct headless browser instance to use a default device scale factor when not provided by Grafana, e.g. when rendering panel image of alert. Default is `1`. Using a higher value will produce more detailed images (higher DPI), but requires more disk space to store an image.

### rendering_ignore_https_errors

Instruct headless browser instance whether to ignore HTTPS errors during navigation. Per default HTTPS errors are not ignored. Due to the security risk, we do not recommend that you ignore HTTPS errors.

### rendering_verbose_logging

Instruct headless browser instance whether to capture and log verbose information when rendering an image. Default is `false` and will only capture and log error messages. When enabled, debug messages are captured and logged as well.

For the verbose information to be included in the Grafana server log you have to adjust the rendering log level to debug, configure [log].filter = rendering:debug.
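For example, to surface the renderer's verbose output in the Grafana server log, you could combine this option with the log filter mentioned above (a troubleshooting sketch, not a recommended production setting):

```ini
[log]
filters = rendering:debug

[plugin.grafana-image-renderer]
rendering_verbose_logging = true
```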
### rendering_dumpio Instruct headless browser instance whether to output its debug and error messages into running process of remote rendering service. Default is `false`. It can be useful to set this to `true` when troubleshooting. ### rendering_timing_metrics > **Note:** Available from grafana-image-renderer v3.9.0+ Instruct a headless browser instance on whether to record metrics for the duration of every rendering step. Default is `false`. Setting this to `true` when optimizing the rendering mode settings to improve the plugin performance or when troubleshooting can be useful. ### rendering_args Additional arguments to pass to the headless browser instance. Defaults are `--no-sandbox,--disable-gpu`. The list of Chromium flags can be found at (https://peter.sh/experiments/chromium-command-line-switches/). Separate multiple arguments with commas. ### rendering_chrome_bin You can configure the plugin to use a different browser binary instead of the pre-packaged version of Chromium. Please note that this is _not_ recommended. You might encounter problems if the installed version of Chrome/Chromium is not compatible with the plugin. ### rendering_mode Instruct how headless browser instances are created. Default is `default` and will create a new browser instance on each request. Mode `clustered` will make sure that only a maximum of browsers/incognito pages can execute concurrently. Mode `reusable` will have one browser instance and will create a new incognito page on each request. ### rendering_clustering_mode When rendering_mode = clustered, you can instruct how many browsers or incognito pages can execute concurrently. Default is `browser` and will cluster using browser instances. Mode `context` will cluster using incognito pages. ### rendering_clustering_max_concurrency When rendering_mode = clustered, you can define the maximum number of browser instances/incognito pages that can execute concurrently. Default is `5`. ### rendering_clustering_timeout Available in grafana-image-renderer v3.3.0 and later versions. When rendering_mode = clustered, you can specify the duration a rendering request can take before it will time out. Default is `30` seconds. ### rendering_viewport_max_width Limit the maximum viewport width that can be requested. ### rendering_viewport_max_height Limit the maximum viewport height that can be requested. ### rendering_viewport_max_device_scale_factor Limit the maximum viewport device scale factor that can be requested. ### grpc_host Change the listening host of the gRPC server. Default host is `127.0.0.1`. ### grpc_port Change the listening port of the gRPC server. Default port is `0` and will automatically assign a port not in use. <hr> ## [enterprise] For more information about Grafana Enterprise, refer to [Grafana Enterprise](). <hr> ## [feature_toggles] ### enable Keys of features to enable, separated by space. ### FEATURE_TOGGLE_NAME = false Some feature toggles for stable features are on by default. Use this setting to disable an on-by-default feature toggle with the name FEATURE_TOGGLE_NAME, for example, `exploreMixedDatasource = false`. <hr> ## [feature_management] The options in this section configure the experimental Feature Toggle Admin Page feature, which is enabled using the `featureToggleAdminPage` feature toggle. Grafana Labs offers support on a best-effort basis, and breaking changes might occur prior to the feature being made generally available. Please see [Configure feature toggles]() for more information. 
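As a brief sketch of the `[feature_toggles]` options described above, enabling the admin page toggle and turning off an on-by-default toggle could look like this (the toggle names are examples only):

```ini
[feature_toggles]
enable = featureToggleAdminPage
exploreMixedDatasource = false
```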
### allow_editing Lets you switch the feature toggle state in the feature management page. The default is `false`. ### update_webhook Set the URL of the controller that manages the feature toggle updates. If not set, feature toggles in the feature management page will be read-only. The API for feature toggle updates has not been defined yet. ### hidden_toggles Hide additional specific feature toggles from the feature management page. By default, feature toggles in the `unknown`, `experimental`, and `private preview` stages are hidden from the UI. Use this option to hide toggles in the `public preview`, `general availability`, and `deprecated` stages. ### read_only_toggles Use to disable updates for additional specific feature toggles in the feature management page. By default, feature toggles can only be updated if they are in the `general availability` and `deprecated`stages. Use this option to disable updates for toggles in those stages. <hr> ## [date_formats] This section controls system-wide defaults for date formats used in time ranges, graphs, and date input boxes. The format patterns use [Moment.js](https://momentjs.com/docs/#/displaying/) formatting tokens. ### full_date Full date format used by time range picker and in other places where a full date is rendered. ### intervals These intervals formats are used in the graph to show only a partial date or time. For example, if there are only minutes between Y-axis tick labels then the `interval_minute` format is used. Defaults ``` interval_second = HH:mm:ss interval_minute = HH:mm interval_hour = MM/DD HH:mm interval_day = MM/DD interval_month = YYYY-MM interval_year = YYYY ``` ### use_browser_locale Set this to `true` to have date formats automatically derived from your browser location. Defaults to `false`. This is an experimental feature. ### default_timezone Used as the default time zone for user preferences. Can be either `browser` for the browser local time zone or a time zone name from the IANA Time Zone database, such as `UTC` or `Europe/Amsterdam`. ### default_week_start Set the default start of the week, valid values are: `saturday`, `sunday`, `monday` or `browser` to use the browser locale to define the first day of the week. Default is `browser`. ## [expressions] ### enabled Set this to `false` to disable expressions and hide them in the Grafana UI. Default is `true`. ## [geomap] This section controls the defaults settings for Geomap Plugin. ### default_baselayer_config The json config used to define the default base map. Four base map options to choose from are `carto`, `esriXYZTiles`, `xyzTiles`, `standard`. For example, to set cartoDB light as the default base layer: ```ini default_baselayer_config = `{ "type": "xyz", "config": { "attribution": "Open street map", "url": "https://tile.openstreetmap.org/{z}/{x}/{y}.png" } }` ``` ### enable_custom_baselayers Set this to `false` to disable loading other custom base maps and hide them in the Grafana UI. Default is `true`. ## [rbac] Refer to [Role-based access control]() for more information. ## [navigation.app_sections] Move an app plugin (referenced by its id), including all its pages, to a specific navigation section. Format: `<pluginId> = <sectionId> <sortWeight>` ## [navigation.app_standalone_pages] Move an individual app plugin page (referenced by its `path` field) to a specific navigation section. 
Format: `<pageUrl> = <sectionId> <sortWeight>` ## [public_dashboards] This section configures the [shared dashboards](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/dashboards/share-dashboards-panels/shared-dashboards/) feature. ### enabled Set this to `false` to disable the shared dashboards feature. This prevents users from creating new shared dashboards and disables existing ones.
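For example, a minimal sketch that turns the shared dashboards feature off entirely:

```ini
[public_dashboards]
enabled = false
```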
aliases administration configuration installation configuration description Configuration documentation labels products enterprise oss title Configure Grafana weight 200 Configure Grafana Grafana has default and custom configuration files You can customize your Grafana instance by modifying the custom configuration file or by using environment variables To see the list of settings for a Grafana instance refer to View server settings After you add custom options uncomment remove comments in the ini files the relevant sections of the configuration file Restart Grafana for your changes to take effect Configuration file location The default settings for a Grafana instance are stored in the WORKING DIR conf defaults ini file Do not change this file Depending on your OS your custom configuration file is either the WORKING DIR conf custom ini file or the usr local etc grafana grafana ini file The custom configuration file path can be overridden using the config parameter Linux If you installed Grafana using the deb or rpm packages then your configuration file is located at etc grafana grafana ini and a separate custom ini is not used This path is specified in the Grafana init d script using config file parameter Docker Refer to Configure a Grafana Docker image for information about environmental variables persistent storage and building custom Docker images Windows On Windows the sample ini file is located in the same directory as defaults ini file It contains all the settings commented out Copy sample ini and name it custom ini macOS By default the configuration file is located at opt homebrew etc grafana grafana ini or usr local etc grafana grafana ini For a Grafana instance installed using Homebrew edit the grafana ini file directly Otherwise add a configuration file named custom ini to the conf folder to override the settings defined in conf defaults ini Remove comments in the ini files Grafana uses semicolons the char to comment out lines in a ini file You must uncomment each line in the custom ini or the grafana ini file that you are modify by removing from the beginning of that line Otherwise your changes will be ignored For example The HTTP port to use http port 3000 Override configuration with environment variables Do not use environment variables to add new configuration settings Instead use environmental variables to override existing options To override an option bash GF SectionName KeyName Where the section name is the text within the brackets Everything should be uppercase and should be replaced by For example if you have these configuration settings bash default section instance name HOSTNAME security admin user admin auth google client secret 0ldS3cretKey plugin grafana image renderer rendering ignore https errors true feature toggles enable newNavigation You can override variables on Linux machines with bash export GF DEFAULT INSTANCE NAME my instance export GF SECURITY ADMIN USER owner export GF AUTH GOOGLE CLIENT SECRET newS3cretKey export GF PLUGIN GRAFANA IMAGE RENDERER RENDERING IGNORE HTTPS ERRORS true export GF FEATURE TOGGLES ENABLE newNavigation Variable expansion If any of your options contains the expression provider argument or environment variable then they will be processed by Grafana s variable expander The expander runs the provider with the provided argument to get the final value of the option There are three providers env file and vault Env provider The env provider can be used to expand an environment variable If you set an option to env PORT the PORT 
environment variable will be used in its place For environment variables you can also use the short hand syntax PORT Grafana s log directory would be set to the grafana directory in the directory behind the LOGDIR environment variable in the following example ini paths logs env LOGDIR grafana File provider file reads a file from the filesystem It trims whitespace from the beginning and the end of files The database password in the following example would be replaced by the content of the etc secrets gf sql password file ini database password file etc secrets gf sql password Vault provider The vault provider allows you to manage your secrets with Hashicorp Vault https www hashicorp com products vault Vault provider is only available in Grafana Enterprise v7 1 For more information refer to Vault integration in Grafana Enterprise hr app mode Options are production and development Default is production Do not change this option unless you are working on Grafana development instance name Set the name of the grafana server instance Used in logging internal metrics and clustering info Defaults to HOSTNAME which will be replaced with environment variable HOSTNAME if that is empty or does not exist Grafana will try to use system calls to get the machine name hr paths data Path to where Grafana stores the sqlite3 database if used file based sessions if used and other data This path is usually specified via command line in the init d script or the systemd service file macOS The default SQLite database is located at usr local var lib grafana temp data lifetime How long temporary images in data directory should be kept Defaults to 24h Supported modifiers h hours m minutes for example 168h 30m 10h30m Use 0 to never clean up temporary files logs Path to where Grafana stores logs This path is usually specified via command line in the init d script or the systemd service file You can override it in the configuration file or in the default environment variable file However please note that by overriding this the default log path will be used temporarily until Grafana has fully initialized started Override log path using the command line argument cfg default paths logs bash grafana server config custom config ini homepath custom homepath cfg default paths logs custom path macOS By default the log file should be located at usr local var log grafana grafana log plugins Directory where Grafana automatically scans and looks for plugins For information about manually or automatically installing plugins refer to Install Grafana plugins macOS By default the Mac plugin location is usr local var lib grafana plugins provisioning Folder that contains provisioning config files that Grafana will apply on startup Dashboards will be reloaded when the json files changes hr server protocol http https h2 or socket min tls version The TLS Handshake requires a minimum TLS version The available options are TLS1 2 and TLS1 3 If you do not specify a version the system uses TLS1 2 http addr The host for the server to listen on If your machine has more than one network interface you can use this setting to expose the Grafana service on only one network interface and not have it available on others such as the loopback interface An empty value is equivalent to setting the value to 0 0 0 0 which means the Grafana service binds to all interfaces In environments where network address translation NAT is used ensure you use the network interface address and not a final public address otherwise you might see errors such as bind cannot 
assign requested address in the logs http port The port to bind to defaults to 3000 To use port 80 you need to either give the Grafana binary permission for example bash sudo setcap cap net bind service ep usr sbin grafana server Or redirect port 80 to the Grafana port using bash sudo iptables t nat A PREROUTING p tcp dport 80 j REDIRECT to port 3000 Another way is to put a web server like Nginx or Apache in front of Grafana and have them proxy requests to Grafana domain This setting is only used in as a part of the root url setting see below Important if you use GitHub or Google OAuth enforce domain Redirect to correct domain if the host header does not match the domain Prevents DNS rebinding attacks Default is false root url This is the full URL used to access Grafana from a web browser This is important if you use Google or GitHub OAuth authentication for the callback URL to be correct This setting is also important if you have a reverse proxy in front of Grafana that exposes it through a subpath In that case add the subpath to the end of this URL setting serve from sub path Serve Grafana from subpath specified in root url setting By default it is set to false for compatibility reasons By enabling this setting and using a subpath in root url above e g root url http localhost 3000 grafana Grafana is accessible on http localhost 3000 grafana If accessed without subpath Grafana will redirect to an URL with the subpath router logging Set to true for Grafana to log all HTTP requests not just errors These are logged as Info level events to the Grafana log static root path The path to the directory where the front end files HTML JS and CSS files Defaults to public which is why the Grafana binary needs to be executed with working directory set to the installation path enable gzip Set this option to true to enable HTTP compression this can improve transfer speed and bandwidth utilization It is recommended that most users set it to true By default it is set to false for compatibility reasons cert file Path to the certificate file if protocol is set to https or h2 cert key Path to the certificate key file if protocol is set to https or h2 certs watch interval Controls whether cert key and cert file are periodically watched for changes Disabled by default When enabled cert key and cert file are watched for changes If there is change the new certificates are loaded automatically After the new certificates are loaded connections with old certificates will not work You must reload the connections to the old certs for them to work socket gid GID where the socket should be set when protocol socket Make sure that the target group is in the group of Grafana process and that Grafana process is the file owner before you change this setting It is recommended to set the gid as http server user gid Not set when the value is 1 socket mode Mode where the socket should be set when protocol socket Make sure that Grafana process is the file owner before you change this setting socket Path where the socket should be created when protocol socket Make sure Grafana has appropriate permissions for that path before you change this setting cdn url Specify a full HTTP URL address to the root of your Grafana CDN assets Grafana will add edition and version paths For example given a cdn url like https cdn myserver com grafana will try to load a javascript file from http cdn myserver com grafana oss 7 4 0 public build app hash js read timeout Sets the maximum time using a duration format 5s 5m 5ms before timing out read of an 
incoming request and closing idle connections 0 means there is no timeout for reading the request hr server custom response headers This setting enables you to specify additional headers that the server adds to HTTP S responses exampleHeader1 exampleValue1 exampleHeader2 exampleValue2 hr database Grafana needs a database to store users and dashboards and other things By default it is configured to use sqlite3 https www sqlite org index html which is an embedded database included in the main Grafana binary type Either mysql postgres or sqlite3 it s your choice host Only applicable to MySQL or Postgres Includes IP or hostname and port or in case of Unix sockets the path to it For example for MySQL running on the same host as Grafana host 127 0 0 1 3306 or with Unix sockets host var run mysqld mysqld sock name The name of the Grafana database Leave it set to grafana or some other name user The database user not applicable for sqlite3 password The database user s password not applicable for sqlite3 If the password contains or you have to wrap it with triple quotes For example password url Use either URL or the other fields below to configure the database Example mysql user secret host port database max idle conn The maximum number of connections in the idle connection pool max open conn The maximum number of open connections to the database For MYSQL configure this setting on both Grafana and the database For more information refer to sysvar max connections https dev mysql com doc refman 8 0 en server system variables html sysvar max connections conn max lifetime Sets the maximum amount of time a connection may be reused The default is 14400 which means 14400 seconds or 4 hours For MySQL this setting should be shorter than the wait timeout https dev mysql com doc refman 5 7 en server system variables html sysvar wait timeout variable migration locking Set to false to disable database locking during the migrations Default is true locking attempt timeout sec For mysql and postgres only Specify the time in seconds to wait before failing to lock the database for the migrations Default is 0 log queries Set to true to log the sql calls and execution times ssl mode For Postgres use use any valid libpq sslmode https www postgresql org docs current libpq ssl html LIBPQ SSL SSLMODE STATEMENTS e g disable require verify full etc For MySQL use either true false or skip verify ssl sni For Postgres set to 0 to disable Server Name Indication https www postgresql org docs current libpq connect html LIBPQ CONNECT SSLSNI This is enabled by default on SSL enabled connections isolation level Only the MySQL driver supports isolation levels in Grafana In case the value is empty the driver s default isolation level is applied Available options are READ UNCOMMITTED READ COMMITTED REPEATABLE READ or SERIALIZABLE ca cert path The path to the CA certificate to use On many Linux systems certs can be found in etc ssl certs client key path The path to the client key Only if server requires client authentication client cert path The path to the client cert Only if server requires client authentication server cert name The common name field of the certificate used by the mysql or postgres server Not necessary if ssl mode is set to skip verify path Only applicable for sqlite3 database The file path where the database will be stored cache mode For sqlite3 only Shared cache https www sqlite org sharedcache html setting used for connecting to the database private shared Defaults to private wal For sqlite3 only Setting to enable 
disable Write Ahead Logging https sqlite org wal html The default value is false disabled query retries This setting applies to sqlite only and controls the number of times the system retries a query when the database is locked The default value is 0 disabled transaction retries This setting applies to sqlite only and controls the number of times the system retries a transaction when the database is locked The default value is 5 instrument queries Set to true to add metrics and tracing for database queries The default value is false hr remote cache Caches authentication details and session information in the configured database Redis or Memcached This setting does not configure Query Caching in Grafana Enterprise type Either redis memcached or database Defaults to database connstr The remote cache connection string The format depends on the type of the remote cache Options are database redis and memcache database Leave empty when using database since it will use the primary database redis Example connstr addr 127 0 0 1 6379 pool size 100 db 0 ssl false addr is the host port of the redis server pool size optional is the number of underlying connections that can be made to redis db optional is the number identifier of the redis database you want to use ssl optional is if SSL should be used to connect to redis server The value may be true false or insecure Setting the value to insecure skips verification of the certificate chain and hostname when making the connection memcache Example connstr 127 0 0 1 11211 hr dataproxy logging This enables data proxy logging default is false timeout How long the data proxy should wait before timing out Default is 30 seconds This setting also applies to core backend HTTP data sources where query requests use an HTTP client with timeout set keep alive seconds Interval between keep alive probes Default is 30 seconds For more details check the Dialer KeepAlive https golang org pkg net Dialer KeepAlive documentation tls handshake timeout seconds The length of time that Grafana will wait for a successful TLS handshake with the datasource Default is 10 seconds For more details check the Transport TLSHandshakeTimeout https golang org pkg net http Transport TLSHandshakeTimeout documentation expect continue timeout seconds The length of time that Grafana will wait for a datasource s first response headers after fully writing the request headers if the request has an Expect 100 continue header A value of 0 will result in the body being sent immediately Default is 1 second For more details check the Transport ExpectContinueTimeout https golang org pkg net http Transport ExpectContinueTimeout documentation max conns per host Optionally limits the total number of connections per host including connections in the dialing active and idle states On limit violation dials are blocked A value of 0 means that there are no limits Default is 0 For more details check the Transport MaxConnsPerHost https golang org pkg net http Transport MaxConnsPerHost documentation max idle connections The maximum number of idle connections that Grafana will maintain Default is 100 For more details check the Transport MaxIdleConns https golang org pkg net http Transport MaxIdleConns documentation idle conn timeout seconds The length of time that Grafana maintains idle connections before closing them Default is 90 seconds For more details check the Transport IdleConnTimeout https golang org pkg net http Transport IdleConnTimeout documentation send user header If enabled and user is not anonymous 
data proxy will add X Grafana User header with username into the request Default is false response limit Limits the amount of bytes that will be read accepted from responses of outgoing HTTP requests Default is 0 which means disabled row limit Limits the number of rows that Grafana will process from SQL relational data sources Default is 1000000 user agent Sets a custom value for the User Agent header for outgoing data proxy requests If empty the default value is Grafana BuildVersion for example Grafana 9 0 0 hr analytics enabled This option is also known as usage analytics When false this option disables the writers that write to the Grafana database and the associated features such as dashboard and data source insights presence indicators and advanced dashboard search The default value is true reporting enabled When enabled Grafana will send anonymous usage statistics to stats grafana org No IP addresses are being tracked only simple counters to track running instances versions dashboard and error counts It is very helpful to us so please leave this enabled Counters are sent every 24 hours Default value is true check for updates Set to false disables checking for new versions of Grafana from Grafana s GitHub repository When enabled the check for a new version runs every 10 minutes It will notify via the UI when a new version is available The check itself will not prompt any auto updates of the Grafana software nor will it send any sensitive information check for plugin updates Set to false disables checking for new versions of installed plugins from https grafana com When enabled the check for a new plugin runs every 10 minutes It will notify via the UI when a new plugin update exists The check itself will not prompt any auto updates of the plugin nor will it send any sensitive information google analytics ua id If you want to track Grafana usage via Google analytics specify your Universal Analytics ID here By default this feature is disabled google analytics 4 id If you want to track Grafana usage via Google Analytics 4 specify your GA4 ID here By default this feature is disabled google tag manager id Google Tag Manager ID only enabled if you enter an ID here rudderstack write key If you want to track Grafana usage via Rudderstack specify your Rudderstack Write Key here The rudderstack data plane url must also be provided for this feature to be enabled By default this feature is disabled rudderstack data plane url Rudderstack data plane url that will receive Rudderstack events The rudderstack write key must also be provided for this feature to be enabled rudderstack sdk url Optional If tracking with Rudderstack is enabled you can provide a custom URL to load the Rudderstack SDK rudderstack config url Optional If tracking with Rudderstack is enabled you can provide a custom URL to load the Rudderstack config rudderstack integrations url Optional If tracking with Rudderstack is enabled you can provide a custom URL to load the SDK for destinations running in device mode This setting is only valid for Rudderstack version 1 1 and higher application insights connection string If you want to track Grafana usage via Azure Application Insights then specify your Application Insights connection string Since the connection string contains semicolons you need to wrap it in backticks By default tracking usage is disabled application insights endpoint url Optionally use this option to override the default endpoint address for Application Insights data collecting For details refer to the Azure 
<hr />

### feedback_links_enabled

Set to `false` to remove all feedback links from the UI. Default is `true`.

## [security]

### disable_initial_admin_creation

Disable creation of the admin user on first start of Grafana. Default is `false`.

### admin_user

The name of the default Grafana Admin user, who has full permissions. Default is `admin`.

### admin_password

The password of the default Grafana Admin. Set once on first run. Default is `admin`.

### admin_email

The email of the default Grafana Admin, created on startup. Default is `admin@localhost`.

### secret_key

Used for signing some data source settings like secrets and passwords; the encryption format used is AES-256 in CFB mode. Cannot be changed without requiring an update to data source settings to re-encode them.

### disable_gravatar

Set to `true` to disable the use of Gravatar for user profile images. Default is `false`.

### data_source_proxy_whitelist

Define a whitelist of allowed IP addresses or domains, with ports, to be used in data source URLs with the Grafana data source proxy. Format: `ip_or_domain:port` separated by spaces. PostgreSQL, MySQL, and MSSQL data sources do not use the proxy and are therefore unaffected by this setting.

### disable_brute_force_login_protection

Set to `true` to disable [brute force login protection](https://cheatsheetseries.owasp.org/cheatsheets/Authentication_Cheat_Sheet.html#account-lockout). Default is `false`. An existing user's account will be unable to log in for 5 minutes if all login attempts are spent within a 5-minute window.

### brute_force_login_protection_max_attempts

Configure how many login attempts a user can make within a 5-minute window before the account is locked. Default is `5`.

### cookie_secure

Set to `true` if you host Grafana behind HTTPS. Default is `false`.

### cookie_samesite

Sets the `SameSite` cookie attribute and prevents the browser from sending this cookie along with cross-site requests. The main goal is to mitigate the risk of cross-origin information leakage. This setting also provides some protection against cross-site request forgery attacks (CSRF); [read more about SameSite here](https://owasp.org/www-community/SameSite). Valid values are `lax`, `strict`, `none`, and `disabled`. Default is `lax`. Using the value `disabled` does not add any `SameSite` attribute to cookies.

### allow_embedding

When `false`, the HTTP header `X-Frame-Options: deny` will be set in Grafana HTTP responses, which will instruct browsers to not allow rendering Grafana in a `<frame>`, `<iframe>`, `<embed>` or `<object>`. The main goal is to mitigate the risk of [Clickjacking](https://owasp.org/www-community/attacks/Clickjacking). Default is `false`.

### strict_transport_security

Set to `true` if you want to enable the HTTP `Strict-Transport-Security` (HSTS) response header. Only use this when HTTPS is enabled in your configuration, or when there is another upstream system that ensures your application does HTTPS (like a frontend load balancer). HSTS tells browsers that the site should only be accessed using HTTPS.

### strict_transport_security_max_age_seconds

Sets how long a browser should cache HSTS, in seconds. Only applied if `strict_transport_security` is enabled. The default value is `86400`.

### strict_transport_security_preload

Set to `true` to enable the HSTS `preload` option. Only applied if `strict_transport_security` is enabled. The default value is `false`.

### strict_transport_security_subdomains

Set to `true` to enable the HSTS `includeSubDomains` option. Only applied if `strict_transport_security` is enabled. The default value is `false`.

### x_content_type_options

Set to `false` to disable the `X-Content-Type-Options` response header. The `X-Content-Type-Options` response HTTP header is a marker used by the server to indicate that the MIME types advertised in the `Content-Type` headers should not be changed and be followed. The default value is `true`.
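To illustrate how these options combine, an instance served over HTTPS behind a load balancer might harden its cookies and transport settings along these lines. The values are illustrative and should be adjusted to your own deployment.

```ini
[security]
; only mark the session cookie as secure when Grafana is actually served over HTTPS
cookie_secure = true
cookie_samesite = lax
; tell browsers to require HTTPS for one day
strict_transport_security = true
strict_transport_security_max_age_seconds = 86400
; keep clickjacking protection (X-Frame-Options: deny)
allow_embedding = false
```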
### x_xss_protection

Set to `false` to disable the `X-XSS-Protection` header, which tells browsers to stop pages from loading when they detect reflected cross-site scripting (XSS) attacks. The default value is `true`.

### content_security_policy

Set to `true` to add the `Content-Security-Policy` header to your requests. CSP allows you to control the resources that the user agent can load and helps prevent XSS attacks.

### content_security_policy_template

Set the policy template that will be used when adding the `Content-Security-Policy` header to your requests. `$NONCE` in the template includes a random nonce.

### content_security_policy_report_only

Set to `true` to add the `Content-Security-Policy-Report-Only` header to your requests. CSP in Report-Only mode enables you to experiment with policies by monitoring their effects without enforcing them. You can enable both policies simultaneously.

### content_security_policy_report_only_template

Set the policy template that will be used when adding the `Content-Security-Policy-Report-Only` header to your requests. `$NONCE` in the template includes a random nonce.

### actions_allow_post_url

Sets API paths to be accessible between plugins using the POST verb. If the value is empty, you can only pass remote requests through the proxy. If the value is set, you can also send authenticated POST requests to the local server. You typically use this to enable backend communication between plugins. This is a comma-separated list which uses glob matching.

This will allow access to all plugins that have a backend:

`actions_allow_post_url=/api/plugins/*`

This will limit access to the backend of a single plugin:

`actions_allow_post_url=/api/plugins/grafana-special-app`

<hr />

### angular_support_enabled

This is set to `false` by default, meaning that the Angular framework and support components will not be loaded. This means that all plugins and core features that depend on Angular support will stop working.

The core features that depend on Angular are:

- Old graph panel
- Old table panel

These features each have supported alternatives, and we recommend using them.

### csrf_trusted_origins

List of additional allowed URLs to pass by the CSRF check. Suggested when authentication comes from an IdP.

### csrf_additional_headers

List of allowed headers to be set by the user. Suggested when authentication lives behind reverse proxies.

### csrf_always_check

Set to `true` to execute the CSRF check even if the login cookie is not in a request (default `false`).

### enable_frontend_sandbox_for_plugins

Comma-separated list of plugin IDs that will be loaded inside the frontend sandbox.

## [snapshots]

### enabled

Set to `false` to disable the snapshot feature (default `true`).

### external_enabled

Set to `false` to disable the external snapshot publish endpoint (default `true`).

### external_snapshot_url

Set the root URL to a Grafana instance where you want to publish external snapshots (defaults to `https://snapshots.raintank.io`).

### external_snapshot_name

Set the name for the external snapshot button. Defaults to `Publish to snapshots.raintank.io`.

### public_mode

Set to `true` to enable this Grafana instance to act as an external snapshot server and allow unauthenticated requests for creating and deleting snapshots. Default is `false`.
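As a concrete example, to keep local snapshots available while blocking publishing to the public `snapshots.raintank.io` endpoint, a configuration along these lines could be used. This is a sketch of one possible policy, not a prescribed setup.

```ini
[snapshots]
; keep the snapshot feature itself enabled
enabled = true
; do not allow publishing snapshots to an external service
external_enabled = false
; do not act as a public snapshot server
public_mode = false
```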
<hr />

## [dashboards]

### versions_to_keep

Number of dashboard versions to keep (per dashboard). Default: `20`, Minimum: `1`.

### min_refresh_interval

This feature prevents users from setting the dashboard refresh interval to a lower value than a given interval value. The default interval value is `5 seconds`. The interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (`ms`, `s`, `m`, `h`, `d`), e.g. `30s` or `1m`.

This also limits the refresh interval options in Explore.

### default_home_dashboard_path

Path to the default home dashboard. If this value is empty, then Grafana uses `StaticRootPath` + `dashboards/home.json`. On Linux, Grafana uses `/usr/share/grafana/public/dashboards/home.json` as the default home dashboard location.

<hr />

## [sql_datasources]

### max_open_conns_default

For SQL data sources (MySQL, Postgres, MSSQL) you can override the default maximum number of open connections (default: 100). The value configured in data source settings will be preferred over the default value.

### max_idle_conns_default

For SQL data sources (MySQL, Postgres, MSSQL) you can override the default allowed number of idle connections (default: 100). The value configured in data source settings will be preferred over the default value.

### max_conn_lifetime_default

For SQL data sources (MySQL, Postgres, MSSQL) you can override the default maximum connection lifetime specified in seconds (default: 14400). The value configured in data source settings will be preferred over the default value.

<hr />

## [users]

### allow_sign_up

Set to `false` to prohibit users from being able to sign up / create user accounts. Default is `false`. The admin user can still create users. For more information about creating a user, refer to Add a user.

### allow_org_create

Set to `false` to prohibit users from creating new organizations. Default is `false`.

### auto_assign_org

Set to `true` to automatically add new users to the main organization (id 1). When set to `false`, new users automatically cause a new organization to be created for that new user. The organization will be created even if the `allow_org_create` setting is set to `false`. Default is `true`.

### auto_assign_org_id

Set this value to automatically add new users to the provided org. This requires `auto_assign_org` to be set to `true`. Please make sure that this organization already exists. Default is `1`.

### auto_assign_org_role

The `auto_assign_org_role` setting determines the default role assigned to new users in the main organization (if the `auto_assign_org` setting is set to `true`). You can set this to one of the following roles: `Viewer` (default), `Admin`, `Editor`, and `None`. For example:

`auto_assign_org_role = Viewer`

### verify_email_enabled

Require email validation before sign up completes or when updating a user email address. Default is `false`.

### login_default_org_id

Set the default organization for users when they sign in. The default is `-1`.

### login_hint

Text used as placeholder text on the login page for the login/username input.

### password_hint

Text used as placeholder text on the login page for the password input.

### default_theme

Sets the default UI theme: `dark`, `light`, or `system`. The default theme is `dark`. `system` matches the user's system theme.

### default_language

This option will set the default UI language if a supported IETF language tag like `en-US` is available. If set to `detect`, the default UI language will be determined by browser preference. The default is `en-US`.

### home_page

Path to a custom home page. Users are only redirected to this if the default home dashboard is used. It should match a frontend route and contain a leading slash.

### External user management

If you manage users externally, you can replace the user invite button for organizations with a link to an external site, together with a description.

### viewers_can_edit

Viewers can access and use Explore and perform temporary edits on panels in dashboards they have access to. They cannot save their changes. Default is `false`.
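As an example, the following sketch keeps every new user in a single existing organization with read-only access and prevents them from creating further organizations. The organization ID of `1` is the main organization mentioned above; adjust it if you auto-assign to a different existing org.

```ini
[users]
; put every new user into one existing org instead of creating per-user orgs
auto_assign_org = true
auto_assign_org_id = 1
auto_assign_org_role = Viewer
; users may not create additional organizations
allow_org_create = false
```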
### editors_can_admin

Editors can administrate dashboards, folders, and teams they create. Default is `false`.

### user_invite_max_lifetime_duration

The duration in time a user invitation remains valid before expiring. This setting should be expressed as a duration. Examples: `6h` (hours), `2d` (days), `1w` (week). Default is `24h` (24 hours). The minimum supported duration is `15m` (15 minutes).

### verification_email_max_lifetime_duration

The duration in time a verification email, used to update the email address of a user, remains valid before expiring. This setting should be expressed as a duration. Examples: `6h` (hours), `2d` (days), `1w` (week). Default is `1h` (1 hour).

### last_seen_update_interval

The frequency of updating a user's last seen time. This setting should be expressed as a duration. Examples: `1h` (hour), `15m` (minutes). Default is `15m` (15 minutes). The minimum supported duration is `5m` (5 minutes). The maximum supported duration is `1h` (1 hour).

### hidden_users

This is a comma-separated list of usernames. Users specified here are hidden in the Grafana UI. They are still visible to Grafana administrators and to themselves.

<hr />

## [auth]

Grafana provides many ways to authenticate users. Refer to the Grafana Authentication overview and other authentication documentation for detailed instructions on how to set up and configure authentication.

### login_cookie_name

The cookie name for storing the auth token. Default is `grafana_session`.

### login_maximum_inactive_lifetime_duration

The maximum lifetime (duration) an authenticated user can be inactive before being required to login at next visit. Default is 7 days (`7d`). This setting should be expressed as a duration, e.g. `5m` (minutes), `6h` (hours), `10d` (days), `2w` (weeks), `1M` (month). The lifetime resets at each successful token rotation (`token_rotation_interval_minutes`).

### login_maximum_lifetime_duration

The maximum lifetime (duration) an authenticated user can be logged in since login time before being required to login. Default is 30 days (`30d`). This setting should be expressed as a duration, e.g. `5m` (minutes), `6h` (hours), `10d` (days), `2w` (weeks), `1M` (month).

### token_rotation_interval_minutes

How often auth tokens are rotated for authenticated users when the user is active. The default is every 10 minutes.

### disable_login_form

Set to `true` to disable (hide) the login form, useful if you use OAuth. Default is `false`.

### disable_signout_menu

Set to `true` to disable the signout link in the side menu. This is useful if you use auth proxy. Default is `false`.

### signout_redirect_url

The URL the user is redirected to upon signing out. To support [OpenID Connect RP-Initiated Logout](https://openid.net/specs/openid-connect-rpinitiated-1_0.html), the user must add `post_logout_redirect_uri` to the `signout_redirect_url`.

Example: `signout_redirect_url = http://localhost:8087/realms/grafana/protocol/openid-connect/logout?post_logout_redirect_uri=http%3A%2F%2Flocalhost%3A3000%2Flogin`

### oauth_auto_login

This option is deprecated. Use the `auto_login` option for the specific OAuth provider instead.

Set to `true` to attempt login with OAuth automatically, skipping the login screen. This setting is ignored if multiple OAuth providers are configured. Default is `false`.

### oauth_state_cookie_max_age

How many seconds the OAuth state cookie lives before being deleted. Default is `600` (seconds). Administrators can increase this if they experience OAuth login state mismatch errors.

### oauth_login_error_message

A custom error message for when users are unauthorized. Default is a key for an internationalized phrase in the frontend, `Login provider denied login request`.

### oauth_refresh_token_server_lock_min_wait_ms

Minimum wait time in milliseconds for the server lock retry mechanism. Default is `1000` milliseconds. The
server lock retry mechanism is used to prevent multiple Grafana instances from simultaneously refreshing OAuth tokens This mechanism waits at least this amount of time before retrying to acquire the server lock There are five retries in total so with the default value the total wait time for acquiring the lock is at least 5 seconds the wait time between retries is calculated as random n n 500 which means that the maximum token refresh duration must be less than 5 6 seconds If you experience issues with the OAuth token refresh mechanism you can increase this value to allow more time for the token refresh to complete oauth skip org role update sync This option is removed from G11 in favor of OAuth provider specific skip org role sync settings The following sections explain settings for each provider If you want to change the oauth skip org role update sync setting from true to false then each provider you have set up use the skip org role sync setting to specify whether you want to skip the synchronization Currently if no organization role mapping is found for a user Grafana doesn t update the user s organization role With Grafana 10 if oauth skip org role update sync option is set to false users with no mapping will be reset to the default organization role on every login See auto assign org role option skip org role sync skip org role sync prevents the synchronization of organization roles for a specific OAuth integration while the deprecated setting oauth skip org role update sync affects all configured OAuth providers The default value for skip org role sync is false With skip org role sync set to false the users organization and role is reset on every new login based on the external provider s role See your provider in the tables below With skip org role sync set to true when a user logs in for the first time Grafana sets the organization role based on the value specified in auto assign org role and forces the organization to auto assign org id when specified otherwise it falls back to OrgID 1 Note Enabling skip org role sync also disables the synchronization of Grafana Admins from the external provider as such allow assign grafana admin is ignored Use this setting when you want to manage the organization roles of your users from within Grafana and be able to manually assign them to multiple organizations or to prevent synchronization conflicts when they can be synchronized from another provider The behavior of oauth skip org role update sync and skip org role sync can be seen in the tables below auth grafana com oauth skip org role update sync skip org role sync Resulting Org Role Modifiable false false Synchronize user organization role with Grafana com role If no role is provided auto assign org role is set false true false Skips organization role synchronization for all OAuth providers users Role is set to auto assign org role true false true Skips organization role synchronization for Grafana com users Role is set to auto assign org role true true true Skips organization role synchronization for Grafana com users and all other OAuth providers Role is set to auto assign org role true auth azuread oauth skip org role update sync skip org role sync Resulting Org Role Modifiable false false Synchronize user organization role with AzureAD role If no role is provided auto assign org role is set false true false Skips organization role synchronization for all OAuth providers users Role is set to auto assign org role true false true Skips organization role synchronization for AzureAD users 
Role is set to auto assign org role true true true Skips organization role synchronization for AzureAD users and all other OAuth providers Role is set to auto assign org role true auth google oauth skip org role update sync skip org role sync Resulting Org Role Modifiable false false User organization role is set to auto assign org role and cannot be changed false true false User organization role is set to auto assign org role and can be changed in Grafana true false true User organization role is set to auto assign org role and can be changed in Grafana true true true User organization role is set to auto assign org role and can be changed in Grafana true For GitLab GitHub Okta Generic OAuth providers Grafana synchronizes organization roles and sets Grafana Admins The allow assign grafana admin setting is also accounted for to allow or not setting the Grafana Admin role from the external provider auth github oauth skip org role update sync skip org role sync Resulting Org Role Modifiable false false Synchronize user organization role with GitHub role If no role is provided auto assign org role is set false true false Skips organization role synchronization for all OAuth providers users Role is set to auto assign org role true false true Skips organization role and Grafana Admin synchronization for GitHub users Role is set to auto assign org role true true true Skips organization role synchronization for all OAuth providers and skips Grafana Admin synchronization for GitHub users Role is set to auto assign org role true auth gitlab oauth skip org role update sync skip org role sync Resulting Org Role Modifiable false false Synchronize user organization role with Gitlab role If no role is provided auto assign org role is set false true false Skips organization role synchronization for all OAuth providers users Role is set to auto assign org role true false true Skips organization role and Grafana Admin synchronization for Gitlab users Role is set to auto assign org role true true true Skips organization role synchronization for all OAuth providers and skips Grafana Admin synchronization for Gitlab users Role is set to auto assign org role true auth generic oauth oauth skip org role update sync skip org role sync Resulting Org Role Modifiable false false Synchronize user organization role with the provider s role If no role is provided auto assign org role is set false true false Skips organization role synchronization for all OAuth providers users Role is set to auto assign org role true false true Skips organization role and Grafana Admin synchronization for the provider s users Role is set to auto assign org role true true true Skips organization role synchronization for all OAuth providers and skips Grafana Admin synchronization for the provider s users Role is set to auto assign org role true auth okta oauth skip org role update sync skip org role sync Resulting Org Role Modifiable false false Synchronize user organization role with Okta role If no role is provided auto assign org role is set false true false Skips organization role synchronization for all OAuth providers users Role is set to auto assign org role true false true Skips organization role and Grafana Admin synchronization for Okta users Role is set to auto assign org role true true true Skips organization role synchronization for all OAuth providers and skips Grafana Admin synchronization for Okta users Role is set to auto assign org role true Example skip org role sync auth google oauth skip org role update sync skip org 
role sync Resulting Org Role Example Scenario false false Synchronized with Google Auth organization roles A user logs in to Grafana using their Google account and their organization role is automatically set based on their role in Google true false Skipped synchronization of organization roles from all OAuth providers A user logs in to Grafana using their Google account and their organization role is not set based on their role But Grafana Administrators can modify the role from the UI false true Skipped synchronization of organization roles Google A user logs in to Grafana using their Google account and their organization role is not set based on their role in Google But Grafana Administrators can modify the role from the UI true true Skipped synchronization of organization roles from all OAuth providers including Google A user logs in to Grafana using their Google account and their organization role is not set based on their role in Google But Grafana Administrators can modify the role from the UI api key max seconds to live Limit of API key seconds to live before expiration Default is 1 unlimited sigv4 auth enabled Set to true to enable the AWS Signature Version 4 Authentication option for HTTP based datasources Default is false sigv4 verbose logging Set to true to enable verbose request signature logging when AWS Signature Version 4 Authentication is enabled Default is false hr managed service accounts enabled Only available in Grafana 11 3 Set to true to enable the use of managed service accounts for plugin authentication Default is false Limitations This feature currently only supports single organization deployments The plugin s service account is automatically created in the default organization This means the plugin can only access data and resources within that specific organization auth anonymous Refer to Anonymous authentication for detailed instructions hr auth github Refer to GitHub OAuth2 authentication for detailed instructions hr auth gitlab Refer to Gitlab OAuth2 authentication for detailed instructions hr auth google Refer to Google OAuth2 authentication for detailed instructions hr auth grafananet Legacy key names still in the config file so they work in env variables hr auth grafana com Legacy key names still in the config file so they work in env variables hr auth azuread Refer to Azure AD OAuth2 authentication for detailed instructions hr auth okta Refer to Okta OAuth2 authentication for detailed instructions hr auth generic oauth Refer to Generic OAuth authentication for detailed instructions hr auth basic Refer to Basic authentication for detailed instructions hr auth proxy Refer to Auth proxy authentication for detailed instructions hr auth ldap Refer to LDAP authentication for detailed instructions aws You can configure core and external AWS plugins allowed auth providers Specify what authentication providers the AWS plugins allow For a list of allowed providers refer to the data source configuration page for a given plugin If you configure a plugin by provisioning only providers that are specified in allowed auth providers are allowed Options default AWS SDK default keys Access and secret key credentials Credentials file ec2 iam role EC2 IAM role assume role enabled Set to false to disable AWS authentication from using an assumed role with temporary security credentials For details about assume roles refer to the AWS API reference documentation about the AssumeRole https docs aws amazon com STS latest APIReference API AssumeRole html operation If this option is 
disabled the Assume Role and the External Id field are removed from the AWS data source configuration page If the plugin is configured using provisioning it is possible to use an assumed role as long as assume role enabled is set to true list metrics page limit Use the List Metrics API https docs aws amazon com AmazonCloudWatch latest APIReference API ListMetrics html option to load metrics for custom namespaces in the CloudWatch data source By default the page limit is 500 hr azure Grafana supports additional integration with Azure services when hosted in the Azure Cloud cloud Azure cloud environment where Grafana is hosted Azure Cloud Value Microsoft Azure public cloud AzureCloud default Microsoft Chinese national cloud AzureChinaCloud US Government cloud AzureUSGovernment Microsoft German national cloud Black Forest AzureGermanCloud clouds config The JSON config defines a list of Azure clouds and their associated properties when hosted in custom Azure environments For example ini clouds config name CustomCloud1 displayName Custom Cloud 1 aadAuthority https login cloud1 contoso com properties azureDataExplorerSuffix kusto windows cloud1 contoso com logAnalytics https api loganalytics cloud1 contoso com portal https portal azure cloud1 contoso com prometheusResourceId https prometheus monitor azure cloud1 contoso com resourceManager https management azure cloud1 contoso com managed identity enabled Specifies whether Grafana hosted in Azure service with Managed Identity configured e g Azure Virtual Machines instance Disabled by default needs to be explicitly enabled managed identity client id The client ID to use for user assigned managed identity Should be set for user assigned identity and should be empty for system assigned identity workload identity enabled Specifies whether Azure AD Workload Identity authentication should be enabled in datasources that support it For more documentation on Azure AD Workload Identity review Azure AD Workload Identity https azure github io azure workload identity docs documentation Disabled by default needs to be explicitly enabled workload identity tenant id Tenant ID of the Azure AD Workload Identity Allows to override default tenant ID of the Azure AD identity associated with the Kubernetes service account workload identity client id Client ID of the Azure AD Workload Identity Allows to override default client ID of the Azure AD identity associated with the Kubernetes service account workload identity token file Custom path to token file for the Azure AD Workload Identity Allows to set a custom path to the projected service account token file user identity enabled Specifies whether user identity authentication on behalf of currently signed in user should be enabled in datasources that support it requires AAD authentication Disabled by default needs to be explicitly enabled user identity fallback credentials enabled Specifies whether user identity authentication fallback credentials should be enabled in data sources Enabling this allows data source creators to provide fallback credentials for backend initiated requests such as alerting recorded queries and so on It is by default and needs to be explicitly disabled It will not have any effect if user identity authentication is disabled user identity token url Override token URL for Azure Active Directory By default is the same as token URL configured for AAD authentication settings user identity client id Override ADD application ID which would be used to exchange users token to an access token for the 
datasource By default is the same as used in AAD authentication or can be set to another application for OBO flow user identity client secret Override the AAD application client secret By default is the same as used in AAD authentication or can be set to another application for OBO flow forward settings to plugins Set plugins that will receive Azure settings via plugin context By default this will include all Grafana Labs owned Azure plugins or those that use Azure settings Azure Monitor Azure Data Explorer Prometheus MSSQL azure entra password credentials enabled Specifies whether Entra password auth can be used for the MSSQL data source This authentication is not recommended and consideration should be taken before enabling this Disabled by default needs to be explicitly enabled auth jwt Refer to JWT authentication for more information hr smtp Email server settings enabled Enable this to allow Grafana to send email Default is false host Default is localhost 25 Use port 465 for implicit TLS user In case of SMTP auth default is empty password In case of SMTP auth default is empty If the password contains or then you have to wrap it with triple quotes Example password cert file File path to a cert file default is empty key file File path to a key file default is empty skip verify Verify SSL for SMTP server default is false from address Address used when sending out emails default is admin grafana localhost from name Name to be used when sending out emails default is Grafana ehlo identity Name to be used as client identity for EHLO in SMTP dialog default is instance name startTLS policy Either OpportunisticStartTLS MandatoryStartTLS NoStartTLS Default is empty enable tracing Enable trace propagation in e mail headers using the traceparent tracestate and optionally baggage fields Default is false To enable you must first configure tracing in one of the tracing opentelemetry sections hr smtp static headers Enter key value pairs on their own lines to be included as headers on outgoing emails All keys must be in canonical mail header format Examples Foo bar Foo Header bar hr emails welcome email on sign up Default is false templates pattern Enter a comma separated list of template patterns Default is emails html emails txt content types Enter a comma separated list of content types that should be included in the emails that are sent List the content types according descending preference e g text html text plain for HTML as the most preferred The order of the parts is significant as the mail clients will use the content type that is supported and most preferred by the sender Supported content types are text html and text plain Default is text html hr log Grafana logging options mode Options are console file and syslog Default is console and file Use spaces to separate multiple modes e g console file level Options are debug info warn error and critical Default is info filters Optional settings to set different levels for specific loggers For example filters sqlstore debug user facing default error Use this configuration option to set the default error message shown to users This message is displayed instead of sensitive backend errors which should be obfuscated The default message is Please inspect the Grafana server log for details hr log console Only applicable when console is used in log mode level Options are debug info warn error and critical Default is inherited from log level format Log line format valid options are text console and json Default is console hr log file Only applicable when 
file used in log mode level Options are debug info warn error and critical Default is inherited from log level format Log line format valid options are text console and json Default is text log rotate Enable automated log rotation valid options are false or true Default is true When enabled use the max lines max size shift daily rotate and max days to configure the behavior of the log rotation max lines Maximum lines per file before rotating it Default is 1000000 max size shift Maximum size of file before rotating it Default is 28 which means 1 28 256MB daily rotate Enable daily rotation of files valid options are false or true Default is true max days Maximum number of days to keep log files Default is 7 hr log syslog Only applicable when syslog used in log mode level Options are debug info warn error and critical Default is inherited from log level format Log line format valid options are text console and json Default is text network and address Syslog network type and address This can be UDP TCP or UNIX If left blank then the default UNIX endpoints are used facility Syslog facility Valid options are user daemon or local0 through local7 Default is empty tag Syslog tag By default the process s argv 0 is used hr log frontend enabled Faro javascript agent is initialized Default is false custom endpoint Custom HTTP endpoint to send events captured by the Faro agent to Default log grafana javascript agent will log the events to stdout log endpoint requests per second limit Requests per second limit enforced per an extended period for Grafana backend log ingestion endpoint log grafana javascript agent Default is 3 log endpoint burst limit Maximum requests accepted per short interval of time for Grafana backend log ingestion endpoint log grafana javascript agent Default is 15 instrumentations all enabled Enables all Faro default instrumentation by using getWebInstrumentations Overrides other instrumentation flags instrumentations errors enabled Turn on error instrumentation Only affects Grafana Javascript Agent instrumentations console enabled Turn on console instrumentation Only affects Grafana Javascript Agent instrumentations webvitals enabled Turn on webvitals instrumentation Only affects Grafana Javascript Agent instrumentations tracing enabled Turns on tracing instrumentation Only affects Grafana Javascript Agent api key If custom endpoint required authentication you can set the API key here Only relevant for Grafana Javascript Agent provider hr quota Set quotas to 1 to make unlimited enabled Enable usage quotas Default is false org user Limit the number of users allowed per organization Default is 10 org dashboard Limit the number of dashboards allowed per organization Default is 100 org data source Limit the number of data sources allowed per organization Default is 10 org api key Limit the number of API keys that can be entered per organization Default is 10 org alert rule Limit the number of alert rules that can be entered per organization Default is 100 user org Limit the number of organizations a user can create Default is 10 global user Sets a global limit of users Default is 1 unlimited global org Sets a global limit on the number of organizations that can be created Default is 1 unlimited global dashboard Sets a global limit on the number of dashboards that can be created Default is 1 unlimited global api key Sets global limit of API keys that can be entered Default is 1 unlimited global session Sets a global limit on number of users that can be logged in at one time Default is 1 
unlimited global alert rule Sets a global limit on number of alert rules that can be created Default is 1 unlimited global correlations Sets a global limit on number of correlations that can be created Default is 1 unlimited alerting rule evaluation results Limit the number of query evaluation results per alert rule If the condition query of an alert rule produces more results than this limit the evaluation results in an error Default is 1 unlimited hr unified alerting For more information about the Grafana alerts refer to Grafana Alerting enabled Enable or disable Grafana Alerting The default value is true Alerting rules migrated from dashboards and panels will include a link back via the annotations disabled orgs Comma separated list of organization IDs for which to disable Grafana 8 Unified Alerting admin config poll interval Specify the frequency of polling for admin config changes The default value is 60s The interval string is a possibly signed sequence of decimal numbers followed by a unit suffix ms s m h d e g 30s or 1m alertmanager config poll interval Specify the frequency of polling for Alertmanager config changes The default value is 60s The interval string is a possibly signed sequence of decimal numbers followed by a unit suffix ms s m h d e g 30s or 1m ha redis address The Redis server address that should be connected to For more information on Redis refer to Enable alerting high availability using Redis https grafana com docs grafana GRAFANA VERSION alerting set up configure high availability enable alerting high availability using redis ha redis username The username that should be used to authenticate with the Redis server ha redis password The password that should be used to authenticate with the Redis server ha redis db The Redis database The default value is 0 ha redis prefix A prefix that is used for every key or channel that is created on the Redis server as part of HA for alerting ha redis peer name The name of the cluster peer that will be used as an identifier If none is provided a random one will be generated ha redis max conns The maximum number of simultaneous Redis connections ha listen address Listen IP address and port to receive unified alerting messages for other Grafana instances The port is used for both TCP and UDP It is assumed other Grafana instances are also running on the same port The default value is 0 0 0 0 9094 ha advertise address Explicit IP address and port to advertise other Grafana instances The port is used for both TCP and UDP ha peers Comma separated list of initial instances in a format of host port that will form the HA cluster Configuring this setting will enable High Availability mode for alerting ha peer timeout Time to wait for an instance to send a notification via the Alertmanager In HA each Grafana instance will be assigned a position e g 0 1 We then multiply this position with the timeout to indicate how long should each instance wait before sending the notification to take into account replication lag The default value is 15s The interval string is a possibly signed sequence of decimal numbers followed by a unit suffix ms s m h d e g 30s or 1m ha label The label is an optional string to include on each packet and stream It uniquely identifies the cluster and prevents cross communication issues when sending gossip messages in an environment with multiple clusters ha gossip interval The interval between sending gossip messages By lowering this value more frequent gossip messages are propagated across cluster more quickly at the 
expense of increased bandwidth usage The default value is 200ms The interval string is a possibly signed sequence of decimal numbers followed by a unit suffix ms s m h d e g 30s or 1m ha reconnect timeout Length of time to attempt to reconnect to a lost peer When running Grafana in a Kubernetes cluster set this duration to less than 15m The string is a possibly signed sequence of decimal numbers followed by a unit suffix ms s m h d such as 30s or 1m ha push pull interval The interval between gossip full state syncs Setting this interval lower more frequent will increase convergence speeds across larger clusters at the expense of increased bandwidth usage The default value is 60s The interval string is a possibly signed sequence of decimal numbers followed by a unit suffix ms s m h d e g 30s or 1m execute alerts Enable or disable alerting rule execution The default value is true The alerting UI remains visible evaluation timeout Sets the alert evaluation timeout when fetching data from the data source The default value is 30s The timeout string is a possibly signed sequence of decimal numbers followed by a unit suffix ms s m h d e g 30s or 1m max attempts Sets a maximum number of times we ll attempt to evaluate an alert rule before giving up on that evaluation The default value is 1 min interval Sets the minimum interval to enforce between rule evaluations The default value is 10s which equals the scheduler interval Rules will be adjusted if they are less than this value or if they are not multiple of the scheduler interval 10s Higher values can help with resource management as we ll schedule fewer evaluations over time The interval string is a possibly signed sequence of decimal numbers followed by a unit suffix ms s m h d e g 30s or 1m Note This setting has precedence over each individual rule frequency If a rule frequency is lower than this value then this value is enforced hr unified alerting screenshots For more information about screenshots refer to Images in notifications capture Enable screenshots in notifications This option requires a remote HTTP image rendering service Please see rendering for further configuration options capture timeout The timeout for capturing screenshots If a screenshot cannot be captured within the timeout then the notification is sent without a screenshot The maximum duration is 30 seconds This timeout should be less than the minimum Interval of all Evaluation Groups to avoid back pressure on alert rule evaluation max concurrent screenshots The maximum number of screenshots that can be taken at the same time This option is different from concurrent render request limit as max concurrent screenshots sets the number of concurrent screenshots that can be taken at the same time for all firing alerts where as concurrent render request limit sets the total number of concurrent screenshots across all Grafana services upload external image storage Uploads screenshots to the local Grafana server or remote storage such as Azure S3 and GCS Please see external image storage for further configuration options If this option is false then screenshots will be persisted to disk for up to temp data lifetime hr unified alerting reserved labels For more information about Grafana Reserved Labels refer to Labels in Grafana Alerting docs grafana next alerting fundamentals annotation label how to use labels disabled labels Comma separated list of reserved labels added by the Grafana Alerting engine that should be disabled For example disabled labels grafana folder hr unified 
alerting state history annotations This section controls retention of annotations automatically created while evaluating alert rules when alerting state history backend is configured to be annotations see setting unified alerting state history backend max age Configures for how long alert annotations are stored Default is 0 which keeps them forever This setting should be expressed as an duration Ex 6h hours 10d days 2w weeks 1M month max annotations to keep Configures max number of alert annotations that Grafana stores Default value is 0 which keeps all alert annotations hr annotations cleanupjob batchsize Configures the batch size for the annotation clean up job This setting is used for dashboard API and alert annotations tags length Enforces the maximum allowed length of the tags for any newly introduced annotations It can be between 500 and 4096 inclusive Default value is 500 Setting it to a higher value would impact performance therefore is not recommended annotations dashboard Dashboard annotations means that annotations are associated with the dashboard they are created on max age Configures how long dashboard annotations are stored Default is 0 which keeps them forever This setting should be expressed as a duration Examples 6h hours 10d days 2w weeks 1M month max annotations to keep Configures max number of dashboard annotations that Grafana stores Default value is 0 which keeps all dashboard annotations annotations api API annotations means that the annotations have been created using the API without any association with a dashboard max age Configures how long Grafana stores API annotations Default is 0 which keeps them forever This setting should be expressed as a duration Examples 6h hours 10d days 2w weeks 1M month max annotations to keep Configures max number of API annotations that Grafana keeps Default value is 0 which keeps all API annotations hr explore For more information about this feature refer to Explore enabled Enable or disable the Explore section Default is enabled defaultTimeOffset Set a default time offset from now on the time picker Default is 1 hour This setting should be expressed as a duration Examples 1h hour 1d day 1w week 1M month help Configures the help section enabled Enable or disable the Help section Default is enabled profile Configures the Profile section enabled Enable or disable the Profile section Default is enabled news news feed enabled Enables the news feed section Default is true hr query concurrent query limit Set the number of queries that can be executed concurrently in a mixed data source panel Default is the number of CPUs query history Configures Query history in Explore enabled Enable or disable the Query history Default is enabled hr short links Configures settings around the short link feature expire time Short links that are never accessed are considered expired or stale and will be deleted as cleanup Set the expiration time in days The default is 7 days The maximum is 365 days and setting above the maximum will have 365 set instead Setting 0 means the short links will be cleaned up approximately every 10 minutes A negative value such as 1 will disable expiry Short links without an expiration increase the size of the database and can t be deleted hr metrics For detailed instructions refer to Internal Grafana metrics enabled Enable metrics reporting defaults true Available via HTTP API URL metrics interval seconds Flush write interval when sending metrics to external TSDB Defaults to 10 disable total stats If set to true then total 
stats generation stat totals metrics is disabled Default is false total stats collector interval seconds Sets the total stats collector interval The default is 1800 seconds 30 minutes basic auth username and basic auth password If both are set then basic authentication is required to access the metrics endpoint hr metrics environment info Adds dimensions to the grafana environment info metric which can expose more information about the Grafana instance exampleLabel1 exampleValue1 exampleLabel2 exampleValue2 metrics graphite Use these options if you want to send internal Grafana metrics to Graphite address Enable by setting the address Format is Hostname or ip port prefix Graphite metric prefix Defaults to prod grafana instance name s hr grafana net Refer to grafana com config as that is the new and preferred config name The grafana net config is still accepted and parsed to grafana com config hr grafana com url Default is https grafana com The default authentication identity provider for Grafana Cloud hr tracing jaeger Deprecated use tracing opentelemetry jaeger or tracing opentelemetry otlp instead Configure Grafana s Jaeger client for distributed tracing You can also use the standard JAEGER environment variables to configure Jaeger See the table at the end of https www jaegertracing io docs 1 16 client features for the full list Environment variables will override any settings provided here address The host port destination for reporting spans ex localhost 6831 Can be set with the environment variables JAEGER AGENT HOST and JAEGER AGENT PORT always included tag Comma separated list of tags to include in all new spans such as tag1 value1 tag2 value2 Can be set with the environment variable JAEGER TAGS use instead of with the environment variable sampler type Default value is const Specifies the type of sampler const probabilistic ratelimiting or remote Refer to https www jaegertracing io docs 1 16 sampling client sampling configuration for details on the different tracing types Can be set with the environment variable JAEGER SAMPLER TYPE To override this setting enter sampler type in the tracing opentelemetry section sampler param Default value is 1 This is the sampler configuration parameter Depending on the value of sampler type it can be 0 1 or a decimal value in between For const sampler 0 or 1 for always false true respectively For probabilistic sampler a probability between 0 and 1 0 For rateLimiting sampler the number of spans per second For remote sampler param is the same as for probabilistic and indicates the initial sampling rate before the actual one is received from the mothership May be set with the environment variable JAEGER SAMPLER PARAM Setting sampler param in the tracing opentelemetry section will override this setting sampling server url sampling server url is the URL of a sampling manager providing a sampling strategy Setting sampling server url in the tracing opentelemetry section will override this setting zipkin propagation Default value is false Controls whether or not to use Zipkin s span propagation format with x b3 HTTP headers By default Jaeger s format is used Can be set with the environment variable and value JAEGER PROPAGATION b3 disable shared zipkin spans Default value is false Setting this to true turns off shared RPC spans Leaving this available is the most common setting when using Zipkin elsewhere in your infrastructure hr tracing opentelemetry Configure general parameters shared between OpenTelemetry providers custom attributes Comma separated list 
of attributes to include in all new spans such as key1 value1 key2 value2 Can be set or overridden with the environment variable OTEL RESOURCE ATTRIBUTES use instead of with the environment variable The service name can be set or overridden using attributes or with the environment variable OTEL SERVICE NAME sampler type Default value is const Specifies the type of sampler const probabilistic ratelimiting or remote sampler param Default value is 1 Depending on the value of sampler type the sampler configuration parameter can be 0 1 or any decimal value between 0 and 1 For the const sampler use 0 to never sample or 1 to always sample For the probabilistic sampler you can use a decimal value between 0 0 and 1 0 For the rateLimiting sampler enter the number of spans per second For the remote sampler use a decimal value between 0 0 and 1 0 to specify the initial sampling rate used before the first update is received from the sampling server sampling server url When sampler type is remote this specifies the URL of the sampling server This can be used by all tracing providers Use a sampling server that supports the Jaeger remote sampling API such as jaeger agent jaeger collector opentelemetry collector contrib or Grafana Alloy https grafana com oss alloy opentelemetry collector hr tracing opentelemetry jaeger Configure Grafana s Jaeger client for distributed tracing address The host port destination for reporting spans ex localhost 14268 api traces propagation The propagation specifies the text map propagation format The values jaeger and w3c are supported Add a comma between values to specify multiple formats for example jaeger w3c The default value is w3c hr tracing opentelemetry otlp Configure Grafana s otlp client for distributed tracing address The host port destination for reporting spans ex localhost 4317 propagation The propagation specifies the text map propagation format The values jaeger and w3c are supported Add a comma between values to specify multiple formats for example jaeger w3c The default value is w3c hr external image storage These options control how images should be made public so they can be shared on services like Slack or email message provider Options are s3 webdav gcs azure blob local If left empty then Grafana ignores the upload action hr external image storage s3 endpoint Optional endpoint URL hostname or fully qualified URI to override the default generated S3 endpoint If you want to keep the default just leave this empty You must still provide a region value if you specify an endpoint path style access Set this to true to force path style addressing in S3 requests i e http s3 amazonaws com BUCKET KEY instead of the default which is virtual hosted bucket addressing when possible http BUCKET s3 amazonaws com KEY This option is specific to the Amazon S3 service bucket url for backward compatibility only works when no bucket or region are configured Bucket URL for S3 AWS region can be specified within URL or defaults to us east 1 e g http grafana s3 amazonaws com https grafana s3 ap southeast 2 amazonaws com bucket Bucket name for S3 e g grafana snapshot region Region name for S3 e g us east 1 cn north 1 etc path Optional extra path inside bucket useful to apply expiration policies access key Access key e g AAAAAAAAAAAAAAAAAAAA Access key requires permissions to the S3 bucket for the s3 PutObject and s3 PutObjectAcl actions secret key Secret key e g AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA hr external image storage webdav url URL where Grafana sends PUT request with 
images username Basic auth username password Basic auth password public url Optional URL to send to users in notifications If the string contains the sequence it is replaced with the uploaded filename Otherwise the file name is appended to the path part of the URL leaving any query string unchanged hr external image storage gcs key file Optional path to JSON key file associated with a Google service account to authenticate and authorize If no value is provided it tries to use the application default credentials https cloud google com docs authentication production finding credentials automatically Service Account keys can be created and downloaded from https console developers google com permissions serviceaccounts Service Account should have Storage Object Writer role The access control model of the bucket needs to be Set object level and bucket level permissions Grafana itself will make the images public readable when signed urls are not enabled bucket Bucket Name on Google Cloud Storage path Optional extra path inside bucket enable signed urls If set to true Grafana creates a signed URL https cloud google com storage docs access control signed urls for the image uploaded to Google Cloud Storage signed url expiration Sets the signed URL expiration which defaults to seven days external image storage azure blob account name Storage account name account key Storage account key container name Container name where to store Blob images with random names Creating the blob container beforehand is required Only public containers are supported sas token expiration days Number of days for SAS token validity If specified SAS token will be attached to image URL Allow storing images in private containers hr external image storage local This option does not require any configuration hr rendering Options to configure a remote HTTP image rendering service e g using https github com grafana grafana image renderer renderer token An auth token will be sent to and verified by the renderer The renderer will deny any request without an auth token matching the one configured on the renderer server url URL to a remote HTTP image renderer service e g http localhost 8081 render will enable Grafana to render panels and dashboards to PNG images using HTTP requests to an external service callback url If the remote HTTP image renderer service runs on a different server than the Grafana server you may have to configure this to a URL where Grafana is reachable e g http grafana domain concurrent render request limit Concurrent render request limit affects when the render HTTP endpoint is used Rendering many images at the same time can overload the server which this setting can help protect against by only allowing a certain number of concurrent requests Default is 30 default image width Configures the width of the rendered image The default width is 1000 default image height Configures the height of the rendered image The default height is 500 default image scale Configures the scale of the rendered image The default scale is 1 panels enable alpha Set to true if you want to test alpha panels that are not yet ready for general usage Default is false disable sanitize html This configuration is not available in Grafana Cloud instances If set to true Grafana will allow script tags in text panels Not recommended as it enables XSS vulnerabilities Default is false plugins enable alpha Set to true if you want to test alpha plugins that are not yet ready for general usage Default is false allow loading unsigned plugins Enter a 
comma separated list of plugin identifiers to identify plugins to load even if they are unsigned Plugins with modified signatures are never loaded We do not recommend using this option For more information refer to Plugin signatures plugin admin enabled Available to Grafana administrators only enables installing uninstalling updating plugins directly from the Grafana UI Set to true by default Setting it to false will hide the install uninstall update controls For more information refer to Plugin catalog plugin admin external manage enabled Set to true if you want to enable external management of plugins Default is false This is only applicable to Grafana Cloud users plugin catalog url Custom install learn more URL for enterprise plugins Defaults to https grafana com grafana plugins plugin catalog hidden plugins Enter a comma separated list of plugin identifiers to hide in the plugin catalog public key retrieval disabled Disable download of the public key for verifying plugin signature The default is false If disabled it will use the hardcoded public key public key retrieval on startup Force download of the public key for verifying plugin signature on startup The default is false If disabled the public key will be retrieved every 10 days Requires public key retrieval disabled to be false to have any effect disable plugins Enter a comma separated list of plugin identifiers to avoid loading including core plugins These plugins will be hidden in the catalog preinstall Enter a comma separated list of plugin identifiers to preinstall These plugins will be installed on startup using the Grafana catalog as the source Preinstalled plugins cannot be uninstalled from the Grafana user interface they need to be removed from this list first To pin plugins to a specific version use the format plugin id version for example grafana piechart panel 1 6 0 If no version is specified the latest version is installed The plugin is automatically updated to the latest version when a new version is available in the Grafana plugin catalog on startup except for new major versions To use a custom URL to download a plugin use the format plugin id version url for example grafana piechart panel 1 6 0 https example com grafana piechart panel 1 6 0 zip By default Grafana preinstalls some suggested plugins Check the default configuration file for the list of plugins preinstall async By default plugins are preinstalled asynchronously as a background process This means that Grafana will start up faster but the plugins may not be available immediately If you need a plugin to be installed for provisioning set this option to false This causes Grafana to wait for the plugins to be installed before starting up and fail if a plugin can t be installed preinstall disabled This option disables all preinstalled plugins The default is false To disable a specific plugin from being preinstalled use the disable plugins option hr live max connections The max connections option specifies the maximum number of connections to the Grafana Live WebSocket endpoint per Grafana server instance Default is 100 Refer to Grafana Live configuration documentation if you specify a number higher than default since this can require some operating system and infrastructure tuning 0 disables Grafana Live 1 means unlimited connections allowed origins The allowed origins option is a comma separated list of additional origins Origin header of HTTP Upgrade request during WebSocket connection establishment that will be accepted by Grafana Live If not set default 
## [live]

### max_connections

The `max_connections` option specifies the maximum number of connections to the Grafana Live WebSocket endpoint per Grafana server instance. Default is `100`.

Refer to the Grafana Live configuration documentation if you specify a number higher than default, since this can require some operating system and infrastructure tuning.

`0` disables Grafana Live, `-1` means unlimited connections.

### allowed_origins

The `allowed_origins` option is a comma-separated list of additional origins (`Origin` header of HTTP Upgrade request during WebSocket connection establishment) that will be accepted by Grafana Live.

If not set (default), then the origin is matched over `root_url`, which should be sufficient for most scenarios.

Origin patterns support the wildcard symbol `*`. For example:

```ini
[live]
allowed_origins = "https://*.example.com"
```

### ha_engine

**Experimental.** The high availability (HA) engine name for Grafana Live. By default, it's not set. The only possible value is `redis`.

For more information, refer to the Configure Grafana Live HA setup.

### ha_engine_address

**Experimental.** Address string of the selected high availability (HA) Live engine. For Redis, it's a `host:port` string. Example:

```ini
[live]
ha_engine = redis
ha_engine_address = 127.0.0.1:6379
```

## [plugin.plugin_id]

This section can be used to configure plugin-specific settings. Replace the `plugin_id` attribute with the plugin ID present in `plugin.json`.

Properties described in this section are available for all plugins, but you must set them individually for each plugin.

### tracing

OpenTelemetry must be configured as well. If `true`, propagate the tracing context to the plugin backend and enable tracing (if the backend supports it).

### as_external

Load an external version of a core plugin if it has been installed.

**Experimental.** Requires the feature toggle `externalCorePlugins` to be enabled.
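As a sketch of a plugin-specific section, enabling tracing for a hypothetical plugin whose ID is `my-company-datasource` (the ID is a placeholder; use the ID from the plugin's `plugin.json`):

```ini
[plugin.my-company-datasource]
; Propagate the tracing context to this plugin's backend (OpenTelemetry must be configured as well)
tracing = true
```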
## [plugin.grafana-image-renderer]

For more information, refer to Image rendering.

### rendering_timezone

Instruct headless browser instance to use a default timezone when not provided by Grafana, e.g. when rendering panel image of alert. See [ICU's metaZones.txt](https://cs.chromium.org/chromium/src/third_party/icu/source/data/misc/metaZones.txt) for a list of supported timezone IDs. Falls back to the `TZ` environment variable if not set.

### rendering_language

Instruct headless browser instance to use a default language when not provided by Grafana, e.g. when rendering panel image of alert. Refer to the HTTP header Accept-Language to understand how to format this value, e.g. `fr-CH, fr;q=0.9, en;q=0.8, de;q=0.7, *;q=0.5`.

### rendering_viewport_device_scale_factor

Instruct headless browser instance to use a default device scale factor when not provided by Grafana, e.g. when rendering panel image of alert. Default is `1`. Using a higher value will produce more detailed images (higher DPI), but requires more disk space to store an image.

### rendering_ignore_https_errors

Instruct headless browser instance whether to ignore HTTPS errors during navigation. Per default HTTPS errors are not ignored. Due to the security risk, we do not recommend that you ignore HTTPS errors.

### rendering_verbose_logging

Instruct headless browser instance whether to capture and log verbose information when rendering an image. Default is `false` and will only capture and log error messages. When enabled, debug messages are captured and logged as well.

For the verbose information to be included in the Grafana server log, you have to adjust the rendering log level to debug, i.e. configure the log filter `rendering:debug`.

### rendering_dumpio

Instruct headless browser instance whether to output its debug and error messages into the running process of the remote rendering service. Default is `false`. It can be useful to set this to `true` when troubleshooting.

### rendering_timing_metrics

**Note:** Available from grafana-image-renderer v3.9.0.

Instruct a headless browser instance on whether to record metrics for the duration of every rendering step. Default is `false`. Setting this to `true` can be useful when optimizing the rendering mode settings to improve the plugin performance, or when troubleshooting.

### rendering_args

Additional arguments to pass to the headless browser instance. Defaults are `--no-sandbox,--disable-gpu`. The list of Chromium flags can be found at https://peter.sh/experiments/chromium-command-line-switches/. Separate multiple arguments with commas.

### rendering_chrome_bin

You can configure the plugin to use a different browser binary instead of the pre-packaged version of Chromium.

Please note that this is not recommended. You might encounter problems if the installed version of Chrome/Chromium is not compatible with the plugin.

### rendering_mode

Instruct how headless browser instances are created. Default is `default` and will create a new browser instance on each request.

Mode `clustered` will make sure that only a maximum number of browsers/incognito pages can execute concurrently.

Mode `reusable` will have one browser instance and will create a new incognito page on each request.

### rendering_clustering_mode

When `rendering_mode = clustered`, you can instruct how many browsers or incognito pages can execute concurrently. Default is `browser` and will cluster using browser instances. Mode `context` will cluster using incognito pages.

### rendering_clustering_max_concurrency

When `rendering_mode = clustered`, you can define the maximum number of browser instances/incognito pages that can execute concurrently. Default is `5`.

### rendering_clustering_timeout

Available in grafana-image-renderer v3.3.0 and later versions. When `rendering_mode = clustered`, you can specify the duration a rendering request can take before it will time out. Default is `30` seconds.

### rendering_viewport_max_width

Limit the maximum viewport width that can be requested.

### rendering_viewport_max_height

Limit the maximum viewport height that can be requested.

### rendering_viewport_max_device_scale_factor

Limit the maximum viewport device scale factor that can be requested.

### grpc_host

Change the listening host of the gRPC server. Default host is `127.0.0.1`.

### grpc_port

Change the listening port of the gRPC server. Default port is `0` and will automatically assign a port not in use.

## [enterprise]

For more information about Grafana Enterprise, refer to Grafana Enterprise.

## [feature_toggles]

### enable

Keys of features to enable, separated by space.

### FEATURE_TOGGLE_NAME = false

Some feature toggles for stable features are on by default. Use this setting to disable an on-by-default feature toggle with the name FEATURE_TOGGLE_NAME, for example `exploreMixedDatasource = false`.
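For example, a sketch that enables one toggle and disables an on-by-default one; `featureToggleAdminPage` and `exploreMixedDatasource` are toggles mentioned on this page, used here purely as illustrations:

```ini
[feature_toggles]
; Space-separated keys of features to enable
enable = featureToggleAdminPage
; Disable an on-by-default feature toggle by name
exploreMixedDatasource = false
```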
## [feature_management]

The options in this section configure the experimental Feature Toggle Admin Page feature, which is enabled using the `featureToggleAdminPage` feature toggle. Grafana Labs offers support on a best-effort basis, and breaking changes might occur prior to the feature being made generally available.

Please see Configure feature toggles for more information.

### allow_editing

Lets you switch the feature toggle state in the feature management page. The default is `false`.

### update_webhook

Set the URL of the controller that manages the feature toggle updates. If not set, feature toggles in the feature management page will be read-only.

The API for feature toggle updates has not been defined yet.

### hidden_toggles

Hide additional, specific feature toggles from the feature management page. By default, feature toggles in the `unknown`, `experimental`, and `private preview` stages are hidden from the UI. Use this option to hide toggles in the `public preview`, `general availability`, and `deprecated` stages.

### read_only_toggles

Use to disable updates for additional, specific feature toggles in the feature management page. By default, feature toggles can only be updated if they are in the `general availability` and `deprecated` stages. Use this option to disable updates for toggles in those stages.

## [date_formats]

This section controls system-wide defaults for date formats used in time ranges, graphs, and date input boxes.

The format patterns use [Moment.js](https://momentjs.com/docs/#/displaying/) formatting tokens.

### full_date

Full date format used by the time range picker and in other places where a full date is rendered.

### intervals

These interval formats are used in the graph to show only a partial date or time. For example, if there are only minutes between Y-axis tick labels, then the `interval_minute` format is used.

Defaults:

- `interval_second = HH:mm:ss`
- `interval_minute = HH:mm`
- `interval_hour = MM/DD HH:mm`
- `interval_day = MM/DD`
- `interval_month = YYYY-MM`
- `interval_year = YYYY`

### use_browser_locale

Set this to `true` to have date formats automatically derived from your browser location. Defaults to `false`. This is an experimental feature.

### default_timezone

Used as the default time zone for user preferences. Can be either `browser` for the browser local time zone, or a time zone name from the IANA Time Zone database, such as `UTC` or `Europe/Amsterdam`.

### default_week_start

Set the default start of the week. Valid values are `saturday`, `sunday`, `monday`, or `browser` to use the browser locale to define the first day of the week. Default is `browser`.

## [expressions]

### enabled

Set this to `false` to disable expressions and hide them in the Grafana UI. Default is `true`.

## [geomap]

This section controls the default settings for the Geomap plugin.

### default_baselayer_config

The JSON config used to define the default base map. Four base map options to choose from are `carto`, `esriXYZTiles`, `xyzTiles`, and `standard`.

For example, to set cartoDB light as the default base layer:

```ini
default_baselayer_config = `{
  "type": "xyz",
  "config": {
    "attribution": "Open street map",
    "url": "https://tile.openstreetmap.org/{z}/{x}/{y}.png"
  }
}`
```

### enable_custom_baselayers

Set this to `false` to disable loading other custom base maps and hide them in the Grafana UI. Default is `true`.

## [rbac]

Refer to Role-based access control for more information.

## [navigation.app_sections]

Move an app plugin (referenced by its id), including all its pages, to a specific navigation section. Format: `<pluginId> <sectionId> <sortWeight>`.

## [navigation.app_standalone_pages]

Move an individual app plugin page (referenced by its `path` field) to a specific navigation section. Format: `<pageUrl> <sectionId> <sortWeight>`.

## [public_dashboards]

This section configures the [shared dashboards](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/dashboards/share-dashboards-panels/shared-dashboards/) feature.

### enabled

Set this to `false` to disable the shared dashboards feature. This prevents users from creating new shared dashboards and disables existing ones.
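For example, a short sketch that turns off expressions and shared dashboards instance-wide; both keys are documented above and both default to being enabled:

```ini
[expressions]
; Disable expressions and hide them in the Grafana UI
enabled = false

[public_dashboards]
; Disable the shared dashboards feature
enabled = false
```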
---
aliases:
  - ../../troubleshooting/diagnostics/
  - ../enable-diagnostics/
description: Learn how to configure profiling and tracing so that you can troubleshoot Grafana.
keywords:
  - grafana
  - troubleshooting
  - documentation
  - guide
labels:
  products:
    - enterprise
    - oss
menuTitle: Configure profiling and tracing
title: Configure profiling and tracing to troubleshoot Grafana
weight: 200
---

# Configure profiling and tracing to troubleshoot Grafana

You can set up the `grafana-server` process to enable certain diagnostics when it starts. This can be useful when investigating certain performance problems. It's _not_ recommended to have these enabled by default.

## Turn on profiling and collect profiles

The `grafana-server` can be started with the command-line option `-profile` to enable profiling, `-profile-addr` to override the default HTTP address (`localhost`), and `-profile-port` to override the default HTTP port (`6060`) where the `pprof` debugging endpoints are available.

Further, [`-profile-block-rate`](https://pkg.go.dev/runtime#SetBlockProfileRate) controls the fraction of goroutine blocking events that are reported in the blocking profile, default `1` (i.e. track every event) for backward compatibility reasons, and [`-profile-mutex-rate`](https://pkg.go.dev/runtime#SetMutexProfileFraction) controls the fraction of mutex contention events that are reported in the mutex profile, default `0` (i.e. track no events). The higher the fraction (that is, the smaller this value), the more overhead it adds to normal operations.

Running Grafana with profiling enabled, but without block and mutex profiling, should only add a fraction of overhead and is suitable for [continuous profiling](https://grafana.com/oss/pyroscope/). Adding a small fraction of block and mutex profiling, such as a rate of 10 to 5 (10%-20% of events), should in general be fine.

Enable profiling:

```bash
./grafana server -profile -profile-addr=0.0.0.0 -profile-port=8080
```

Enable profiling with block and mutex profiling at a fraction of 20%:

```bash
./grafana server -profile -profile-addr=0.0.0.0 -profile-port=8080 -profile-block-rate=5 -profile-mutex-rate=5
```

Note that `pprof` debugging endpoints are served on a different port than the Grafana HTTP server.

Check what debugging endpoints are available by browsing `http://<profile-addr>:<profile-port>/debug/pprof`.

There are some additional [godeltaprof](https://github.com/grafana/pyroscope-go/tree/main/godeltaprof) endpoints available which are more suitable in a continuous profiling scenario. These endpoints are `/debug/pprof/delta_heap`, `/debug/pprof/delta_block`, `/debug/pprof/delta_mutex`.

You can configure or override profiling settings using environment variables:

```bash
export GF_DIAGNOSTICS_PROFILING_ENABLED=true
export GF_DIAGNOSTICS_PROFILING_ADDR=0.0.0.0
export GF_DIAGNOSTICS_PROFILING_PORT=8080
export GF_DIAGNOSTICS_PROFILING_BLOCK_RATE=5
export GF_DIAGNOSTICS_PROFILING_MUTEX_RATE=5
```

In general, you use the [Go command pprof](https://golang.org/cmd/pprof/) to both collect and analyze profiling data. You can also use [curl](https://curl.se/) or similar to collect profiles, which could be convenient in environments where you don't have the Go/pprof command available.

The following are some examples of using curl and pprof to collect and analyze memory and CPU profiles.

**Analyzing high memory usage/memory leaks:**

When experiencing high memory usage or potential memory leaks, it's useful to collect several heap profiles and, later when analyzing, compare them.
It's a good idea to wait some time, e.g. 30 seconds, between collecting each profile to allow memory consumption to increase.

```bash
curl http://<profile-addr>:<profile-port>/debug/pprof/heap > heap1.pprof
sleep 30
curl http://<profile-addr>:<profile-port>/debug/pprof/heap > heap2.pprof
```

You can then use the pprof tool to compare the two heap profiles:

```bash
go tool pprof -http=localhost:8081 --base heap1.pprof heap2.pprof
```

**Analyzing high CPU usage:**

When experiencing high CPU usage, it's suggested to collect CPU profiles over a period of time, e.g. 30 seconds.

```bash
curl 'http://<profile-addr>:<profile-port>/debug/pprof/profile?seconds=30' > profile.pprof
```

You can then use the pprof tool to analyze the collected CPU profile:

```bash
go tool pprof -http=localhost:8081 profile.pprof
```

## Use tracing

The `grafana-server` can be started with the arguments `-tracing` to enable tracing and `-tracing-file` to override the default trace file (`trace.out`) where the trace result is written to.

For example:

```bash
./grafana server -tracing -tracing-file=/tmp/trace.out
```

You can configure or override tracing settings using environment variables:

```bash
export GF_DIAGNOSTICS_TRACING_ENABLED=true
export GF_DIAGNOSTICS_TRACING_FILE=/tmp/trace.out
```

View the trace in a web browser (requires Go to be installed):

```bash
go tool trace <trace file>
2019/11/24 22:20:42 Parsing trace...
2019/11/24 22:20:42 Splitting trace...
2019/11/24 22:20:42 Opening browser. Trace viewer is listening on http://127.0.0.1:39735
```

For more information about how to analyze trace files, refer to [Go command trace](https://golang.org/cmd/trace/).
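Grafana maps environment variables to configuration file options as `GF_<SECTION>_<KEY>`, so the variables above imply an equivalent `[diagnostics]` section in the configuration file. This mapping is an assumption inferred from the naming convention rather than something this page states, so verify it against your Grafana version before relying on it. A minimal sketch:

```ini
; Assumed [diagnostics] keys derived from the GF_DIAGNOSTICS_* variables above; verify before use
[diagnostics]
profiling_enabled = true
profiling_addr = 0.0.0.0
profiling_port = 8080
tracing_enabled = true
tracing_file = /tmp/trace.out
```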
---
aliases:
  - ../../enterprise/settings-updates/
description: Settings updates at runtime
keywords:
  - grafana
  - runtime
  - settings
labels:
  products:
    - enterprise
    - oss
title: Settings updates at runtime
weight: 500
---

# Settings updates at runtime

This functionality is deprecated and will be removed in a future release. For configuring SAML authentication, please use the new [SSO settings API]().

By updating settings at runtime, you can update Grafana settings without needing to restart the Grafana server.

Updates that happen at runtime are stored in the database and override [settings from other sources]() (arguments, environment variables, settings file, etc). Therefore, every time a specific setting key is removed at runtime, the value used for that key is inherited from the other sources in the reverse order of precedence (`arguments > environment variables > settings file`). When no value is provided through any of these options, the application default is used.

Currently, **it only supports updates on the `auth.saml` section.**

## Update settings via the API

You can update settings through the [Admin API]().

When you submit a settings update via the API, Grafana verifies whether the given settings updates are allowed and valid. If they are, Grafana stores the settings in the database and reloads the affected Grafana services without restarting the instance.

So, the payload of a `PUT` request to the update settings endpoint (`/api/admin/settings`) should contain (either one or both):

- An `updates` map with a key, and a value per section you want to set.
- A `removals` list with keys per section you want to unset.

For example, if you provide the following `updates`:

```json
{
  "updates": {
    "auth.saml": {
      "enabled": "true",
      "single_logout": "false"
    }
  }
}
```

it would enable SAML and disable single logout.

And, if you provide the following `removals`:

```json
{
  "removals": {
    "auth.saml": ["allow_idp_initiated"]
  }
}
```

it would remove the key/value setting identified by `allow_idp_initiated` within the `auth.saml` section. The SAML service would then be reloaded, and that value would be inherited from the other configuration sources (settings `.ini` file, environment variables, command-line arguments, or any other accepted mechanism to provide configuration).

Therefore, the complete HTTP payload would look like:

```json
{
  "updates": {
    "auth.saml": {
      "enabled": "true",
      "single_logout": "false"
    }
  },
  "removals": {
    "auth.saml": ["allow_idp_initiated"]
  }
}
```

If any of these settings cannot be overridden or are not valid, the request returns an error and none of the settings are persisted to the database.

## Background job (high availability set-ups)

Grafana Enterprise has a built-in scheduled background job that looks into the database every minute for settings updates. If there are updates, it reloads the Grafana services affected by the detected changes.

The background job synchronizes settings between instances in a highly available set-up. So, after you perform some changes through the HTTP API, the other instances are synchronized through the database and the background job.

## Control access with role-based access control

If you have [role-based access control]() enabled, you can control who can read or update settings.

Refer to the [Admin API]() for more information.
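For reference, the `auth.saml` keys used in the examples above as they might appear in the settings `.ini` file — the layer that runtime updates stored in the database take precedence over. The values are illustrative only:

```ini
; Illustrative settings-file baseline; runtime updates stored in the database override these values
[auth.saml]
enabled = true
single_logout = false
allow_idp_initiated = false
```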
--- aliases: - ../../enterprise/enterprise-configuration/ description: Learn about Grafana Enterprise configuration options that you can specify. labels: products: - enterprise - oss title: Configure Grafana Enterprise weight: 100 --- # Configure Grafana Enterprise This page describes Grafana Enterprise-specific configuration options that you can specify in a `.ini` configuration file or using environment variables. Refer to [Configuration]() for more information about available configuration options. ## [enterprise] ### license_path Local filesystem path to Grafana Enterprise's license file. Defaults to `<paths.data>/license.jwt`. ### license_text When set to the text representation (i.e. content of the license file) of the license, Grafana will evaluate and apply the given license to the instance. ### auto_refresh_license When enabled, Grafana will send the license and usage statistics to the license issuer. If the license has been updated on the issuer's side to be valid for a different number of users or a new duration, your Grafana instance will be updated with the new terms automatically. Defaults to `true`. The license only automatically updates once per day. To immediately update the terms for a license, use the Grafana UI to renew your license token. ### license_validation_type When set to `aws`, Grafana will validate its license status with Amazon Web Services (AWS) instead of with Grafana Labs. Only use this setting if you purchased an Enterprise license from AWS Marketplace. Defaults to empty, which means that by default Grafana Enterprise will validate using a license issued by Grafana Labs. For details about licenses issued by AWS, refer to [Activate a Grafana Enterprise license purchased through AWS Marketplace](). ## [white_labeling] ### app_title Set to your company name to override application title. ### login_logo Set to complete URL to override login logo. ### login_background Set to complete CSS background expression to override login background. Example: ```bash [white_labeling] login_background = url(http://www.bhmpics.com/wallpapers/starfield-1920x1080.jpg) ``` ### menu_logo Set to complete URL to override menu logo. ### fav_icon Set to complete URL to override fav icon (icon shown in browser tab). ### apple_touch_icon Set to complete URL to override Apple/iOS icon. ### hide_edition Set to `true` to remove the Grafana edition from appearing in the footer. ### footer_links List the link IDs to use here. Grafana will look for matching link configurations, the link IDs should be space-separated and contain no whitespace. ## [usage_insights.export] By [exporting usage logs](), you can directly query them and create dashboards of the information that matters to you most, such as dashboard errors, most active organizations, or your top-10 most-used queries. ### enabled Enable the usage insights export feature. ### storage Specify a storage type. Defaults to `loki`. ## [usage_insights.export.storage.loki] ### type Set the communication protocol to use with Loki, which is either `grpc` or `http`. Defaults to `grpc`. ### url Set the address for writing logs to Loki (format must be host:port). ### tls Decide whether or not to enable the TLS (Transport Layer Security) protocol when establishing the connection to Loki. Defaults to true. ### tenant_id Set the tenant ID for Loki communication, which is disabled by default. The tenant ID is required to interact with Loki running in [multi-tenant mode](/docs/loki/latest/operations/multi-tenancy/). 
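As a sketch, exporting usage insights logs to a Loki instance over HTTP; the Loki address and tenant ID are illustrative:

```ini
[usage_insights.export]
enabled = true
storage = loki

[usage_insights.export.storage.loki]
type = http
; Illustrative Loki address (format is host:port)
url = localhost:3100
tls = false
tenant_id = 1
```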
## [analytics.summaries]

### buffer_write_interval

Interval for writing dashboard usage stats buffer to database.

### buffer_write_timeout

Timeout for writing dashboard usage stats buffer to database.

### rollup_interval

Interval for trying to roll up per dashboard usage summary. Only rolled up at most once per day.

### rollup_timeout

Timeout for trying to roll up per dashboard usage summary.

## [analytics.views]

### recent_users_age

Age for recent active users.

## [reporting]

### rendering_timeout

Timeout for each panel rendering request.

### concurrent_render_limit

Maximum number of concurrent calls to the rendering service.

### image_scale_factor

Scale factor for rendering images. Value `2` is enough for monitor resolutions, `4` would be better for printed material. Setting a higher value affects performance and memory.

### max_attachment_size_mb

Set the maximum file size in megabytes for the CSV attachments.

### fonts_path

Path to the directory containing font files.

### font_regular

Name of the TrueType font file with regular style.

### font_bold

Name of the TrueType font file with bold style.

### font_italic

Name of the TrueType font file with italic style.

### max_retries_per_panel

Maximum number of panel rendering request retries before returning an error. To disable the retry feature, enter `0`. This is available in public preview and requires the `reportingRetries` feature toggle.

### allowed_domains

Allowed domains to receive reports. Use an asterisk (`*`) to allow all domains. Use a comma-separated list to allow multiple domains.

Example: allowed_domains = grafana.com, example.org

## [auditing]

[Auditing]() allows you to track important changes to your Grafana instance. By default, audit logs are logged to file but the auditing feature also supports sending logs directly to Loki.

### enabled

Enable the auditing feature. Defaults to false.

### loggers

List of enabled loggers.

### log_dashboard_content

Keep dashboard content in the logs (request or response fields). This can significantly increase the size of your logs.

### verbose

Log all requests and keep request and response bodies. This can significantly increase the size of your logs.

### log_all_status_codes

Set to false to only log requests with 2xx, 3xx, 401, 403, 500 responses.

### max_response_size_bytes

Maximum response body (in bytes) to be recorded. May help reduce the memory footprint caused by auditing.

## [auditing.logs.file]

### path

Path to logs folder.

### max_files

Maximum log files to keep.

### max_file_size_mb

Max size in megabytes per log file.

## [auditing.logs.loki]

### url

Set the URL for writing logs to Loki.

### tls

If true, it establishes a secure connection to Loki. Defaults to true.

### tenant_id

Set the tenant ID for Loki communication, which is disabled by default. The tenant ID is required to interact with Loki running in [multi-tenant mode](/docs/loki/latest/operations/multi-tenancy/).

## [auth.saml]

### enabled

If true, the feature is enabled. Defaults to false.

### allow_sign_up

If true, allow new Grafana users to be created through SAML logins. Defaults to true.

### certificate

Base64-encoded public X.509 certificate. Used to sign requests to the IdP.

### certificate_path

Path to the public X.509 certificate. Used to sign requests to the IdP.

### private_key

Base64-encoded private key. Used to decrypt assertions from the IdP.

### private_key_path

Path to the private key. Used to decrypt assertions from the IdP.

### idp_metadata

Base64-encoded IdP SAML metadata XML.
Used to verify and obtain binding locations from the IdP. ### idp_metadata_path Path to the SAML metadata XML. Used to verify and obtain binding locations from the IdP. ### idp_metadata_url URL to fetch SAML IdP metadata. Used to verify and obtain binding locations from the IdP. ### max_issue_delay Time since the IdP issued a response and the SP is allowed to process it. Defaults to 90 seconds. ### metadata_valid_duration How long the SPs metadata is valid. Defaults to 48 hours. ### assertion_attribute_name Friendly name or name of the attribute within the SAML assertion to use as the user name. Alternatively, this can be a template with variables that match the names of attributes within the SAML assertion. ### assertion_attribute_login Friendly name or name of the attribute within the SAML assertion to use as the user login handle. ### assertion_attribute_email Friendly name or name of the attribute within the SAML assertion to use as the user email. ### assertion_attribute_groups Friendly name or name of the attribute within the SAML assertion to use as the user groups. ### assertion_attribute_role Friendly name or name of the attribute within the SAML assertion to use as the user roles. ### assertion_attribute_org Friendly name or name of the attribute within the SAML assertion to use as the user organization. ### allowed_organizations List of comma- or space-separated organizations. Each user must be a member of at least one organization to log in. ### org_mapping List of comma- or space-separated Organization:OrgId:Role mappings. Organization can be `*` meaning "All users". Role is optional and can have the following values: `Admin`, `Editor` ,`Viewer` or `None`. ### role_values_none List of comma- or space-separated roles that will be mapped to the None role. ### role_values_viewer List of comma- or space-separated roles that will be mapped to the Viewer role. ### role_values_editor List of comma- or space-separated roles that will be mapped to the Editor role. ### role_values_admin List of comma- or space-separated roles that will be mapped to the Admin role. ### role_values_grafana_admin List of comma- or space-separated roles that will be mapped to the Grafana Admin (Super Admin) role. ## [keystore.vault] ### url Location of the Vault server. ### namespace Vault namespace if using Vault with multi-tenancy. ### auth_method Method for authenticating towards Vault. Vault is inactive if this option is not set. Current possible values: `token`. ### token Secret token to connect to Vault when auth_method is `token`. ### lease_renewal_interval Time between checking if there are any secrets which needs to be renewed. ### lease_renewal_expires_within Time until expiration for tokens which are renewed. Should have a value higher than lease_renewal_interval. ### lease_renewal_increment New duration for renewed tokens. Vault may be configured to ignore this value and impose a stricter limit. ## [security.egress] Security egress makes it possible to control outgoing traffic from the Grafana server. ### host_deny_list A list of hostnames or IP addresses separated by spaces for which requests are blocked. ### host_allow_list A list of hostnames or IP addresses separated by spaces for which requests are allowed. All other requests are blocked. ### header_drop_list A list of headers that are stripped from the outgoing data source and alerting requests. ### cookie_drop_list A list of cookies that are stripped from the outgoing data source and alerting requests. 
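For example, a sketch of an egress allow-list; the hostnames, header name, and cookie name are illustrative:

```ini
[security.egress]
; Only allow outgoing data source and alerting requests to these hosts (illustrative values)
host_allow_list = prometheus.internal.example.com loki.internal.example.com
; Strip these headers and cookies from outgoing requests (illustrative values)
header_drop_list = X-Internal-Token
cookie_drop_list = session
```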
## [security.encryption] ### algorithm Encryption algorithm used to encrypt secrets stored in the database and cookies. Possible values are `aes-cfb` (default) and `aes-gcm`. AES-CFB stands for _Advanced Encryption Standard_ in _cipher feedback_ mode, and AES-GCM stands for _Advanced Encryption Standard_ in _Galois/Counter Mode_. ## [caching] When query caching is enabled, Grafana can temporarily store the results of data source queries and serve cached responses to similar requests. ### backend The caching backend to use when storing cached queries. Options: `memory`, `redis`, and `memcached`. The default is `memory`. ### enabled Setting 'enabled' to `true` allows users to configure query caching for data sources. This value is `true` by default. This setting enables the caching feature, but it does not turn on query caching for any data source. To turn on query caching for a data source, update the setting on the data source configuration page. For more information, refer to the [query caching docs](). ### ttl _Time to live_ (TTL) is the time that a query result is stored in the caching system before it is deleted or refreshed. This setting defines the time to live for query caching, when TTL is not configured in data source settings. The default value is `1m` (1 minute). ### max_ttl The max duration that a query result is stored in the caching system before it is deleted or refreshed. This value will override `ttl` config option or data source setting if the `ttl` value is greater than `max_ttl`. To disable this constraint, set this value to `0s`. The default is `0s` (disabled). Disabling this constraint is not recommended in production environments. ### max_value_mb This value limits the size of a single cache value. If a cache value (or query result) exceeds this size, then it is not cached. To disable this limit, set this value to `0`. The default is `1`. ### connection_timeout This setting defines the duration to wait for a connection to the caching backend. The default is `5s`. ### read_timeout This setting defines the duration to wait for the caching backend to return a cached result. To disable this timeout, set this value to `0s`. The default is `0s` (disabled). Disabling this timeout is not recommended in production environments. ### write_timeout This setting defines the number of seconds to wait for the caching backend to store a result. To disable this timeout, set this value to `0s`. The default is `0s` (disabled). Disabling this timeout is not recommended in production environments. ## [caching.encryption] ### enabled When 'enabled' is `true`, query values in the cache are encrypted. The default is `false`. ### encryption_key A string used to generate a key for encrypting the cache. For the encrypted cache data to persist between Grafana restarts, you must specify this key. If it is empty when encryption is enabled, then the key is automatically generated on startup, and the cache clears upon restarts. The default is `""`. ## [caching.memory] ### gc_interval When storing cache data in-memory, this setting defines how often a background process cleans up stale data from the in-memory cache. More frequent "garbage collection" can keep memory usage from climbing but will increase CPU usage. The default is `1m`. ### max_size_mb The maximum size of the in-memory cache in megabytes. Once this size is reached, new cache items are rejected. For more flexible control over cache eviction policies and size, use the Redis or Memcached backend. 
To disable the maximum, set this value to `0`. The default is `25`. Disabling the maximum is not recommended in production environments.

## [caching.redis]

### url

The full Redis URL of your Redis server. For example: `redis://username:password@localhost:6379`. To enable TLS, use the `rediss` scheme.

The default is `"redis://localhost:6379"`.

### cluster

A comma-separated list of Redis cluster members, either in `host:port` format or using the full Redis URLs (`redis://username:password@localhost:6379`). For example, `localhost:7000, localhost:7001, localhost:7002`.

If you use the full Redis URLs, then you can specify the scheme, username, and password only once. For example, `redis://username:password@localhost:0000,localhost:1111,localhost:2222`. You cannot specify a different username and password for each URL.

If you specify `cluster`, the value for `url` is ignored.

You can enable TLS for cluster mode using the `rediss` scheme in Grafana Enterprise v8.5 and later versions.

### prefix

A string that prefixes all Redis keys. This value must be set if using a shared database in Redis. If `prefix` is empty, then one will not be used.

The default is `"grafana"`.

## [caching.memcached]

### servers

A space-separated list of memcached servers. Example: `memcached-server-1:11211 memcached-server-2:11212 memcached-server-3:11211`. Or if there's only one server: `memcached-server:11211`.

The default is `"localhost:11211"`.

The following memcached configuration requires the `tlsMemcached` feature toggle.

### tls_enabled

Enables TLS authentication for memcached. Defaults to `false`.

### tls_cert_path

Path to the client certificate, which will be used for authenticating with the server. Also requires the key path to be configured.

### tls_key_path

Path to the key for the client certificate. Also requires the client certificate to be configured.

### tls_ca_path

Path to the CA certificates to validate the server certificate against. If not set, the host's root CA certificates are used.

### tls_server_name

Override the expected name on the server certificate.

### connection_timeout

Timeout for the memcached client to connect to memcached. Defaults to `0`, which uses the memcached client default timeout per connection scheme.

## [recorded_queries]

### enabled

Whether the recorded queries feature is enabled.

### min_interval

Sets the minimum interval to enforce between query evaluations. The default value is `10s`. Query evaluations will be adjusted if they are less than this value. Higher values can help with resource management.

The interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m.

### max_queries

The maximum number of recorded queries that can exist.

### default_remote_write_datasource_uid

The UID of the datasource where the query data will be written.

If all `default_remote_write_*` properties are set, this information will be populated at startup. If a remote write target has already been configured, nothing will happen.

### default_remote_write_path

The API path where metrics will be written.

If all `default_remote_write_*` properties are set, this information will be populated at startup. If a remote write target has already been configured, nothing will happen.

### default_remote_write_datasource_org_id

The org ID of the datasource where the query data will be written.

If all `default_remote_write_*` properties are set, this information will be populated at startup.
If a remote write target has already been configured, nothing will happen.
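As a sketch, enabling recorded queries with a pre-configured remote write target; the datasource UID, org ID, write path, and query limit are placeholders, not documented defaults:

```ini
[recorded_queries]
enabled = true
min_interval = 10s
; Placeholder limit; choose a value that fits your instance
max_queries = 100
; Placeholder remote-write target details; replace with your datasource's values
default_remote_write_datasource_uid = my-prometheus-uid
default_remote_write_datasource_org_id = 1
default_remote_write_path = /api/v1/write
```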
grafana setup
aliases enterprise enterprise configuration description Learn about Grafana Enterprise configuration options that you can specify labels products enterprise oss title Configure Grafana Enterprise weight 100 Configure Grafana Enterprise This page describes Grafana Enterprise specific configuration options that you can specify in a ini configuration file or using environment variables Refer to Configuration for more information about available configuration options enterprise license path Local filesystem path to Grafana Enterprise s license file Defaults to paths data license jwt license text When set to the text representation i e content of the license file of the license Grafana will evaluate and apply the given license to the instance auto refresh license When enabled Grafana will send the license and usage statistics to the license issuer If the license has been updated on the issuer s side to be valid for a different number of users or a new duration your Grafana instance will be updated with the new terms automatically Defaults to true The license only automatically updates once per day To immediately update the terms for a license use the Grafana UI to renew your license token license validation type When set to aws Grafana will validate its license status with Amazon Web Services AWS instead of with Grafana Labs Only use this setting if you purchased an Enterprise license from AWS Marketplace Defaults to empty which means that by default Grafana Enterprise will validate using a license issued by Grafana Labs For details about licenses issued by AWS refer to Activate a Grafana Enterprise license purchased through AWS Marketplace white labeling app title Set to your company name to override application title login logo Set to complete URL to override login logo login background Set to complete CSS background expression to override login background Example bash white labeling login background url http www bhmpics com wallpapers starfield 1920x1080 jpg menu logo Set to complete URL to override menu logo fav icon Set to complete URL to override fav icon icon shown in browser tab apple touch icon Set to complete URL to override Apple iOS icon hide edition Set to true to remove the Grafana edition from appearing in the footer footer links List the link IDs to use here Grafana will look for matching link configurations the link IDs should be space separated and contain no whitespace usage insights export By exporting usage logs you can directly query them and create dashboards of the information that matters to you most such as dashboard errors most active organizations or your top 10 most used queries enabled Enable the usage insights export feature storage Specify a storage type Defaults to loki usage insights export storage loki type Set the communication protocol to use with Loki which is either grpc or http Defaults to grpc url Set the address for writing logs to Loki format must be host port tls Decide whether or not to enable the TLS Transport Layer Security protocol when establishing the connection to Loki Defaults to true tenant id Set the tenant ID for Loki communication which is disabled by default The tenant ID is required to interact with Loki running in multi tenant mode docs loki latest operations multi tenancy analytics summaries buffer write interval Interval for writing dashboard usage stats buffer to database buffer write timeout Timeout for writing dashboard usage stats buffer to database rollup interval Interval for trying to roll up per dashboard usage summary Only 
rolled up at most once per day rollup timeout Timeout for trying to rollup per dashboard usage summary analytics views recent users age Age for recent active users reporting rendering timeout Timeout for each panel rendering request concurrent render limit Maximum number of concurrent calls to the rendering service image scale factor Scale factor for rendering images Value 2 is enough for monitor resolutions 4 would be better for printed material Setting a higher value affects performance and memory max attachment size mb Set the maximum file size in megabytes for the CSV attachments fonts path Path to the directory containing font files font regular Name of the TrueType font file with regular style font bold Name of the TrueType font file with bold style font italic Name of the TrueType font file with italic style max retries per panel Maximum number of panel rendering request retries before returning an error To disable the retry feature enter 0 This is available in public preview and requires the reportingRetries feature toggle allowed domains Allowed domains to receive reports Use an asterisk to allow all domains Use a comma separated list to allow multiple domains Example allowed domains grafana com example org auditing Auditing allows you to track important changes to your Grafana instance By default audit logs are logged to file but the auditing feature also supports sending logs directly to Loki enabled Enable the auditing feature Defaults to false loggers List of enabled loggers log dashboard content Keep dashboard content in the logs request or response fields This can significantly increase the size of your logs verbose Log all requests and keep requests and responses body This can significantly increase the size of your logs log all status codes Set to false to only log requests with 2xx 3xx 401 403 500 responses max response size bytes Maximum response body in bytes to be recorded May help reducing the memory footprint caused by auditing auditing logs file path Path to logs folder max files Maximum log files to keep max file size mb Max size in megabytes per log file auditing logs loki url Set the URL for writing logs to Loki tls If true it establishes a secure connection to Loki Defaults to true tenant id Set the tenant ID for Loki communication which is disabled by default The tenant ID is required to interact with Loki running in multi tenant mode docs loki latest operations multi tenancy auth saml enabled If true the feature is enabled Defaults to false allow sign up If true allow new Grafana users to be created through SAML logins Defaults to true certificate Base64 encoded public X 509 certificate Used to sign requests to the IdP certificate path Path to the public X 509 certificate Used to sign requests to the IdP private key Base64 encoded private key Used to decrypt assertions from the IdP private key path Path to the private key Used to decrypt assertions from the IdP idp metadata Base64 encoded IdP SAML metadata XML Used to verify and obtain binding locations from the IdP idp metadata path Path to the SAML metadata XML Used to verify and obtain binding locations from the IdP idp metadata url URL to fetch SAML IdP metadata Used to verify and obtain binding locations from the IdP max issue delay Time since the IdP issued a response and the SP is allowed to process it Defaults to 90 seconds metadata valid duration How long the SPs metadata is valid Defaults to 48 hours assertion attribute name Friendly name or name of the attribute within the SAML assertion to use as 
the user name Alternatively this can be a template with variables that match the names of attributes within the SAML assertion assertion attribute login Friendly name or name of the attribute within the SAML assertion to use as the user login handle assertion attribute email Friendly name or name of the attribute within the SAML assertion to use as the user email assertion attribute groups Friendly name or name of the attribute within the SAML assertion to use as the user groups assertion attribute role Friendly name or name of the attribute within the SAML assertion to use as the user roles assertion attribute org Friendly name or name of the attribute within the SAML assertion to use as the user organization allowed organizations List of comma or space separated organizations Each user must be a member of at least one organization to log in org mapping List of comma or space separated Organization OrgId Role mappings Organization can be meaning All users Role is optional and can have the following values Admin Editor Viewer or None role values none List of comma or space separated roles that will be mapped to the None role role values viewer List of comma or space separated roles that will be mapped to the Viewer role role values editor List of comma or space separated roles that will be mapped to the Editor role role values admin List of comma or space separated roles that will be mapped to the Admin role role values grafana admin List of comma or space separated roles that will be mapped to the Grafana Admin Super Admin role keystore vault url Location of the Vault server namespace Vault namespace if using Vault with multi tenancy auth method Method for authenticating towards Vault Vault is inactive if this option is not set Current possible values token token Secret token to connect to Vault when auth method is token lease renewal interval Time between checking if there are any secrets which needs to be renewed lease renewal expires within Time until expiration for tokens which are renewed Should have a value higher than lease renewal interval lease renewal increment New duration for renewed tokens Vault may be configured to ignore this value and impose a stricter limit security egress Security egress makes it possible to control outgoing traffic from the Grafana server host deny list A list of hostnames or IP addresses separated by spaces for which requests are blocked host allow list A list of hostnames or IP addresses separated by spaces for which requests are allowed All other requests are blocked header drop list A list of headers that are stripped from the outgoing data source and alerting requests cookie drop list A list of cookies that are stripped from the outgoing data source and alerting requests security encryption algorithm Encryption algorithm used to encrypt secrets stored in the database and cookies Possible values are aes cfb default and aes gcm AES CFB stands for Advanced Encryption Standard in cipher feedback mode and AES GCM stands for Advanced Encryption Standard in Galois Counter Mode caching When query caching is enabled Grafana can temporarily store the results of data source queries and serve cached responses to similar requests backend The caching backend to use when storing cached queries Options memory redis and memcached The default is memory enabled Setting enabled to true allows users to configure query caching for data sources This value is true by default This setting enables the caching feature but it does not turn on query caching for any data source 
---
aliases:
  - ../../enterprise/white-labeling/
  - ../enable-custom-branding/
description: Change the look of Grafana to match your corporate brand.
labels:
  products:
    - enterprise
title: Configure custom branding
weight: 300
---

# Configure custom branding

Custom branding enables you to replace the Grafana Labs brand and logo with your corporate brand and logo.

Available in [Grafana Enterprise]() and [Grafana Cloud](/docs/grafana-cloud).

For Cloud Advanced and Enterprise customers, please provide custom elements and logos to our Support team. We will help you host your images and update your custom branding.

This feature is not available for Grafana Free and Pro tiers. For more information on feature availability across plans, refer to our [feature comparison page](/docs/grafana-cloud/cost-management-and-billing/understand-grafana-cloud-features/).

You configure Grafana Enterprise custom branding in the `grafana.ini` file. As with all configuration options, you can also use environment variables to set custom branding.

With custom branding, you can modify the following elements:

- Application title
- Login background
- Login logo
- Side menu top logo
- Footer and help menu links
- Fav icon (shown in browser tab)
- Login title (will not appear if a login logo is set)
- Login subtitle (will not appear if a login logo is set)
- Login box background
- Loading logo

> You must host your logo and other images used by the custom branding feature separately. Make sure Grafana can access the URL where the assets are stored.

The configuration file in Grafana Enterprise contains the following options. For more information about configuring Grafana, refer to [Configure Grafana]().

```ini
# Enterprise only
[white_labeling]
# Set to your company name to override application title
;app_title =

# Set to main title on the login page (Will not appear if a login logo is set)
;login_title =

# Set to login subtitle (Will not appear if a login logo is set)
;login_subtitle =

# Set to complete URL to override login logo
;login_logo =

# Set to complete CSS background expression to override login background
# example: login_background = url(http://www.bhmpics.com/wallpapers/starfield-1920x1080.jpg)
;login_background =

# Set to complete CSS background expression to override login box background
;login_box_background =

# Set to complete URL to override menu logo
;menu_logo =

# Set to complete URL to override fav icon (icon shown in browser tab)
;fav_icon =

# Set to complete URL to override apple/ios icon
;apple_touch_icon =

# Set to complete URL to override loading logo
;loading_logo =

# Set to `true` to remove the Grafana edition from appearing in the footer
;hide_edition =
```

You have the option of adding custom links in place of the default footer links (Documentation, Support, Community). Below is an example of how to replace the default footer and help links with custom links.

```ini
footer_links = support guides extracustom
footer_links_support_text = Support
footer_links_support_url = http://your.support.site
footer_links_guides_text = Guides
footer_links_guides_url = http://your.guides.site
footer_links_extracustom_text = Custom text
footer_links_extracustom_url = http://your.custom.site
```

The following example shows configuring custom branding using environment variables instead of the `custom.ini` or `grafana.ini` files.
```
GF_WHITE_LABELING_FOOTER_LINKS=support guides extracustom
GF_WHITE_LABELING_FOOTER_LINKS_SUPPORT_TEXT=Support
GF_WHITE_LABELING_FOOTER_LINKS_SUPPORT_URL=http://your.support.site
GF_WHITE_LABELING_FOOTER_LINKS_GUIDES_TEXT=Guides
GF_WHITE_LABELING_FOOTER_LINKS_GUIDES_URL=http://your.guides.site
GF_WHITE_LABELING_FOOTER_LINKS_EXTRACUSTOM_TEXT=Custom Text
GF_WHITE_LABELING_FOOTER_LINKS_EXTRACUSTOM_URL=http://your.custom.site
```

The following two links are always present in the footer:

- Grafana edition
- Grafana version with build number

If you specify `footer_links` or `GF_WHITE_LABELING_FOOTER_LINKS`, then all other default links are removed from the footer, and only what is specified is included.

## Custom branding for shared dashboards

In addition to the customizations described above, you can customize the footer of your shared dashboards.

To customize the footer of a shared dashboard, add the following section to the `grafana.ini` file.

```ini
[white_labeling.public_dashboards]
# Hides the footer for the shared dashboards if set to `true`.
# example: footer_hide = "true"
;footer_hide =

# Set to text shown in the footer
;footer_text =

# Set to complete URL to override shared dashboard footer logo. Default is `grafana-logo` and will display the Grafana logo.
# An empty value will hide the footer logo.
;footer_logo =

# Set to link for the footer
;footer_link =

# Set to `true` to hide the Grafana logo next to the title
;header_logo_hide =
```

If you set `footer_hide` to `true`, all the other values are ignored because the footer will not be shown.
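If you prefer to manage these settings the same way as the footer links environment variables shown earlier, the keys in `[white_labeling.public_dashboards]` can also be expressed with Grafana's `GF_<SECTION>_<KEY>` environment variable convention. The following is a minimal sketch with placeholder footer text and link values, assuming that convention applies to this section as it does elsewhere:

```
# placeholder values -- adjust to your own footer text and link
GF_WHITE_LABELING_PUBLIC_DASHBOARDS_FOOTER_TEXT=Powered by Example Corp
GF_WHITE_LABELING_PUBLIC_DASHBOARDS_FOOTER_LINK=http://your.company.site
GF_WHITE_LABELING_PUBLIC_DASHBOARDS_HEADER_LOGO_HIDE=true
```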
--- aliases: - /docs/grafana/latest/setup-grafana/configure-grafana/feature-toggles/ description: Learn about feature toggles, which you can enable or disable. title: Configure feature toggles weight: 150 --- <!-- DO NOT EDIT THIS PAGE, it is machine generated by running the test in --> <!-- https://github.com/grafana/grafana/blob/main/pkg/services/featuremgmt/toggles_gen_test.go#L27 --> # Configure feature toggles You use feature toggles, also known as feature flags, to enable or disable features in Grafana. You can turn on feature toggles to try out new functionality in development or test environments. This page contains a list of available feature toggles. To learn how to turn on feature toggles, refer to our [Configure Grafana documentation](). Feature toggles are also available to Grafana Cloud Advanced customers. If you use Grafana Cloud Advanced, you can open a support ticket and specify the feature toggles and stack for which you want them enabled. For more information about feature release stages, refer to [Release life cycle for Grafana Labs](https://grafana.com/docs/release-life-cycle/) and [Manage feature toggles](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/administration/feature-toggles/#manage-feature-toggles). ## General availability feature toggles Most [generally available](https://grafana.com/docs/release-life-cycle/#general-availability) features are enabled by default. You can disable these feature by setting the feature flag to "false" in the configuration. | Feature toggle name | Description | Enabled by default | | -------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------ | | `disableEnvelopeEncryption` | Disable envelope encryption (emergency only) | | | `publicDashboardsScene` | Enables public dashboard rendering using scenes | Yes | | `featureHighlights` | Highlight Grafana Enterprise features | | | `correlations` | Correlations page | Yes | | `cloudWatchCrossAccountQuerying` | Enables cross-account querying in CloudWatch datasources | Yes | | `accessControlOnCall` | Access control primitives for OnCall | Yes | | `nestedFolders` | Enable folder nesting | Yes | | `logsContextDatasourceUi` | Allow datasource to provide custom UI for context view | Yes | | `lokiQuerySplitting` | Split large interval queries into subqueries with smaller time intervals | Yes | | `prometheusMetricEncyclopedia` | Adds the metrics explorer component to the Prometheus query builder as an option in metric select | Yes | | `influxdbBackendMigration` | Query InfluxDB InfluxQL without the proxy | Yes | | `dataplaneFrontendFallback` | Support dataplane contract field name change for transformations and field name matchers where the name is different | Yes | | `unifiedRequestLog` | Writes error logs to the request logger | Yes | | `recordedQueriesMulti` | Enables writing multiple items from a single query within Recorded Queries | Yes | | `logsExploreTableVisualisation` | A table visualisation for logs in Explore | Yes | | `transformationsRedesign` | Enables the transformations redesign | Yes | | `traceQLStreaming` | Enables response streaming of TraceQL queries of the Tempo data source | | | `awsAsyncQueryCaching` | Enable caching for async queries for Redshift and Athena. 
Requires that the datasource has caching and async query support enabled | Yes | | `prometheusConfigOverhaulAuth` | Update the Prometheus configuration page with the new auth component | Yes | | `alertingNoDataErrorExecution` | Changes how Alerting state manager handles execution of NoData/Error | Yes | | `angularDeprecationUI` | Display Angular warnings in dashboards and panels | Yes | | `dashgpt` | Enable AI powered features in dashboards | Yes | | `alertingInsights` | Show the new alerting insights landing page | Yes | | `panelMonitoring` | Enables panel monitoring through logs and measurements | Yes | | `formatString` | Enable format string transformer | Yes | | `transformationsVariableSupport` | Allows using variables in transformations | Yes | | `kubernetesPlaylists` | Use the kubernetes API in the frontend for playlists, and route /api/playlist requests to k8s | Yes | | `recoveryThreshold` | Enables feature recovery threshold (aka hysteresis) for threshold server-side expression | Yes | | `lokiStructuredMetadata` | Enables the loki data source to request structured metadata from the Loki server | Yes | | `managedPluginsInstall` | Install managed plugins directly from plugins catalog | Yes | | `addFieldFromCalculationStatFunctions` | Add cumulative and window functions to the add field from calculation transformation | Yes | | `annotationPermissionUpdate` | Change the way annotation permissions work by scoping them to folders and dashboards. | Yes | | `dashboardSceneForViewers` | Enables dashboard rendering using Scenes for viewer roles | Yes | | `dashboardSceneSolo` | Enables rendering dashboards using scenes for solo panels | Yes | | `dashboardScene` | Enables dashboard rendering using scenes for all roles | Yes | | `ssoSettingsApi` | Enables the SSO settings API and the OAuth configuration UIs in Grafana | Yes | | `logsInfiniteScrolling` | Enables infinite scrolling for the Logs panel in Explore and Dashboards | Yes | | `exploreMetrics` | Enables the new Explore Metrics core app | Yes | | `alertingSimplifiedRouting` | Enables users to easily configure alert notifications by specifying a contact point directly when editing or creating an alert rule | Yes | | `logRowsPopoverMenu` | Enable filtering menu displayed when text of a log line is selected | Yes | | `lokiQueryHints` | Enables query hints for Loki | Yes | | `alertingQueryOptimization` | Optimizes eligible queries in order to reduce load on datasources | | | `promQLScope` | In-development feature that will allow injection of labels into prometheus queries. | Yes | | `groupToNestedTableTransformation` | Enables the group to nested table transformation | Yes | | `tlsMemcached` | Use TLS-enabled memcached in the enterprise caching feature | Yes | | `cloudWatchNewLabelParsing` | Updates CloudWatch label parsing to be more accurate | Yes | | `accessActionSets` | Introduces action sets for resource permissions. Also ensures that all folder editors and admins can create subfolders without needing any additional permissions. | Yes | | `newDashboardSharingComponent` | Enables the new sharing drawer design | | | `notificationBanner` | Enables the notification banner UI and API | Yes | | `pluginProxyPreserveTrailingSlash` | Preserve plugin proxy trailing slash. 
| | | `pinNavItems` | Enables pinning of nav items | Yes | | `openSearchBackendFlowEnabled` | Enables the backend query flow for Open Search datasource plugin | Yes | | `cloudWatchRoundUpEndTime` | Round up end time for metric queries to the next minute to avoid missing data | Yes | | `cloudwatchMetricInsightsCrossAccount` | Enables cross account observability for Cloudwatch Metric Insights query builder | Yes | | `singleTopNav` | Unifies the top search bar and breadcrumb bar into one | Yes | | `azureMonitorDisableLogLimit` | Disables the log limit restriction for Azure Monitor when true. The limit is enabled by default. | | | `preinstallAutoUpdate` | Enables automatic updates for pre-installed plugins | Yes | | `alertingUIOptimizeReducer` | Enables removing the reducer from the alerting UI when creating a new alert rule and using instant query | Yes | | `azureMonitorEnableUserAuth` | Enables user auth for Azure Monitor datasource only | Yes | ## Public preview feature toggles [Public preview](https://grafana.com/docs/release-life-cycle/#public-preview) features are supported by our Support teams, but might be limited to enablement, configuration, and some troubleshooting. | Feature toggle name | Description | | --------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `panelTitleSearch` | Search for dashboards using panel title | | `autoMigrateOldPanels` | Migrate old angular panels to supported versions (graph, table-old, worldmap, etc) | | `autoMigrateGraphPanel` | Migrate old graph panel to supported time series panel - broken out from autoMigrateOldPanels to enable granular tracking | | `autoMigrateTablePanel` | Migrate old table panel to supported table panel - broken out from autoMigrateOldPanels to enable granular tracking | | `autoMigratePiechartPanel` | Migrate old piechart panel to supported piechart panel - broken out from autoMigrateOldPanels to enable granular tracking | | `autoMigrateWorldmapPanel` | Migrate old worldmap panel to supported geomap panel - broken out from autoMigrateOldPanels to enable granular tracking | | `autoMigrateStatPanel` | Migrate old stat panel to supported stat panel - broken out from autoMigrateOldPanels to enable granular tracking | | `disableAngular` | Dynamic flag to disable angular at runtime. The preferred method is to set `angular_support_enabled` to `false` in the [security] settings, which allows you to change the state at runtime. 
| | `grpcServer` | Run the GRPC server | | `alertingNoNormalState` | Stop maintaining state of alerts that are not firing | | `renderAuthJWT` | Uses JWT-based auth for rendering instead of relying on remote cache | | `refactorVariablesTimeRange` | Refactor time range variables flow to reduce number of API calls made when query variables are chained | | `faroDatasourceSelector` | Enable the data source selector within the Frontend Apps section of the Frontend Observability | | `enableDatagridEditing` | Enables the edit functionality in the datagrid panel | | `sqlDatasourceDatabaseSelection` | Enables previous SQL data source dataset dropdown behavior | | `reportingRetries` | Enables rendering retries for the reporting feature | | `externalServiceAccounts` | Automatic service account and token setup for plugins | | `cloudWatchBatchQueries` | Runs CloudWatch metrics queries as separate batches | | `teamHttpHeaders` | Enables LBAC for datasources to apply LogQL filtering of logs to the client requests for users in teams | | `pdfTables` | Enables generating table data as PDF in reporting | | `canvasPanelPanZoom` | Allow pan and zoom in canvas panel | | `regressionTransformation` | Enables regression analysis transformation | | `onPremToCloudMigrations` | Enable the Grafana Migration Assistant, which helps you easily migrate on-prem dashboards, folders, and data source configurations to your Grafana Cloud stack. | | `newPDFRendering` | New implementation for the dashboard-to-PDF rendering | | `ssoSettingsSAML` | Use the new SSO Settings API to configure the SAML connector | | `azureMonitorPrometheusExemplars` | Allows configuration of Azure Monitor as a data source that can provide Prometheus exemplars | | `ssoSettingsLDAP` | Use the new SSO Settings API to configure LDAP | | `useSessionStorageForRedirection` | Use session storage for handling the redirection after login | | `reportingUseRawTimeRange` | Uses the original report or dashboard time range instead of making an absolute transformation | ## Experimental feature toggles [Experimental](https://grafana.com/docs/release-life-cycle/#experimental) features are early in their development lifecycle and so are not yet supported in Grafana Cloud. Experimental features might be changed or removed without prior notice. 
| Feature toggle name | Description | | --------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `live-service-web-worker` | This will use a webworker thread to processes events rather than the main thread | | `queryOverLive` | Use Grafana Live WebSocket to execute backend queries | | `lokiExperimentalStreaming` | Support new streaming approach for loki (prototype, needs special loki build) | | `storage` | Configurable storage for dashboards, datasources, and resources | | `canvasPanelNesting` | Allow elements nesting | | `vizActions` | Allow actions in visualizations | | `disableSecretsCompatibility` | Disable duplicated secret storage in legacy tables | | `logRequestsInstrumentedAsUnknown` | Logs the path for requests that are instrumented as unknown | | `showDashboardValidationWarnings` | Show warnings when dashboards do not validate against the schema | | `mysqlAnsiQuotes` | Use double quotes to escape keyword in a MySQL query | | `mysqlParseTime` | Ensure the parseTime flag is set for MySQL driver | | `alertingBacktesting` | Rule backtesting API for alerting | | `editPanelCSVDragAndDrop` | Enables drag and drop for CSV and Excel files | | `lokiShardSplitting` | Use stream shards to split queries into smaller subqueries | | `lokiQuerySplittingConfig` | Give users the option to configure split durations for Loki queries | | `individualCookiePreferences` | Support overriding cookie preferences per user | | `influxqlStreamingParser` | Enable streaming JSON parser for InfluxDB datasource InfluxQL query language | | `lokiLogsDataplane` | Changes logs responses from Loki to be compliant with the dataplane specification. | | `disableSSEDataplane` | Disables dataplane specific processing in server side expressions. | | `alertStateHistoryLokiSecondary` | Enable Grafana to write alert state history to an external Loki instance in addition to Grafana annotations. | | `alertStateHistoryLokiPrimary` | Enable a remote Loki instance as the primary source for state history reads. | | `alertStateHistoryLokiOnly` | Disable Grafana alerts from emitting annotations when a remote Loki instance is available. | | `extraThemes` | Enables extra themes | | `lokiPredefinedOperations` | Adds predefined query operations to Loki query editor | | `pluginsFrontendSandbox` | Enables the plugins frontend sandbox | | `frontendSandboxMonitorOnly` | Enables monitor only in the plugin frontend sandbox (if enabled) | | `pluginsDetailsRightPanel` | Enables right panel for the plugins details page | | `awsDatasourcesTempCredentials` | Support temporary security credentials in AWS plugins for Grafana Cloud customers | | `mlExpressions` | Enable support for Machine Learning in server-side expressions | | `metricsSummary` | Enables metrics summary queries in the Tempo data source | | `datasourceAPIServers` | Expose some datasources as apiservers. | | `provisioning` | Next generation provisioning... 
and git | | `permissionsFilterRemoveSubquery` | Alternative permission filter implementation that does not use subqueries for fetching the dashboard folder | | `aiGeneratedDashboardChanges` | Enable AI powered features for dashboards to auto-summary changes when saving | | `sseGroupByDatasource` | Send query to the same datasource in a single request when using server side expressions. The `cloudWatchBatchQueries` feature toggle should be enabled if this used with CloudWatch. | | `libraryPanelRBAC` | Enables RBAC support for library panels | | `wargamesTesting` | Placeholder feature flag for internal testing | | `externalCorePlugins` | Allow core plugins to be loaded as external | | `pluginsAPIMetrics` | Sends metrics of public grafana packages usage by plugins | | `enableNativeHTTPHistogram` | Enables native HTTP Histograms | | `disableClassicHTTPHistogram` | Disables classic HTTP Histogram (use with enableNativeHTTPHistogram) | | `kubernetesSnapshots` | Routes snapshot requests from /api to the /apis endpoint | | `kubernetesDashboards` | Use the kubernetes API in the frontend for dashboards | | `kubernetesDashboardsAPI` | Use the kubernetes API in the backend for dashboards | | `kubernetesFolders` | Use the kubernetes API in the frontend for folders, and route /api/folders requests to k8s | | `grafanaAPIServerTestingWithExperimentalAPIs` | Facilitate integration testing of experimental APIs | | `datasourceQueryTypes` | Show query type endpoints in datasource API servers (currently hardcoded for testdata, expressions, and prometheus) | | `queryService` | Register /apis/query.grafana.app/ -- will eventually replace /api/ds/query | | `queryServiceRewrite` | Rewrite requests targeting /ds/query to the query service | | `queryServiceFromUI` | Routes requests to the new query service | | `cachingOptimizeSerializationMemoryUsage` | If enabled, the caching backend gradually serializes query responses for the cache, comparing against the configured `[caching]max_value_mb` value as it goes. This can can help prevent Grafana from running out of memory while attempting to cache very large query responses. | | `prometheusPromQAIL` | Prometheus and AI/ML to assist users in creating a query | | `prometheusCodeModeMetricNamesSearch` | Enables search for metric names in Code Mode, to improve performance when working with an enormous number of metric names | | `alertmanagerRemoteSecondary` | Enable Grafana to sync configuration and state with a remote Alertmanager. | | `alertmanagerRemotePrimary` | Enable Grafana to have a remote Alertmanager instance as the primary Alertmanager. | | `alertmanagerRemoteOnly` | Disable the internal Alertmanager and only use the external one defined. | | `extractFieldsNameDeduplication` | Make sure extracted field names are unique in the dataframe | | `dashboardNewLayouts` | Enables experimental new dashboard layouts | | `pluginsSkipHostEnvVars` | Disables passing host environment variable to plugin processes | | `tableSharedCrosshair` | Enables shared crosshair in table panel | | `kubernetesFeatureToggles` | Use the kubernetes API for feature toggle management in the frontend | | `newFolderPicker` | Enables the nested folder picker without having nested folders enabled | | `onPremToCloudMigrationsAlerts` | Enables the migration of alerts and its child resources to your Grafana Cloud stack. Requires `onPremToCloudMigrations` to be enabled in conjunction. | | `onPremToCloudMigrationsAuthApiMig` | Enables the use of auth api instead of gcom for internal token services. 
Requires `onPremToCloudMigrations` to be enabled in conjunction. | | `scopeApi` | In-development feature flag for the scope api using the app platform. | | `sqlExpressions` | Enables using SQL and DuckDB functions as Expressions. | | `nodeGraphDotLayout` | Changed the layout algorithm for the node graph | | `kubernetesAggregator` | Enable grafana's embedded kube-aggregator | | `expressionParser` | Enable new expression parser | | `disableNumericMetricsSortingInExpressions` | In server-side expressions, disable the sorting of numeric-kind metrics by their metric name or labels. | | `queryLibrary` | Enables Query Library feature in Explore | | `logsExploreTableDefaultVisualization` | Sets the logs table as default visualisation in logs explore | | `alertingListViewV2` | Enables the new alert list view design | | `dashboardRestore` | Enables deleted dashboard restore feature | | `alertingCentralAlertHistory` | Enables the new central alert history. | | `sqlQuerybuilderFunctionParameters` | Enables SQL query builder function parameters | | `failWrongDSUID` | Throws an error if a datasource has an invalid UIDs | | `alertingApiServer` | Register Alerting APIs with the K8s API server | | `dataplaneAggregator` | Enable grafana dataplane aggregator | | `newFiltersUI` | Enables new combobox style UI for the Ad hoc filters variable in scenes architecture | | `lokiSendDashboardPanelNames` | Send dashboard and panel names to Loki when querying | | `alertingPrometheusRulesPrimary` | Uses Prometheus rules as the primary source of truth for ruler-enabled data sources | | `exploreLogsShardSplitting` | Used in Explore Logs to split queries into multiple queries based on the number of shards | | `exploreLogsAggregatedMetrics` | Used in Explore Logs to query by aggregated metrics | | `exploreLogsLimitedTimeRange` | Used in Explore Logs to limit the time range | | `homeSetupGuide` | Used in Home for users who want to return to the onboarding flow or quickly find popular config pages | | `appSidecar` | Enable the app sidecar feature that allows rendering 2 apps at the same time | | `alertingQueryAndExpressionsStepMode` | Enables step mode for alerting queries and expressions | | `rolePickerDrawer` | Enables the new role picker drawer design | | `pluginsSriChecks` | Enables SRI checks for plugin assets | | `unifiedStorageBigObjectsSupport` | Enables to save big objects in blob storage | | `timeRangeProvider` | Enables time pickers sync | | `prometheusUsesCombobox` | Use new combobox component for Prometheus query editor | | `userStorageAPI` | Enables the user storage API | | `dashboardSchemaV2` | Enables the new dashboard schema version 2, implementing changes necessary for dynamic dashboards and dashboards as code. | | `playlistsWatcher` | Enables experimental watcher for playlists | | `enableExtensionsAdminPage` | Enables the extension admin page regardless of development mode | | `zipkinBackendMigration` | Enables querying Zipkin data source without the proxy | | `enableSCIM` | Enables SCIM support for user and group management | | `crashDetection` | Enables browser crash detection reporting to Faro. | | `jaegerBackendMigration` | Enables querying the Jaeger data source without the proxy | | `alertingNotificationsStepMode` | Enables simplified step mode in the notifications section | ## Development feature toggles The following toggles require explicitly setting Grafana's [app mode]() to 'development' before you can enable this feature toggle. These features tend to be experimental. 
| Feature toggle name | Description | | -------------------------------------- | ----------------------------------------------------------------------------- | | `grafanaAPIServerWithExperimentalAPIs` | Register experimental APIs with the k8s API server, including all datasources | | `grafanaAPIServerEnsureKubectlAccess` | Start an additional https handler and write kubectl options | | `panelTitleSearchInV1` | Enable searching for dashboards using panel title in search v1 |
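As a concrete illustration of how the toggles in the tables above are turned on or off, they are usually set in the `[feature_toggles]` section of `grafana.ini`, or through the matching `GF_FEATURE_TOGGLES_ENABLE` environment variable. The snippet below is a minimal sketch only; the toggle names are examples taken from the tables above, not a recommended set.

```ini
[feature_toggles]
# Comma-separated list of toggles to turn on
enable = panelTitleSearch, regressionTransformation

# Individual toggles can also be set explicitly, for example to turn off
# a generally available toggle that is enabled by default
transformationsRedesign = false
```

Restart Grafana after changing feature toggles for the new values to take effect.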
---
aliases:
  - ../../installation/docker/
description: Guide for running Grafana using Docker
labels:
  products:
    - enterprise
    - oss
menuTitle: Grafana Docker image
title: Run Grafana Docker image
weight: 400
---

# Run Grafana Docker image

This topic guides you through installing Grafana via the official Docker images. Specifically, it covers running Grafana via the Docker command line interface (CLI) and docker-compose.

Grafana Docker images come in two editions:

- **Grafana Enterprise**: `grafana/grafana-enterprise`
- **Grafana Open Source**: `grafana/grafana-oss`

> **Note:** The recommended and default edition of Grafana is Grafana Enterprise. It is free and includes all the features of the OSS edition. Additionally, you have the option to upgrade to the [full Enterprise feature set](/products/enterprise/?utm_source=grafana-install-page), which includes support for [Enterprise plugins](/grafana/plugins/?enterprise=1&utcm_source=grafana-install-page).

The default Grafana images are created using the Alpine Linux project and are based on the official Alpine image.

For instructions on configuring a Docker image for Grafana, refer to [Configure a Grafana Docker image]().

## Run Grafana via Docker CLI

This section shows you how to run Grafana using the Docker CLI.

> **Note:** If you are on a Linux system (for example, Debian or Ubuntu), you might need to add `sudo` before the command or add your user to the `docker` group. For more information, refer to [Linux post-installation steps for Docker Engine](https://docs.docker.com/engine/install/linux-postinstall/).

To run the latest stable version of Grafana, run the following command:

```bash
docker run -d -p 3000:3000 --name=grafana grafana/grafana-enterprise
```

Where:

- [`docker run`](https://docs.docker.com/engine/reference/commandline/run/) is a Docker CLI command that runs a new container from an image
- `-d` (`--detach`) runs the container in the background
- `-p <host-port>:<container-port>` (`--publish`) publishes a container's port(s) to the host, allowing you to reach the container's port via a host port. In this case, we can reach the container's port `3000` via the host's port `3000`
- `--name` assigns a logical name to the container (e.g. `grafana`). This allows you to refer to the container by name instead of by ID.
- `grafana/grafana-enterprise` is the image to run

### Stop the Grafana container

To stop the Grafana container, run the following command:

```bash
# The `docker ps` command shows the processes running in Docker
docker ps

# This will display a list of containers that looks like the following:
CONTAINER ID   IMAGE                        COMMAND     CREATED         STATUS         PORTS                    NAMES
cd48d3994968   grafana/grafana-enterprise   "/run.sh"   8 seconds ago   Up 7 seconds   0.0.0.0:3000->3000/tcp   grafana

# To stop the grafana container run the command
# docker stop CONTAINER-ID or use
# docker stop NAME, which is `grafana` as previously defined
docker stop grafana
```

### Save your Grafana data

By default, Grafana uses an embedded SQLite version 3 database to store configuration, users, dashboards, and other data. When you run Docker images as containers, changes to this Grafana data are written to the filesystem within the container, which will only persist for as long as the container exists. If you stop and remove the container, any filesystem changes (that is, the Grafana data) will be discarded.
To avoid losing your data, you can set up persistent storage using [Docker volumes](https://docs.docker.com/storage/volumes/) or [bind mounts](https://docs.docker.com/storage/bind-mounts/) for your container.

> **Note:** Though both methods are similar, there is a slight difference. If you want your storage to be fully managed by Docker and accessed only through Docker containers and the Docker CLI, you should use Docker volumes. However, if you need full control of the storage and want to allow other processes besides Docker to access or modify the storage layer, then bind mounts are the right choice for your environment.

#### Use Docker volumes (recommended)

Use Docker volumes when you want the Docker Engine to manage the storage volume.

To use Docker volumes for persistent storage, complete the following steps:

1. Create a Docker volume to be used by the Grafana container, giving it a descriptive name (e.g. `grafana-storage`). Run the following command:

   ```bash
   # create a persistent volume for your data
   docker volume create grafana-storage

   # verify that the volume was created correctly
   # you should see some JSON output
   docker volume inspect grafana-storage
   ```

1. Start the Grafana container by running the following command:

   ```bash
   # start grafana
   docker run -d -p 3000:3000 --name=grafana \
     --volume grafana-storage:/var/lib/grafana \
     grafana/grafana-enterprise
   ```

#### Use bind mounts

If you plan to use directories on your host for the database or configuration when running Grafana in Docker, you must start the container with a user that has permission to access and write to the directory you map.

To use bind mounts, run the following command:

```bash
# create a directory for your data
mkdir data

# start grafana with your user id and using the data directory
docker run -d -p 3000:3000 --name=grafana \
  --user "$(id -u)" \
  --volume "$PWD/data:/var/lib/grafana" \
  grafana/grafana-enterprise
```

### Use environment variables to configure Grafana

Grafana supports specifying custom configuration settings using [environment variables]().

```bash
# enable debug logs
docker run -d -p 3000:3000 --name=grafana \
  -e "GF_LOG_LEVEL=debug" \
  grafana/grafana-enterprise
```

## Install plugins in the Docker container

You can install plugins in Grafana from the official and community [plugins page](/grafana/plugins) or by using a custom URL to install a private plugin. These plugins allow you to add new visualization types, data sources, and applications to help you better visualize your data.

Grafana currently supports three types of plugins: panel, data source, and app. For more information on managing plugins, refer to [Plugin Management]().

To install plugins in the Docker container, complete the following steps:

1. Pass the plugins you want installed to Docker with the `GF_PLUGINS_PREINSTALL` environment variable as a comma-separated list. This starts a background process that installs the list of plugins while the Grafana server starts. For example:

   ```bash
   docker run -d -p 3000:3000 --name=grafana \
     -e "GF_PLUGINS_PREINSTALL=grafana-clock-panel, grafana-simple-json-datasource" \
     grafana/grafana-enterprise
   ```

1. To specify the version of a plugin, add the version number to the `GF_PLUGINS_PREINSTALL` environment variable. For example:

   ```bash
   docker run -d -p 3000:3000 --name=grafana \
     -e "GF_PLUGINS_PREINSTALL=grafana-clock-panel 1.0.1" \
     grafana/grafana-enterprise
   ```

   > **Note:** If you do not specify a version number, the latest version is used.
1. To install a plugin from a custom URL, use the following convention to specify the URL: `<plugin ID>@[<plugin version>]@<url to plugin zip>`. For example:

   ```bash
   docker run -d -p 3000:3000 --name=grafana \
     -e "GF_PLUGINS_PREINSTALL=custom-plugin@@https://github.com/VolkovLabs/custom-plugin.zip" \
     grafana/grafana-enterprise
   ```

## Example

The following example runs the latest stable version of Grafana, listening on port 3000, with the container named `grafana`, persistent storage in the `grafana-storage` docker volume, the server root URL set, and the official [clock panel](/grafana/plugins/grafana-clock-panel) plugin installed.

```bash
# create a persistent volume for your data
docker volume create grafana-storage

# start grafana by using the above persistent storage
# and defining environment variables
docker run -d -p 3000:3000 --name=grafana \
  --volume grafana-storage:/var/lib/grafana \
  -e "GF_SERVER_ROOT_URL=http://my.grafana.server/" \
  -e "GF_PLUGINS_PREINSTALL=grafana-clock-panel" \
  grafana/grafana-enterprise
```

## Run Grafana via Docker Compose

Docker Compose is a software tool that makes it easy to define and share applications that consist of multiple containers. It works by using a YAML file, usually called `docker-compose.yaml`, which lists all the services that make up the application. You can start the containers in the correct order with a single command, and with another command, you can shut them down. For more information about the benefits of using Docker Compose and how to use it, refer to [Use Docker Compose](https://docs.docker.com/get-started/08_using_compose/).

### Before you begin

To run Grafana via Docker Compose, install the compose tool on your machine. To determine if the compose tool is available, run the following command:

```bash
docker compose version
```

If the compose tool is unavailable, refer to [Install Docker Compose](https://docs.docker.com/compose/install/).

### Run the latest stable version of Grafana

This section shows you how to run Grafana using Docker Compose. The examples in this section use Compose version 3. For more information about compatibility, refer to [Compose and Docker compatibility matrix](https://docs.docker.com/compose/compose-file/compose-file-v3/).

> **Note:** If you are on a Linux system (for example, Debian or Ubuntu), you might need to add `sudo` before the command or add your user to the `docker` group. For more information, refer to [Linux post-installation steps for Docker Engine](https://docs.docker.com/engine/install/linux-postinstall/).

To run the latest stable version of Grafana using Docker Compose, complete the following steps:

1. Create a `docker-compose.yaml` file.

   ```bash
   # first go into the directory where you have created this docker-compose.yaml file
   cd /path/to/docker-compose-directory

   # now create the docker-compose.yaml file
   touch docker-compose.yaml
   ```

1. Now, add the following code into the `docker-compose.yaml` file. For example:

   ```yaml
   services:
     grafana:
       image: grafana/grafana-enterprise
       container_name: grafana
       restart: unless-stopped
       ports:
         - '3000:3000'
   ```

1. To run `docker-compose.yaml`, run the following command:

   ```bash
   # start the grafana container
   docker compose up -d
   ```

   Where:

   - `up` brings the container up and running
   - `-d` runs the container in detached mode

To determine that Grafana is running, open a browser window and type `IP_ADDRESS:3000`. The sign-in screen should appear.
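If you prefer to check from the terminal instead of a browser, the following commands are usually enough. This is a minimal sketch; it assumes the compose service is named `grafana`, as in the file above, and that Grafana is listening on the default port 3000 on the local machine.

```bash
# list the compose-managed containers and confirm the grafana service is running
docker compose ps

# query Grafana's health endpoint; a healthy instance returns a small JSON payload
curl http://localhost:3000/api/health
```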
### Stop the Grafana container

To stop the Grafana container, run the following command:

```bash
docker compose down
```

> **Note:** For more information about using Docker Compose commands, refer to [docker compose](https://docs.docker.com/engine/reference/commandline/compose/).

### Save your Grafana data

By default, Grafana uses an embedded SQLite version 3 database to store configuration, users, dashboards, and other data. When you run Docker images as containers, changes to this Grafana data are written to the filesystem within the container, which will only persist for as long as the container exists. If you stop and remove the container, any filesystem changes (i.e. the Grafana data) will be discarded.

To avoid losing your data, you can set up persistent storage using [Docker volumes](https://docs.docker.com/storage/volumes/) or [bind mounts](https://docs.docker.com/storage/bind-mounts/) for your container.

#### Use Docker volumes (recommended)

Use Docker volumes when you want the Docker Engine to manage the storage volume.

To use Docker volumes for persistent storage, complete the following steps:

1. Create a `docker-compose.yaml` file.

   ```bash
   # first go into the directory where you have created this docker-compose.yaml file
   cd /path/to/docker-compose-directory

   # now create the docker-compose.yaml file
   touch docker-compose.yaml
   ```

1. Add the following code into the `docker-compose.yaml` file.

   ```yaml
   services:
     grafana:
       image: grafana/grafana-enterprise
       container_name: grafana
       restart: unless-stopped
       ports:
         - '3000:3000'
       volumes:
         - grafana-storage:/var/lib/grafana
   volumes:
     grafana-storage: {}
   ```

1. Save the file and run the following command:

   ```bash
   docker compose up -d
   ```

#### Use bind mounts

If you plan to use directories on your host for the database or configuration when running Grafana in Docker, you must start the container with a user that has permission to access and write to the directory you map.

To use bind mounts, complete the following steps:

1. Create a `docker-compose.yaml` file.

   ```bash
   # first go into the directory where you have created this docker-compose.yaml file
   cd /path/to/docker-compose-directory

   # now create the docker-compose.yaml file
   touch docker-compose.yaml
   ```

1. Create the directory where you will be mounting your data, in this case `data` in your current working directory:

   ```bash
   mkdir $PWD/data
   ```

1. Now, add the following code into the `docker-compose.yaml` file.

   ```yaml
   services:
     grafana:
       image: grafana/grafana-enterprise
       container_name: grafana
       restart: unless-stopped
       # if you are running as root then set it to 0
       # else find the right id with the id -u command
       user: '0'
       ports:
         - '3000:3000'
       # adding the mount volume point which we created earlier
       volumes:
         - '$PWD/data:/var/lib/grafana'
   ```

1. Save the file and run the following command:

   ```bash
   docker compose up -d
   ```

### Example

The following example runs the latest stable version of Grafana, listening on port 3000, with the container named `grafana`, persistent storage in the `grafana-storage` docker volume, the server root URL set, and the official [clock panel](/grafana/plugins/grafana-clock-panel/) plugin installed.
```yaml
services:
  grafana:
    image: grafana/grafana-enterprise
    container_name: grafana
    restart: unless-stopped
    environment:
      - GF_SERVER_ROOT_URL=http://my.grafana.server/
      - GF_PLUGINS_PREINSTALL=grafana-clock-panel
    ports:
      - '3000:3000'
    volumes:
      - 'grafana-storage:/var/lib/grafana'
volumes:
  grafana-storage: {}
```

> **Note:** If you want to specify the version of a plugin, add the version number to the `GF_PLUGINS_PREINSTALL` environment variable. For example: `GF_PLUGINS_PREINSTALL=grafana-clock-panel 1.0.1,grafana-simple-json-datasource 1.3.5`. If you do not specify a version number, the latest version is used.

## Next steps

Refer to the [Getting Started]() guide for information about logging in, setting up data sources, and so on.

## Configure Docker image

Refer to the [Configure a Grafana Docker image]() page for details on options for customizing your environment, logging, database, and so on.

## Configure Grafana

Refer to the [Configuration]() page for details on options for customizing your environment, logging, database, and so on.
--- aliases: - ../../installation/debian/ - ../../installation/installation/debian/ description: Install guide for Grafana on Debian or Ubuntu labels: products: - enterprise - oss menuTitle: Debian or Ubuntu title: Install Grafana on Debian or Ubuntu weight: 100 --- # Install Grafana on Debian or Ubuntu This topic explains how to install Grafana dependencies, install Grafana on Linux Debian or Ubuntu, and start the Grafana server on your Debian or Ubuntu system. There are multiple ways to install Grafana: using the Grafana Labs APT repository, by downloading a `.deb` package, or by downloading a binary `.tar.gz` file. Choose only one of the methods below that best suits your needs. If you install via the `.deb` package or `.tar.gz` file, then you must manually update Grafana for each new version. The following video demonstrates how to install Grafana on Debian and Ubuntu as outlined in this document: ## Install from APT repository If you install from the APT repository, Grafana automatically updates when you run `apt-get update`. | Grafana Version | Package | Repository | | ------------------------- | ------------------ | ------------------------------------- | | Grafana Enterprise | grafana-enterprise | `https://apt.grafana.com stable main` | | Grafana Enterprise (Beta) | grafana-enterprise | `https://apt.grafana.com beta main` | | Grafana OSS | grafana | `https://apt.grafana.com stable main` | | Grafana OSS (Beta) | grafana | `https://apt.grafana.com beta main` | Grafana Enterprise is the recommended and default edition. It is available for free and includes all the features of the OSS edition. You can also upgrade to the [full Enterprise feature set](/products/enterprise/?utm_source=grafana-install-page), which has support for [Enterprise plugins](/grafana/plugins/?enterprise=1&utcm_source=grafana-install-page). Complete the following steps to install Grafana from the APT repository: 1. Install the prerequisite packages: ```bash sudo apt-get install -y apt-transport-https software-properties-common wget ``` 1. Import the GPG key: ```bash sudo mkdir -p /etc/apt/keyrings/ wget -q -O - https://apt.grafana.com/gpg.key | gpg --dearmor | sudo tee /etc/apt/keyrings/grafana.gpg > /dev/null ``` 1. To add a repository for stable releases, run the following command: ```bash echo "deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://apt.grafana.com stable main" | sudo tee -a /etc/apt/sources.list.d/grafana.list ``` 1. To add a repository for beta releases, run the following command: ```bash echo "deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://apt.grafana.com beta main" | sudo tee -a /etc/apt/sources.list.d/grafana.list ``` 1. Run the following command to update the list of available packages: ```bash # Updates the list of available packages sudo apt-get update ``` 1. To install Grafana OSS, run the following command: ```bash # Installs the latest OSS release: sudo apt-get install grafana ``` 1. To install Grafana Enterprise, run the following command: ```bash # Installs the latest Enterprise release: sudo apt-get install grafana-enterprise ``` ## Install Grafana using a deb package or as a standalone binary If you choose not to install Grafana using APT, you can download and install Grafana using the deb package or as a standalone binary. Complete the following steps to install Grafana using DEB or the standalone binaries: 1. Navigate to the [Grafana download page](/grafana/download). 1. Select the Grafana version you want to install. 
- The most recent Grafana version is selected by default. - The **Version** field displays only tagged releases. If you want to install a nightly build, click **Nightly Builds** and then select a version. 1. Select an **Edition**. - **Enterprise:** This is the recommended version. It is functionally identical to the open source version, but includes features you can unlock with a license, if you so choose. - **Open Source:** This version is functionally identical to the Enterprise version, but you will need to download the Enterprise version if you want Enterprise features. 1. Depending on which system you are running, click the **Linux** or **ARM** tab on the [download page](/grafana/download). 1. Copy and paste the code from the [download page](/grafana/download) into your command line and run. ## Uninstall on Debian or Ubuntu Complete any of the following steps to uninstall Grafana. To uninstall Grafana, run the following commands in a terminal window: 1. If you configured Grafana to run with systemd, stop the systemd service for Grafana server: ```shell sudo systemctl stop grafana-server ``` 1. If you configured Grafana to run with init.d, stop the init.d service for Grafana server: ```shell sudo service grafana-server stop ``` 1. To uninstall Grafana OSS: ```shell sudo apt-get remove grafana ``` 1. To uninstall Grafana Enterprise: ```shell sudo apt-get remove grafana-enterprise ``` 1. Optional: To remove the Grafana repository: ```bash sudo rm -i /etc/apt/sources.list.d/grafana.list ``` ## Next steps - [Start the Grafana server]()
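If you want a preview of that step, the following commands are typically all that is needed on a systemd-based Debian or Ubuntu system. This is a minimal sketch; the linked guide also covers init.d and other setups.

```bash
# reload unit files, then start the Grafana server and enable it at boot
sudo systemctl daemon-reload
sudo systemctl start grafana-server
sudo systemctl enable grafana-server.service

# confirm that the service is active
sudo systemctl status grafana-server
```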
--- description: Install guide for Grafana on RHEL and Fedora. labels: products: - enterprise - oss menuTitle: RHEL or Fedora title: Install Grafana on RHEL or Fedora weight: 200 --- # Install Grafana on RHEL or Fedora This topic explains how to install Grafana dependencies, install Grafana on RHEL or Fedora, and start the Grafana server on your system. You can install Grafana from the RPM repository, from standalone RPM, or with the binary `.tar.gz` file. If you install via RPM or the `.tar.gz` file, then you must manually update Grafana for each new version. The following video demonstrates how to install Grafana on RHEL or Fedora as outlined in this document: ## Install Grafana from the RPM repository If you install from the RPM repository, then Grafana is automatically updated every time you update your applications. | Grafana Version | Package | Repository | | ------------------------- | ------------------ | ------------------------------ | | Grafana Enterprise | grafana-enterprise | `https://rpm.grafana.com` | | Grafana Enterprise (Beta) | grafana-enterprise | `https://rpm-beta.grafana.com` | | Grafana OSS | grafana | `https://rpm.grafana.com` | | Grafana OSS (Beta) | grafana | `https://rpm-beta.grafana.com` | Grafana Enterprise is the recommended and default edition. It is available for free and includes all the features of the OSS edition. You can also upgrade to the [full Enterprise feature set](/products/enterprise/?utm_source=grafana-install-page), which has support for [Enterprise plugins](/grafana/plugins/?enterprise=1&utcm_source=grafana-install-page). To install Grafana from the RPM repository, complete the following steps: If you wish to install beta versions of Grafana, substitute the repository URL for the beta URL listed above. 1. Import the GPG key: ```bash wget -q -O gpg.key https://rpm.grafana.com/gpg.key sudo rpm --import gpg.key ``` 1. Create `/etc/yum.repos.d/grafana.repo` with the following content: ```bash [grafana] name=grafana baseurl=https://rpm.grafana.com repo_gpgcheck=1 enabled=1 gpgcheck=1 gpgkey=https://rpm.grafana.com/gpg.key sslverify=1 sslcacert=/etc/pki/tls/certs/ca-bundle.crt ``` 1. To install Grafana OSS, run the following command: ```bash sudo dnf install grafana ``` 1. To install Grafana Enterprise, run the following command: ```bash sudo dnf install grafana-enterprise ``` ## Install the Grafana RPM package manually If you install Grafana manually using YUM or RPM, then you must manually update Grafana for each new version. This method varies according to which Linux OS you are running. **Note:** The RPM files are signed. You can verify the signature with this [public GPG key](https://rpm.grafana.com/gpg.key). 1. On the [Grafana download page](/grafana/download), select the Grafana version you want to install. - The most recent Grafana version is selected by default. - The **Version** field displays only finished releases. If you want to install a beta version, click **Nightly Builds** and then select a version. 1. Select an **Edition**. - **Enterprise** - Recommended download. Functionally identical to the open source version, but includes features you can unlock with a license if you so choose. - **Open Source** - Functionally identical to the Enterprise version, but you will need to download the Enterprise version if you want Enterprise features. 1. Depending on which system you are running, click **Linux** or **ARM**. 1. 
Copy and paste the RPM package URL and the local RPM package information from the [download page](/grafana/download) into the pattern shown below and run the command. ```bash sudo yum install -y <rpm package url> ``` ## Install Grafana as a standalone binary Complete the following steps to install Grafana using the standalone binaries: 1. Navigate to the [Grafana download page](/grafana/download). 1. Select the Grafana version you want to install. - The most recent Grafana version is selected by default. - The **Version** field displays only tagged releases. If you want to install a nightly build, click **Nightly Builds** and then select a version. 1. Select an **Edition**. - **Enterprise:** This is the recommended version. It is functionally identical to the open-source version but includes features you can unlock with a license if you so choose. - **Open Source:** This version is functionally identical to the Enterprise version, but you will need to download the Enterprise version if you want Enterprise features. 1. Depending on which system you are running, click the **Linux** or **ARM** tab on the [download page](/grafana/download). 1. Copy and paste the code from the [download page](/grafana/download) page into your command line and run. ## Uninstall on RHEL or Fedora To uninstall Grafana, run the following commands in a terminal window: 1. If you configured Grafana to run with systemd, stop the systemd service for Grafana server: ```shell sudo systemctl stop grafana-server ``` 1. If you configured Grafana to run with init.d, stop the init.d service for Grafana server: ```shell sudo service grafana-server stop ``` 1. To uninstall Grafana OSS: ```shell sudo dnf remove grafana ``` 1. To uninstall Grafana Enterprise: ```shell sudo dnf remove grafana-enterprise ``` 1. Optional: To remove the Grafana repository: ```shell sudo rm -i /etc/yum.repos.d/grafana.repo ``` ## Next steps Refer to [Start the Grafana server]().
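Before starting the server, you may want to confirm which repository and package version were installed. This is a minimal sketch; use the `grafana` package name instead of `grafana-enterprise` if you installed the OSS edition.

```bash
# confirm the Grafana repository is enabled
dnf repolist enabled | grep -i grafana

# show metadata for the installed package, including its version
rpm -qi grafana-enterprise
```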
--- description: Install guide for Grafana on SUSE or openSUSE. labels: products: - enterprise - oss menuTitle: SUSE or openSUSE title: Install Grafana on SUSE or openSUSE weight: 300 --- # Install Grafana on SUSE or openSUSE This topic explains how to install Grafana dependencies, install Grafana on SUSE or openSUSE and start the Grafana server on your system. You can install Grafana using the RPM repository, or by downloading a binary `.tar.gz` file. If you install via RPM or the `.tar.gz` file, then you must manually update Grafana for each new version. The following video demonstrates how to install Grafana on SUSE or openSUSE as outlined in this document: ## Install Grafana from the RPM repository If you install from the RPM repository, then Grafana is automatically updated every time you run `sudo zypper update`. | Grafana Version | Package | Repository | | ------------------ | ------------------ | ------------------------- | | Grafana Enterprise | grafana-enterprise | `https://rpm.grafana.com` | | Grafana OSS | grafana | `https://rpm.grafana.com` | Grafana Enterprise is the recommended and default edition. It is available for free and includes all the features of the OSS edition. You can also upgrade to the [full Enterprise feature set](/products/enterprise/?utm_source=grafana-install-page), which has support for [Enterprise plugins](/grafana/plugins/?enterprise=1&utcm_source=grafana-install-page). To install Grafana using the RPM repository, complete the following steps: 1. Import the GPG key: ```bash wget -q -O gpg.key https://rpm.grafana.com/gpg.key sudo rpm --import gpg.key ``` 1. Use zypper to add the grafana repo. ```bash sudo zypper addrepo https://rpm.grafana.com grafana ``` 1. To install Grafana OSS, run the following command: ```bash sudo zypper install grafana ``` 1. To install Grafana Enterprise, run the following command: ```bash sudo zypper install grafana-enterprise ``` ## Install the Grafana RPM package manually If you install Grafana manually using RPM, then you must manually update Grafana for each new version. This method varies according to which Linux OS you are running. **Note:** The RPM files are signed. You can verify the signature with this [public GPG key](https://rpm.grafana.com/gpg.key). 1. On the [Grafana download page](/grafana/download), select the Grafana version you want to install. - The most recent Grafana version is selected by default. - The **Version** field displays only finished releases. If you want to install a beta version, click **Nightly Builds** and then select a version. 1. Select an **Edition**. - **Enterprise** - Recommended download. Functionally identical to the open source version, but includes features you can unlock with a license if you so choose. - **Open Source** - Functionally identical to the Enterprise version, but you will need to download the Enterprise version if you want Enterprise features. 1. Depending on which system you are running, click **Linux** or **ARM**. 1. Copy and paste the RPM package URL and the local RPM package information from the installation page into the pattern shown below, then run the commands. ```bash sudo zypper install initscripts urw-fonts wget wget <rpm package url> sudo rpm -Uvh <local rpm package> ``` ## Install Grafana as a standalone binary Complete the following steps to install Grafana using the standalone binaries: 1. Navigate to the [Grafana download page](/grafana/download). 1. Select the Grafana version you want to install. - The most recent Grafana version is selected by default. 
- The **Version** field displays only tagged releases. If you want to install a nightly build, click **Nightly Builds** and then select a version. 1. Select an **Edition**. - **Enterprise:** This is the recommended version. It is functionally identical to the open-source version but includes features you can unlock with a license if you so choose. - **Open Source:** This version is functionally identical to the Enterprise version, but you will need to download the Enterprise version if you want Enterprise features. 1. Depending on which system you are running, click the **Linux** or **ARM** tab on the [download page](/grafana/download). 1. Copy and paste the code from the [download page](/grafana/download) into your command line and run. ## Uninstall on SUSE or openSUSE To uninstall Grafana, run the following commands in a terminal window: 1. If you configured Grafana to run with systemd, stop the systemd service for Grafana server: ```shell sudo systemctl stop grafana-server ``` 1. If you configured Grafana to run with init.d, stop the init.d service for Grafana server: ```shell sudo service grafana-server stop ``` 1. To uninstall Grafana OSS: ```shell sudo zypper remove grafana ``` 1. To uninstall Grafana Enterprise: ```shell sudo zypper remove grafana-enterprise ``` 1. Optional: To remove the Grafana repository: ```shell sudo zypper removerepo grafana ``` ## Next steps Refer to [Start the Grafana server]().
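Before starting the server, you can confirm that the repository and package are registered with zypper. This is a minimal sketch; it assumes the repository alias `grafana` added with `zypper addrepo` above, and uses `grafana-enterprise` as the package name (substitute `grafana` for the OSS edition).

```bash
# show details for the grafana repository alias
zypper lr grafana

# show the installed (or available) package version
zypper info grafana-enterprise
```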
--- aliases: - ../../installation/kubernetes/ description: Guide for deploying Grafana on Kubernetes labels: products: - enterprise - oss menuTitle: Grafana on Kubernetes title: Deploy Grafana on Kubernetes weight: 500 --- # Deploy Grafana on Kubernetes On this page, you will find instructions for installing and running Grafana on Kubernetes using Kubernetes manifests for the setup. If Helm is your preferred option, refer to [Grafana Helm community charts](https://github.com/grafana/helm-charts). Watch this video to learn more about installing Grafana on Kubernetes: ## Before you begin To follow this guide: - You need the latest version of [Kubernetes](https://kubernetes.io/) running either locally or remotely on a public or private cloud. - If you plan to use it in a local environment, you can use various Kubernetes options such as [minikube](https://minikube.sigs.k8s.io/docs/), [kind](https://kind.sigs.k8s.io/), [Docker Desktop](https://docs.docker.com/desktop/kubernetes/), and others. - If you plan to use Kubernetes in a production setting, it's recommended to utilize managed cloud services like [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine), [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/), or [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/products/kubernetes-service/). ## System requirements This section provides minimum hardware and software requirements. ### Minimum Hardware Requirements - Disk space: 1 GB - Memory: 750 MiB (approx 750 MB) - CPU: 250m (approx 0.25 cores) ### Supported databases For a list of supported databases, refer to [supported databases](/docs/grafana/latest/setup-grafana/installation#supported-databases). ### Supported web browsers For a list of support web browsers, refer to [supported web browsers](/docs/grafana/latest/setup-grafana/installation#supported-web-browsers). Enable port `3000` in your network environment, as this is the Grafana default port. ## Deploy Grafana OSS on Kubernetes This section explains how to install Grafana OSS using Kubernetes. If you want to install Grafana Enterprise on Kubernetes,Β refer to [Deploy Grafana Enterprise on Kubernetes](#deploy-grafana-enterprise-on-kubernetes). If you deploy an application in Kubernetes, it will use the default namespace which may already have other applications running. This can result in conflicts and other issues. It is recommended to create a new namespace in Kubernetes to better manage, organize, allocate, and manage cluster resources. For more information about Namespaces, refer to the official [Kubernetes documentation](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/). 1. To create a namespace, run the following command: ```bash kubectl create namespace my-grafana ``` In this example, the namespace is `my-grafana` 1. To verify and view the newly created namespace, run the following command: ```bash kubectl get namespace my-grafana ``` The output of the command provides more information about the newly created namespace. 1. Create a YAML manifest file named `grafana.yaml`. This file will contain the necessary code for deployment. ```bash touch grafana.yaml ``` In the next step you define the following three objects in the YAML file. | Object | Description | | ----------------------------- | ----------------------------------------------------------------------------------------------------------------------------- | | Persistent Volume Claim (PVC) | This object stores the data. 
| | Service | This object provides network access to the Pod defined in the deployment. | | Deployment | This object is responsible for creating the pods, ensuring they stay up to date, and managing Replicaset and Rolling updates. | 1. Copy and paste the following contents and save it in the `grafana.yaml` file. ```yaml --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: grafana-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: grafana name: grafana spec: selector: matchLabels: app: grafana template: metadata: labels: app: grafana spec: securityContext: fsGroup: 472 supplementalGroups: - 0 containers: - name: grafana image: grafana/grafana:latest imagePullPolicy: IfNotPresent ports: - containerPort: 3000 name: http-grafana protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /robots.txt port: 3000 scheme: HTTP initialDelaySeconds: 10 periodSeconds: 30 successThreshold: 1 timeoutSeconds: 2 livenessProbe: failureThreshold: 3 initialDelaySeconds: 30 periodSeconds: 10 successThreshold: 1 tcpSocket: port: 3000 timeoutSeconds: 1 resources: requests: cpu: 250m memory: 750Mi volumeMounts: - mountPath: /var/lib/grafana name: grafana-pv volumes: - name: grafana-pv persistentVolumeClaim: claimName: grafana-pvc --- apiVersion: v1 kind: Service metadata: name: grafana spec: ports: - port: 3000 protocol: TCP targetPort: http-grafana selector: app: grafana sessionAffinity: None type: LoadBalancer ``` 1. Run the following command to send the manifest to the Kubernetes API server: ```bash kubectl apply -f grafana.yaml --namespace=my-grafana ``` This command creates the PVC, Deployment, and Service objects. 1. Complete the following steps to verify the deployment status of each object. a. For PVC, run the following command: ```bash kubectl get pvc --namespace=my-grafana -o wide ``` b. For Deployment, run the following command: ```bash kubectl get deployments --namespace=my-grafana -o wide ``` c. For Service, run the following command: ```bash kubectl get svc --namespace=my-grafana -o wide ``` ## Access Grafana on Managed K8s Providers In this task, you access Grafana deployed on a Managed Kubernetes provider using a web browser. Accessing Grafana via a web browser is straightforward if it is deployed on a Managed Kubernetes Provider as it uses the cloud provider’s **LoadBalancer** to which the external load balancer routes are automatically created. 1. Run the following command to obtain the deployment information: ```bash kubectl get all --namespace=my-grafana ``` The output returned should look similar to the following: ```bash NAME READY STATUS RESTARTS AGE pod/grafana-69946c9bd6-kwjb6 1/1 Running 0 7m27s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/grafana LoadBalancer 10.5.243.226 1.120.130.330 3000:31171/TCP 7m27s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/grafana 1/1 1 1 7m29s NAME DESIRED CURRENT READY AGE replicaset.apps/grafana-69946c9bd6 1 1 1 7m30s ``` 1. Identify the **EXTERNAL-IP** value in the output and type it into your browser. The Grafana sign-in page appears. 1. To sign in, enter `admin` for both the username and password. 1. If you do not see the EXTERNAL-IP, complete the following steps: a. Run the following command to do a port-forwarding of the Grafana service on port `3000`. 
```bash kubectl port-forward service/grafana 3000:3000 --namespace=my-grafana ``` For more information about port-forwarding, refer to [Use Port Forwarding to Access Applications in a Cluster](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/). b. Navigate to `localhost:3000` in your browser. The Grafana sign-in page appears. c. To sign in, enter `admin` for both the username and password. ## Access Grafana using minikube There are multiple ways to access the Grafana UI on a web browser when using minikube. For more information about minikube, refer to [How to access applications running within minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/). This section lists the two most common options for accessing an application running in minikube. ### Option 1: Expose the service This option uses the `type: LoadBalancer` in the `grafana.yaml` service manifest, which makes the service accessible through the `minikube service` command. For more information, refer to [minikube Service command usage](https://minikube.sigs.k8s.io/docs/commands/service/). 1. Run the following command to obtain the Grafana service IP: ```bash minikube service grafana --namespace=my-grafana ``` The output returns the Kubernetes URL for service in your local cluster. ```bash |------------|---------|-------------|------------------------------| | NAMESPACE | NAME | TARGET PORT | URL | |------------|---------|-------------|------------------------------| | my-grafana | grafana | 3000 | http://192.168.122.144:32182 | |------------|---------|-------------|------------------------------| Opening service my-grafana/grafana in default browser... http://192.168.122.144:32182 ``` 1. Run a `curl` command to verify whether a given connection should work in a browser under ideal circumstances. ```bash curl 192.168.122.144:32182 ``` The following example output shows that an endpoint has been located: `<a href="/login">Found</a>.` 1. Access the Grafana UI in the browser using the provided IP:Port from the command above. For example `192.168.122.144:32182` The Grafana sign-in page appears. 1. To sign in to Grafana, enter `admin` for both the username and password. ### Option 2: Use port forwarding If Option 1 does not work in your minikube environment (this mostly depends on the network), then as an alternative you can use the port forwarding option for the Grafana service on port `3000`. For more information about port forwarding, refer to [Use Port Forwarding to Access Applications in a Cluster](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/). 1. To find the minikube IP address, run the following command: ```bash minikube ip ``` The output contains the IP address that you use to access the Grafana Pod during port forwarding. A Pod is the smallest deployment unit in Kubernetes and is the core building block for running applications in a Kubernetes cluster. For more information about Pods, refer to [Pods](https://kubernetes.io/docs/concepts/workloads/pods/). 1. To obtain the Grafana Pod information, run the following command: ```bash kubectl get pods --namespace=my-grafana ``` The output should look similar to the following: ```bash NAME READY STATUS RESTARTS AGE grafana-58445b6986-dxrrw 1/1 Running 0 9m54s ``` The output shows the Grafana POD name in the `NAME` column, that you use for port forwarding. 1. 
Run the following command for enabling the port forwarding on the POD: ```bash kubectl port-forward pod/grafana-58445b6986-dxrrw --namespace=my-grafana --address 0.0.0.0 3000:3000 ``` 1. To access the Grafana UI on the web browser, type the minikube IP along with the forwarded port. For example `192.168.122.144:3000` The Grafana sign-in page appears. 1. To sign in to Grafana, enter `admin` for both the username and password. ## Update an existing deployment using a rolling update strategy Rolling updates enable deployment updates to take place with no downtime by incrementally updating Pods instances with new ones. The new Pods will be scheduled on nodes with available resources. For more information about rolling updates, refer to [Performing a Rolling Update](https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/). The following steps use the `kubectl annotate` command to add the metadata and keep track of the deployment. For more information about `kubectl annotate`, refer to [kubectl annotate documentation](https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_annotate/). Instead of using the `annotate` flag, you can still use the `--record` flag. However, it has been deprecated and will be removed in the future version of Kubernetes. See: https://github.com/kubernetes/kubernetes/issues/40422 1. To view the current status of the rollout, run the following command: ```bash kubectl rollout history deployment/grafana --namespace=my-grafana ``` The output will look similar to this: ```bash deployment.apps/grafana REVISION CHANGE-CAUSE 1 NONE ``` The output shows that nothing has been updated or changed after applying the `grafana.yaml` file. 1. To add metadata to keep record of the initial deployment, run the following command: ```bash kubectl annotate deployment/grafana kubernetes.io/change-cause='deployed the default base yaml file' --namespace=my-grafana ``` 1. To review the rollout history and verify the changes, run the following command: ```bash kubectl rollout history deployment/grafana --namespace=my-grafana ``` You should see the updated information that you added in the `CHANGE-CAUSE` earlier. ### Change Grafana image version 1. To change the deployed Grafana version, run the following `kubectl edit` command: ```bash kubectl edit deployment grafana --namespace=my-grafana ``` 1. In the editor, change the container image under the `kind: Deployment` section. For example: - From - `yaml image: grafana/grafana-oss:10.0.1` - To - `yaml image: grafana/grafana-oss-dev:10.1.0-124419pre` 1. Save the changes. Once you save the file, you receive a message similar to the following: ```bash deployment.apps/grafana edited ``` This means that the changes have been applied. 1. To verify that the rollout on the cluster is successful, run the following command: ```bash kubectl rollout status deployment grafana --namespace=my-grafana ``` A successful deployment rollout means that the Grafana Dev cluster is now available. 1. To check the statuses of all deployed objects, run the following command and include the `-o wide` flag to get more detailed output: ```bash kubectl get all --namespace=my-grafana -o wide ``` You should see the newly deployed `grafana-oss-dev` image. 1. To verify it, access the Grafana UI in the browser using the provided IP:Port from the command above. The Grafana sign-in page appears. 1. To sign in to Grafana, enter `admin` for both the username and password. 1. In the top-right corner, click the help icon. The version information appears. 1. 
Add the `change cause` metadata to keep track of things using the commands: ```bash kubectl annotate deployment grafana --namespace=my-grafana kubernetes.io/change-cause='using grafana-oss-dev:10.1.0-124419pre for testing' ``` 1. To verify, run the `kubectl rollout history` command: ```bash kubectl rollout history deployment grafana --namespace=my-grafana ``` You will see an output similar to this: ```bash deployment.apps/grafana REVISION CHANGE-CAUSE 1 deploying the default yaml 2 using grafana-oss-dev:10.1.0-124419pre for testing ``` This means that `REVISION#2` is the current version. The last line of the `kubectl rollout history deployment` command output is the one which is currently active and running on your Kubernetes environment. ### Roll back a deployment When the Grafana deployment becomes unstable due to crash looping, bugs, and so on, you can roll back a deployment to an earlier version (a `REVISION`). By default, Kubernetes deployment rollout history remains in the system so that you can roll back at any time. For more information, refer toΒ [Rolling Back to a Previous Revision](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-back-to-a-previous-revision). 1. To list all possible `REVISION` values, run the following command: ```bash kubectl rollout history deployment grafana --namespace=my-grafana ``` 1. To roll back to a previous version, run the `kubectl rollout undo` command and provide a revision number. Example: To roll back to a previous version, specify the `REVISION` number, which appears after you run the `kubectl rollout history deployment` command, in the `--to-revision` parameter. ```bash kubectl rollout undo deployment grafana --to-revision=1 --namespace=my-grafana ``` 1. To verify that the rollback on the cluster is successful, run the following command: ```bash kubectl rollout status deployment grafana --namespace=my-grafana ``` 1. Access the Grafana UI in the browser using the provided IP:Port from the command above. The Grafana sign-in page appears. 1. To sign in to Grafana, enter `admin` for both the username and password. 1. In the top-right corner, click the help icon to display the version number. 1. To see the new rollout history, run the following command: ```bash kubectl rollout history deployment grafana --namespace=my-grafana ``` If you need to go back to any other `REVISION`, just repeat the steps above and use the correct revision number in the `--to-revision` parameter. ## Provision Grafana resources using configuration files Provisioning can add, update, or delete resources specified in your configuration files when Grafana starts. For detailed information, refer to [Grafana Provisioning](/docs/grafana/<GRAFANA_VERSION>/administration/provisioning). This section outlines general instructions for provisioning Grafana resources within Kubernetes, using a persistent volume to supply the configuration files to the Grafana pod. 1. Add a new `PersistentVolumeClaim` to the `grafana.yaml` file. ```yaml --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: grafana-provisioning-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Mi ``` 1. In the `grafana.yaml` file, mount the persistent volume into `/etc/grafana/provisioning` as follows. ```yaml ... volumeMounts: - mountPath: /etc/grafana/provisioning name: grafana-provisioning-pv ... volumes: - name: grafana-provisioning-pv persistentVolumeClaim: claimName: grafana-provisioning-pvc ... ``` 1. Find or create the provision resources you want to add. 
For instance, create a `alerting.yaml` file adding a mute timing (alerting resource). ```yaml apiVersion: 1 muteTimes: - orgId: 1 name: MuteWeekends time_intervals: - weekdays: [saturday, sunday] ``` 1. By default, configuration files for alerting resources need to be placed in the `provisioning/alerting` directory. Save the `alerting.yaml` file in a directory named `alerting`, as we will next supply this `alerting` directory to the `/etc/grafana/provisioning` folder of the Grafana pod. 1. Verify first the content of the provisioning directory in the running Grafana pod. ```bash kubectl exec -n my-grafana <pod_name> -- ls /etc/grafana/provisioning/ ``` ```bash kubectl exec -n my-grafana <pod_name> -- ls /etc/grafana/provisioning/alerting ``` Because the `alerting` folder is not available yet, the last command should output a `No such file or directory` error. 1. Copy the local `alerting` directory to `/etc/grafana/provisioning/` in the Grafana pod. ```bash kubectl cp alerting my-grafana/<pod_name>:/etc/grafana/provisioning/ ``` You can follow the same process to provision additional Grafana resources by supplying the following folders: - `provisioning/dashboards` - `provisioning/datasources` - `provisioning/plugins` 1. Verify the `alerting` directory in the running Grafana pod includes the `alerting.yaml` file. ```bash kubectl exec -n my-grafana <pod_name> -- ls /etc/grafana/provisioning/alerting ``` 1. Restart the Grafana pod to provision the resources. ```bash kubectl rollout restart -n my-grafana deployment --selector=app=grafana ``` Note that `rollout restart` kills the previous pod and scales a new pod. When the old pod terminates, you may have to enable port-forwarding in the new pod. For instructions, refer to the previous sections about port forwarding in this guide. 1. Verify the Grafana resources are properly provisioned within the Grafana instance. ## Troubleshooting This section includes troubleshooting tips you might find helpful when deploying Grafana on Kubernetes. ### Collecting logs It is important to view the Grafana server logs while troubleshooting any issues. 1. To check the Grafana logs, run the following command: ```bash # dump Pod logs for a Deployment (single-container case) kubectl logs --namespace=my-grafana deploy/grafana ``` 1. If you have multiple containers running in the deployment, run the following command to obtain the logs only for the Grafana deployment: ```bash # dump Pod logs for a Deployment (multi-container case) kubectl logs --namespace=my-grafana deploy/grafana -c grafana ``` For more information about accessing Kubernetes application logs, refer to [Pods](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#interacting-with-running-pods) and [Deployments](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#interacting-with-deployments-and-services). ### Increasing log levels to debug mode By default, the Grafana log level is set to `info`, but you can increase it to `debug` mode to fetch information needed to diagnose and troubleshoot a problem. For more information about Grafana log levels, refer to [Configuring logs](/docs/grafana/latest/setup-grafana/configure-grafana#log). The following example uses the Kubernetes ConfigMap which is an API object that stores non-confidential data in key-value pairs. For more information, refer to [Kubernetes ConfigMap Concept](https://kubernetes.io/docs/concepts/configuration/configmap/). 1. 
Create an empty file named `grafana.ini` and add the following:

```ini
[log]
; Either "debug", "info", "warn", "error", "critical"; default is "info"
; change the level from info to debug
level = debug
```

This example adds only the `[log]` section of the configuration file. You can refer to the [Configure Grafana](/docs/grafana/latest/setup-grafana/configure-grafana/) documentation to view all the default configuration settings.

1. To add the configuration file into the Kubernetes cluster via the ConfigMap object, run the following command:

   ```bash
   kubectl create configmap ge-config --from-file=/path/to/file/grafana.ini --namespace=my-grafana
   ```

1. To verify the ConfigMap object creation, run the following command:

   ```bash
   kubectl get configmap --namespace=my-grafana
   ```

1. Open the `grafana.yaml` file and, in the Deployment section, provide the mount path to the custom configuration (`/etc/grafana`) and reference the newly created ConfigMap for it.

   ```yaml
   ---
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     labels:
       app: grafana
     name: grafana
   # the rest of the code remains the same.
   ...
   ....
   ...
             requests:
               cpu: 250m
               memory: 750Mi
           volumeMounts:
             - mountPath: /var/lib/grafana
               name: grafana-pv
             # This is to mount the volume for the custom configuration
             - mountPath: /etc/grafana
               name: ge-config
         volumes:
           - name: grafana-pv
             persistentVolumeClaim:
               claimName: grafana-pvc
           # This is to provide the reference to the ConfigMap for the volume
           - name: ge-config
             configMap:
               name: ge-config
   ```

1. Deploy the manifest using the following `kubectl apply` command:

   ```bash
   kubectl apply -f grafana.yaml --namespace=my-grafana
   ```

1. To verify the status, run the following commands:

   ```bash
   # first check the rollout status
   kubectl rollout status deployment grafana --namespace=my-grafana

   # then check the deployment and configMap information
   kubectl get all --namespace=my-grafana
   ```

1. To verify the change, access the Grafana UI in the browser using the provided IP:Port. The Grafana sign-in page appears.

1. To sign in to Grafana, enter `admin` for both the username and password.

1. Navigate to **Server Admin > Settings** and then search for `log`. You should see that the level is set to `debug`.

### Using the --dry-run command

You can use the Kubernetes `--dry-run` command to send requests to modifying endpoints and determine if the request would have succeeded. Performing a dry run can be useful for catching errors or unintended consequences before they occur. For more information, refer to [Kubernetes Dry-run](https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/576-dry-run/README.md).

Example:

The following example shows how to perform a dry run when you make changes to the `grafana.yaml`, such as using a new image version or adding new labels, and you want to determine whether there are syntax errors or conflicts.

To perform a dry run, run the following command:

```bash
kubectl apply -f grafana.yaml --dry-run=server --namespace=my-grafana
```

If there are no errors, then the output will look similar to this:

```bash
persistentvolumeclaim/grafana-pvc unchanged (server dry run)
deployment.apps/grafana unchanged (server dry run)
service/grafana unchanged (server dry run)
```

If there are errors or warnings, you will see them in the terminal.

## Remove Grafana

If you want to remove any of the Grafana deployment objects, use the `kubectl delete` command.

1.
If you want to remove the complete Grafana deployment, run the following command: ```bash kubectl delete -f grafana.yaml --namespace=my-grafana ``` This command deletes the deployment, persistentvolumeclaim, and service objects. 1. To delete the ConfigMap, run the following command: ```bash kubectl delete configmap ge-config --namespace=my-grafana ``` ## Deploy Grafana Enterprise on Kubernetes The process for deploying Grafana Enterprise is almost identical to the preceding process, except for additional steps that are required for adding your license file. ### Obtain Grafana Enterprise license To run Grafana Enterprise, you need a valid license. To obtain a license, [contact a Grafana Labs representative](/contact?about=grafana-enterprise). This topic assumes that you have a valid license in a `license.jwt` file. Associate your license with a URL that you can use later in the topic. ### Create license secret Create a Kubernetes secret from your license file using the following command: ```bash kubectl create secret generic ge-license --from-file=/path/to/your/license.jwt ``` ### Create Grafana Enterprise configuration 1. Create a Grafana configuration file with the name `grafana.ini` 1. Paste the following YAML contents into the file you created: ```yaml [enterprise] license_path = /etc/grafana/license/license.jwt [server] root_url =/your/license/root/url ``` 1. Update the `root_url` field to the url associated with the license provided to you. ### Create Configmap for Grafana Enterprise configuration Create a Kubernetes Configmap from your `grafana.ini` file with the following command: ```bash kubectl create configmap ge-config --from-file=/path/to/your/grafana.ini ``` ### Create Grafana Enterprise Kubernetes manifest 1. Create a `grafana.yaml` file, and copy-and-paste the following content into it. The following YAML is identical to the one for a Grafana installation, except for the additional references to the Configmap that contains your Grafana configuration file and the secret that has your license. ```yaml --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: grafana-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: grafana name: grafana spec: selector: matchLabels: app: grafana template: metadata: labels: app: grafana spec: securityContext: fsGroup: 472 supplementalGroups: - 0 containers: - image: grafana/grafana-enterprise:latest imagePullPolicy: IfNotPresent name: grafana ports: - containerPort: 3000 name: http-grafana protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /robots.txt port: 3000 scheme: HTTP initialDelaySeconds: 10 periodSeconds: 30 successThreshold: 1 timeoutSeconds: 2 resources: limits: memory: 4Gi requests: cpu: 100m memory: 2Gi volumeMounts: - mountPath: /var/lib/grafana name: grafana-pv - mountPath: /etc/grafana name: ge-config - mountPath: /etc/grafana/license name: ge-license volumes: - name: grafana-pv persistentVolumeClaim: claimName: grafana-pvc - name: ge-config configMap: name: ge-config - name: ge-license secret: secretName: ge-license --- apiVersion: v1 kind: Service metadata: name: grafana spec: ports: - port: 3000 protocol: TCP targetPort: http-grafana selector: app: grafana sessionAffinity: None type: LoadBalancer ``` If you use `LoadBalancer` in the Service and depending on your cloud platform and network configuration, doing so might expose your Grafana instance to the Internet. 
To eliminate this risk, set the Service `type` to `ClusterIP` so that Grafana is only reachable from within the cluster it is deployed to.

1. To send the manifest to the Kubernetes API server, run the following command:

   `kubectl apply -f grafana.yaml`

1. To access the Grafana instance, run the following port-forwarding command:

   `kubectl port-forward service/grafana 3000:3000`

1. Navigate to `localhost:3000` in your browser. You should see the Grafana sign-in page.

1. Use `admin` for both the username and password to log in.

1. To verify that you are working with an Enterprise license, scroll to the bottom of the page, where you should see `Enterprise (Licensed)`.
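If the sign-in page loads but the footer still shows the open-source edition, it is usually worth confirming that the license secret and the configuration ConfigMap are actually mounted into the pod. The following commands are a minimal check, assuming the object names used in this section (`ge-license`, `ge-config`, a deployment named `grafana`) and the default namespace:

```bash
# Confirm the Kubernetes objects created earlier exist
kubectl get secret ge-license
kubectl get configmap ge-config

# Confirm the license file and grafana.ini are mounted at the paths grafana.ini points to
kubectl exec deploy/grafana -- ls /etc/grafana/license
kubectl exec deploy/grafana -- cat /etc/grafana/grafana.ini
```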
--- aliases: - ../../installation/helm/ description: Guide for deploying Grafana using Helm Charts labels: products: - oss menuTitle: Grafana on Helm Charts title: Deploy Grafana using Helm Charts weight: 500 --- # Deploy Grafana using Helm Charts This topic includes instructions for installing and running Grafana on Kubernetes using Helm Charts. [Helm](https://helm.sh/) is an open-source command line tool used for managing Kubernetes applications. It is a graduate project in the [CNCF Landscape](https://www.cncf.io/projects/helm/). The Grafana open-source community offers Helm Charts for running it on Kubernetes. Please be aware that the code is provided without any warranties. If you encounter any problems, you can report them to the [Official GitHub repository](https://github.com/grafana/helm-charts/). Watch this video to learn more about installing Grafana using Helm Charts: ## Before you begin To install Grafana using Helm, ensure you have completed the following: - Install a Kubernetes server on your machine. For information about installing Kubernetes, refer to [Install Kubernetes](https://kubernetes.io/docs/setup/). - Install the latest stable version of Helm. For information on installing Helm, refer to [Install Helm](https://helm.sh/docs/intro/install/). ## Install Grafana using Helm When you install Grafana using Helm, you complete the following tasks: 1. Set up the Grafana Helm repository, which provides a space in which you will install Grafana. 1. Deploy Grafana using Helm, which installs Grafana into a namespace. 1. Accessing Grafana, which provides steps to sign into Grafana. ### Set up the Grafana Helm repository To set up the Grafana Helm repository so that you download the correct Grafana Helm charts on your machine, complete the following steps: 1. To add the Grafana repository, use the following command syntax: `helm repo add <DESIRED-NAME> <HELM-REPO-URL>` The following example adds the `grafana` Helm repository. ```bash helm repo add grafana https://grafana.github.io/helm-charts ``` 1. Run the following command to verify the repository was added: ```bash helm repo list ``` After you add the repository, you should see an output similar to the following: ```bash NAME URL grafana https://grafana.github.io/helm-charts ``` 1. Run the following command to update the repository to download the latest Grafana Helm charts: ```bash helm repo update ``` ### Deploy the Grafana Helm charts After you have set up the Grafana Helm repository, you can start to deploy it on your Kubernetes cluster. When you deploy Grafana Helm charts, use a separate namespace instead of relying on the default namespace. The default namespace might already have other applications running, which can lead to conflicts and other potential issues. When you create a new namespace in Kubernetes, you can better organize, allocate, and manage cluster resources. For more information, refer to [Namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/). 1. To create a namespace, run the following command: ```bash kubectl create namespace monitoring ``` You will see an output similar to this, which means that the namespace has been successfully created: ```bash namespace/monitoring created ``` 1. Search for the official `grafana/grafana` repository using the command: `helm search repo <repo-name/package-name>` For example, the following command provides a list of the Grafana Helm Charts from which you will install the latest version of the Grafana chart. 
```bash helm search repo grafana/grafana ``` 1. Run the following command to deploy the Grafana Helm Chart inside your namespace. ```bash helm install my-grafana grafana/grafana --namespace monitoring ``` Where: - `helm install`: Installs the chart by deploying it on the Kubernetes cluster - `my-grafana`: The logical chart name that you provided - `grafana/grafana`: The repository and package name to install - `--namespace`: The Kubernetes namespace (i.e. `monitoring`) where you want to deploy the chart 1. To verify the deployment status, run the following command and verify that `deployed` appears in the **STATUS** column: ```bash helm list -n monitoring ``` You should see an output similar to the following: ```bash NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION my-grafana monitoring 1 2024-01-13 23:06:42.737989554 +0000 UTC deployed grafana-6.59.0 10.1.0 ``` 1. To check the overall status of all the objects in the namespace, run the following command: ```bash kubectl get all -n monitoring ``` If you encounter errors or warnings in the **STATUS** column, check the logs and refer to the Troubleshooting section of this documentation. ### Access Grafana This section describes the steps you must complete to access Grafana via web browser. 1. Run the following `helm get notes` command: ```bash helm get notes my-grafana -n monitoring ``` This command will print out the chart notes. You will the output `NOTES` that provide the complete instructions about: - How to decode the login password for the Grafana admin account - Access Grafana service to the web browser 1. To get the Grafana admin password, run the command as follows: ```bash kubectl get secret --namespace monitoring my-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo ``` It will give you a decoded `base64` string output which is the password for the admin account. 1. Save the decoded password to a file on your machine. 1. To access Grafana service on the web browser, run the following command: ```bash export POD_NAME=$(kubectl get pods --namespace monitoring -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=my-grafana" -o jsonpath="{.items[0].metadata.name}") ``` The above command will export a shell variable named `POD_NAME` that will save the complete name of the pod which got deployed. 1. Run the following port forwarding command to direct the Grafana pod to listen to port `3000`: ```bash kubectl --namespace monitoring port-forward $POD_NAME 3000 ``` For more information about port-forwarding, refer to [Use Port Forwarding to Access Applications in a Cluster](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/). 1. Navigate to `127.0.0.1:3000` in your browser. 1. The Grafana sign-in page appears. 1. To sign in, enter `admin` for the username. 1. For the password paste it which you have saved to a file after decoding it earlier. ## Customize Grafana default configuration Helm is a popular package manager for Kubernetes. It bundles Kubernetes resource manifests to be re-used across different environments. These manifests are written in a templating language, allowing you to provide configuration values via `values.yaml` file, or in-line using Helm, to replace the placeholders in the manifest where these configurations should reside. The `values.yaml` file allows you to customize the chart's configuration by specifying values for various parameters such as image versions, resource limits, service configurations, etc. 
By modifying the values in the `values.yaml` file, you can tailor the deployment of a Helm chart to your specific requirements by using the `helm install` or `helm upgrade` commands. For more information about configuring Helm, refer to [Values Files](https://helm.sh/docs/chart_template_guide/values_files/).

### Download the values.yaml file

In order to make any configuration changes, download the `values.yaml` file from the Grafana Helm Charts repository:

https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml

Depending on your use case requirements, you can use a single YAML file that contains your configuration changes or you can create multiple YAML files.

### Enable persistent storage **(recommended)**

By default, persistent storage is disabled, which means that Grafana uses ephemeral storage, and all data is stored within the container's file system. This data is lost if the container is stopped or restarted, or if the container crashes.

It is highly recommended that you enable persistent storage in Grafana Helm charts if you want to ensure that your data persists and is not lost in case of container restarts or failures. Enabling persistent storage in Grafana Helm charts provides a reliable solution for running Grafana in production environments.

To enable persistent storage in the Grafana Helm charts, complete the following steps:

1. Open the `values.yaml` file in your favorite editor.

1. Under the `persistence` section, change the `enabled` flag from `false` to `true`:

   ```yaml
   .......
   ............
   ......
   persistence:
     type: pvc
     enabled: true
     # storageClassName: default
   .......
   ............
   ......
   ```

1. Run the following `helm upgrade` command, specifying the `values.yaml` file, to make the changes take effect:

   ```bash
   helm upgrade my-grafana grafana/grafana -f values.yaml -n monitoring
   ```

The PVC now stores all your data, such as dashboards, data sources, and so on.

### Install plugins (e.g. Zabbix app, Clock panel, etc.)

You can install plugins in Grafana from the official and community [plugins page](https://grafana.com/grafana/plugins). These plugins allow you to add new visualization types, data sources, and applications to help you better visualize your data.

Grafana currently supports three types of plugins: panel, data source, and app. For more information on managing plugins, refer to [Plugin Management](https://grafana.com/docs/grafana/latest/administration/plugin-management/).

To install plugins in the Grafana Helm Charts, complete the following steps:

1. Open the `values.yaml` file in your favorite editor.

1. Find the line that says `plugins:` and, under that section, define the plugins that you want to install:

   ```yaml
   .......
   ............
   ......
   plugins:
     # This example installs two plugins. Keep the indentation exactly as shown here.
     - alexanderzobnin-zabbix-app
     - grafana-clock-panel
   .......
   ............
   ......
   ```

1. Save the changes and use the `helm upgrade` command to install these plugins:

   ```bash
   helm upgrade my-grafana grafana/grafana -f values.yaml -n monitoring
   ```

1. Navigate to `127.0.0.1:3000` in your browser.

1. Log in with the admin credentials when the Grafana sign-in page appears.

1. In the Grafana UI, navigate to **Administration > Plugins**.

1. Search for the plugins you added; they should be marked as installed.
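Alternatively, if you prefer not to edit `values.yaml`, Helm can accept list values in-line with `--set`. The following is a minimal sketch using the same two plugin IDs as above; note that values not passed on the command line fall back to the chart defaults, so combine it with `-f values.yaml` if you have other customizations:

```bash
# Install the same two plugins without editing values.yaml.
# Combine with -f values.yaml if you also rely on other customized values.
helm upgrade my-grafana grafana/grafana \
  --set "plugins={alexanderzobnin-zabbix-app,grafana-clock-panel}" \
  -n monitoring
```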
### Configure a Private CA (Certificate Authority)

In many enterprise networks, TLS certificates are issued by a private certificate authority and are not trusted by default (using the provided OS trust chain). If your Grafana instance needs to interact with services exposing certificates issued by these private CAs, then you need to ensure Grafana trusts the root certificate.

You might need to configure this if you:

- have plugins that require connectivity to other self-hosted systems. For example, if you've installed the Grafana Enterprise Metrics, Logs, or Traces (GEM, GEL, GET) plugins, and your GEM (or GEL/GET) cluster is using a private certificate.
- want to connect to data sources which are listening on HTTPS with a private certificate.
- are using a backend database for persistence, or a caching service, that uses private certificates for encryption in transit.

In some cases you can specify a self-signed certificate within Grafana (such as in some data sources), or choose to skip TLS certificate validation (this is not recommended unless absolutely necessary).

A simple solution that should work across your entire instance (plugins, data sources, and backend connections) is to add your self-signed CA certificate to your Kubernetes deployment:

1. Create a ConfigMap containing the certificate, and deploy it to your Kubernetes cluster:

   ```yaml
   # grafana-ca-configmap.yaml
   ---
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: grafana-ca-cert
   data:
     ca.pem: |
       -----BEGIN CERTIFICATE-----
       (rest of the CA cert)
       -----END CERTIFICATE-----
   ```

   ```bash
   kubectl apply --filename grafana-ca-configmap.yaml --namespace monitoring
   ```

1. Open the Helm `values.yaml` file in your favorite editor.

1. Find the line that says `extraConfigmapMounts:` and, under that section, specify the additional ConfigMap that you want to mount:

   ```yaml
   .......
   ............
   ......
   extraConfigmapMounts:
     - name: ca-certs-configmap
       mountPath: /etc/ssl/certs/ca.pem
       subPath: ca.pem
       configMap: grafana-ca-cert
       readOnly: true
   .......
   ............
   ......
   ```

1. Save the changes and use the `helm upgrade` command to update your Grafana deployment and mount the new ConfigMap:

   ```bash
   helm upgrade my-grafana grafana/grafana --values values.yaml --namespace monitoring
   ```

## Troubleshooting

This section includes troubleshooting tips you might find helpful when deploying Grafana on Kubernetes via Helm.

### Collect logs

It is important to view the Grafana server logs while troubleshooting any issues. To check the Grafana logs, run the following command:

```bash
# dump Pod logs for a Deployment (single-container case)
kubectl logs --namespace=monitoring deploy/my-grafana
```

If the pod runs multiple containers, run the following command to obtain the logs of the Grafana container only:

```bash
# dump Pod logs for a Deployment (multi-container case)
kubectl logs --namespace=monitoring deploy/my-grafana -c grafana
```

For more information about accessing Kubernetes application logs, refer to [Pods](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#interacting-with-running-pods) and [Deployments](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#interacting-with-deployments-and-services).

### Increase log levels

By default, the Grafana log level is set to `info`, but you can increase it to `debug` mode to fetch information needed to diagnose and troubleshoot a problem.
For more information about Grafana log levels, refer to [Configuring logs](https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana#log).

To increase the log level to `debug` mode, use the following steps:

1. Open the `values.yaml` file in your favorite editor and search for the string `grafana.ini`; there you will find a section about the log mode.

1. Add `level: debug` just below the line `mode: console`:

   ```yaml
   # This is the values.yaml file
   .....
   .......
   ....
   grafana.ini:
     paths:
       data: /var/lib/grafana/
   .....
   .......
   ....
     mode: console
     level: debug
   ```

   Make sure to keep the indentation level the same; otherwise, it will not work.

1. To apply the change, run the `helm upgrade` command as follows:

   ```bash
   helm upgrade my-grafana grafana/grafana -f values.yaml -n monitoring
   ```

1. To verify it, access the Grafana UI in the browser using the provided `IP:Port`. The Grafana sign-in page appears.

1. To sign in to Grafana, enter `admin` for the username and paste the password that you decoded earlier. Navigate to **Server Admin > Settings** and then search for `log`. You should see the level set to `debug` mode.

### Reset Grafana admin secrets (login credentials)

By default, the login credentials for the super admin account are generated via `secrets`. However, this can easily be changed. To do so, use the following steps:

1. Edit the `values.yaml` file and search for the string `adminPassword`. There you can define a new password:

   ```yaml
   # Administrator credentials when not using an existing secret (see below)
   adminUser: admin
   adminPassword: admin
   ```

1. Then use the `helm upgrade` command as follows:

   ```bash
   helm upgrade my-grafana grafana/grafana -f values.yaml -n monitoring
   ```

   This makes your super admin login credentials `admin` for both the username and password.

1. To verify it, sign in to Grafana with `admin` for both the username and password. You should be able to log in as the super admin.

## Uninstall the Grafana deployment

To uninstall the Grafana deployment, run the command: `helm uninstall <RELEASE-NAME> -n <NAMESPACE-NAME>`

```bash
helm uninstall my-grafana -n monitoring
```

This deletes all of the release's objects from the `monitoring` namespace.

If you want to delete the namespace `monitoring`, then run the command:

```bash
kubectl delete namespace monitoring
```
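If you kept the `monitoring` namespace, you can optionally confirm that the release and its objects are gone by re-running the commands used earlier in this guide:

```bash
# Neither command should list the my-grafana release or its objects anymore.
helm list -n monitoring
kubectl get all -n monitoring
```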
---
aliases:
  - ../administration/image_rendering/
  - ../image-rendering/
description: Image rendering
keywords:
  - grafana
  - image
  - rendering
  - plugin
labels:
  products:
    - enterprise
    - oss
title: Set up image rendering
weight: 1000
---

# Set up image rendering

Grafana supports automatic rendering of panels as PNG images. This allows Grafana to automatically generate images of your panels to include in alert notifications, [PDF export](), and [Reporting](). PDF Export and Reporting are available only in [Grafana Enterprise]() and [Grafana Cloud](/docs/grafana-cloud/).

> **Note:** Image rendering of dashboards is not supported at this time.

While an image is being rendered, the PNG image is temporarily written to the `png` folder in the Grafana `data` folder. A background job runs every 10 minutes and removes temporary images. You can configure how long an image should be stored before being removed by configuring the [temp_data_lifetime]() setting.

You can also render a PNG by hovering over the panel to display the actions menu in the top-right corner, and then clicking **Share > Share link**. The **Render image** option is displayed in the link settings.

## Alerting and render limits

Alert notifications can include images, but rendering many images at the same time can overload the server where the renderer is running. For instructions on how to configure this, see [max_concurrent_screenshots]().

## Install Grafana Image Renderer plugin

All PhantomJS support has been removed. Instead, use the Grafana Image Renderer plugin or remote rendering service.

To install the plugin, refer to the [Grafana Image Renderer Installation instructions](/grafana/plugins/grafana-image-renderer/?tab=installation#installation).

### Memory requirements

Rendering images requires a lot of memory, mainly because Grafana creates browser instances in the background for the actual rendering. Grafana recommends a minimum of 16GB of free memory on the system rendering images.

Rendering multiple images in parallel requires an even bigger memory footprint. You can use the remote rendering service in order to render images on a remote system, so your local system resources are not affected.

## Configuration

The Grafana Image Renderer plugin has a number of configuration options that are used in plugin or remote rendering modes.

In plugin mode, you can specify them directly in the [Grafana configuration file](). In remote rendering mode, you can specify them in a `.json` [configuration file](#configuration-file) or, for some of them, you can override the configuration defaults using environment variables.

### Configuration file

You can update your settings by using a configuration file, see [default.json](https://github.com/grafana/grafana-image-renderer/tree/master/default.json) for defaults. Note that any configured environment variable takes precedence over configuration file settings.

You can volume mount your custom configuration file when starting the docker container:

```bash
docker run -d --name=renderer --network=host -v /some/path/config.json:/usr/src/app/config.json grafana/grafana-image-renderer:latest
```

You can see a docker-compose example using a custom configuration file [here](https://github.com/grafana/grafana-image-renderer/tree/master/devenv/docker/custom-config).

### Security

This feature is available in Image Renderer v3.6.1 and later.

You can restrict access to the rendering endpoint by specifying a secret token.
The token should be configured in the Grafana configuration file and the renderer configuration file. This token is important when you run the plugin in remote rendering mode.

Renderer versions v3.6.1 or later require a Grafana version with this feature. These include:

- Grafana v9.1.2 or later
- Grafana v9.0.8 or later patch releases
- Grafana v8.5.11 or later patch releases
- Grafana v8.4.11 or later patch releases
- Grafana v8.3.11 or later patch releases

```bash
AUTH_TOKEN=-
```

```json
{
  "service": {
    "security": {
      "authToken": "-"
    }
  }
}
```

See [Grafana configuration]() for how to configure the token in Grafana.

### Rendering mode

You can instruct how headless browser instances are created by configuring a rendering mode. Default is `default`; other supported values are `clustered` and `reusable`.

#### Default

Default mode creates a new browser instance on each request. When handling multiple concurrent requests, this mode increases memory usage as it launches multiple browsers at the same time. If you want to set a maximum number of browsers to open, you'll need to use the [clustered mode](#clustered).

When using the `default` mode, it's recommended to not remove the default Chromium flag `--disable-gpu`. When receiving a lot of concurrent requests, not using this flag can cause the Puppeteer `newPage` function to freeze, causing request timeouts and leaving browsers open.

```bash
RENDERING_MODE=default
```

```json
{
  "rendering": {
    "mode": "default"
  }
}
```

#### Clustered

With the `clustered` mode, you can configure how many browser instances or incognito pages can execute concurrently. Default is `browser`, which ensures a maximum number of browser instances can execute concurrently. Mode `context` ensures a maximum number of incognito pages can execute concurrently. You can also configure the maximum concurrency allowed, which by default is `5`, and the maximum duration of a rendering request, which by default is `30` seconds.

Using a cluster of incognito pages is more performant and consumes less CPU and memory than a cluster of browsers. However, if one page crashes it can bring down the entire browser with it (making all the rendering requests happening at the same time fail). Also, each page isn't guaranteed to be totally clean (cookies and storage might bleed through, as seen [here](https://bugs.chromium.org/p/chromium/issues/detail?id=754576)).

```bash
RENDERING_MODE=clustered
RENDERING_CLUSTERING_MODE=browser
RENDERING_CLUSTERING_MAX_CONCURRENCY=5
RENDERING_CLUSTERING_TIMEOUT=30
```

```json
{
  "rendering": {
    "mode": "clustered",
    "clustering": {
      "mode": "browser",
      "maxConcurrency": 5,
      "timeout": 30
    }
  }
}
```

#### Reusable (experimental)

When using the rendering mode `reusable`, one browser instance is created and reused. A new incognito page is opened for each request. This mode is experimental since, if the browser instance crashes, it will not automatically be restarted. You can achieve a similar behavior using `clustered` mode with a high `maxConcurrency` setting.

```bash
RENDERING_MODE=reusable
```

```json
{
  "rendering": {
    "mode": "reusable"
  }
}
```

#### Optimize the performance, CPU and memory usage of the image renderer

The performance and resource consumption of the different modes depend a lot on the number of concurrent requests your service is handling. To understand how many concurrent requests your service is handling, [monitor your image renderer service]().
With no concurrent requests, the different modes show very similar performance and CPU / memory usage.

When handling concurrent requests, we see the following trends:

- To improve performance and reduce CPU and memory consumption, use [clustered](#clustered) mode with `RENDERING_CLUSTERING_MODE` set as `context`. This parallelizes incognito pages instead of browsers.
- If you use the [clustered](#clustered) mode with a `maxConcurrency` setting below your average number of concurrent requests, performance will drop as the rendering requests will need to wait for others to finish before getting access to an incognito page / browser.

To achieve better performance, monitor the machine on which your service is running. If you don't have enough memory and/or CPU, every rendering step will be slower than usual, increasing the duration of every rendering request.

### Other available settings

Please note that not all settings are available using environment variables. If there is no environment variable example below, it means that you need to update the configuration file.

#### HTTP host

Change the listening host of the HTTP server. Default is unset and will use the local host.

```bash
HTTP_HOST=localhost
```

```json
{
  "service": {
    "host": "localhost"
  }
}
```

#### HTTP port

Change the listening port of the HTTP server. Default is `8081`. Setting `0` will automatically assign a port not in use.

```bash
HTTP_PORT=0
```

```json
{
  "service": {
    "port": 0
  }
}
```

#### HTTP protocol

HTTPS protocol is supported in the image renderer v3.11.0 and later.

Change the protocol of the server; it can be `http` or `https`. Default is `http`.

```json
{
  "service": {
    "protocol": "http"
  }
}
```

#### HTTPS certificate and key file

Path to the image renderer certificate and key file used to start an HTTPS server.

```json
{
  "service": {
    "certFile": "./path/to/cert",
    "certKey": "./path/to/key"
  }
}
```

#### HTTPS min TLS version

Minimum TLS version allowed. Accepted values are: `TLSv1.2`, `TLSv1.3`. Default is `TLSv1.2`.

```json
{
  "service": {
    "minTLSVersion": "TLSv1.2"
  }
}
```

#### Enable Prometheus metrics

You can enable the [Prometheus](https://prometheus.io/) metrics endpoint `/metrics` using the environment variable `ENABLE_METRICS`. Node.js and render request duration metrics are included; see [Enable Prometheus metrics endpoint]() for details.

Default is `false`.

```bash
ENABLE_METRICS=true
```

```json
{
  "service": {
    "metrics": {
      "enabled": true,
      "collectDefaultMetrics": true,
      "requestDurationBuckets": [1, 5, 7, 9, 11, 13, 15, 20, 30]
    }
  }
}
```

#### Enable detailed timing metrics

With [Prometheus metrics enabled](#enable-prometheus-metrics), you can also enable detailed metrics to get the duration of every rendering step.

Default is `false`.

```bash
# Available from v3.9.0+
RENDERING_TIMING_METRICS=true
```

```json
{
  "rendering": {
    "timingMetrics": true
  }
}
```

#### Log level

Change the log level. Default is `info` and will include log messages with level `error`, `warning` and `info`.

```bash
LOG_LEVEL=debug
```

```json
{
  "service": {
    "logging": {
      "level": "debug",
      "console": {
        "json": false,
        "colorize": true
      }
    }
  }
}
```

#### Verbose logging

Instruct the headless browser instance whether to capture and log verbose information when rendering an image. Default is `false` and will only capture and log error messages. When enabled (`true`), debug messages are captured and logged as well.

Note that you need to change the log level to `debug` (see above) for the verbose information to be included in the logs.
```bash
RENDERING_VERBOSE_LOGGING=true
```

```json
{
  "rendering": {
    "verboseLogging": true
  }
}
```

#### Capture browser output

Instruct the headless browser instance whether to output its debug and error messages into the running process of the remote rendering service. Default is `false`. This can be useful to enable (`true`) when troubleshooting.

```bash
RENDERING_DUMPIO=true
```

```json
{
  "rendering": {
    "dumpio": true
  }
}
```

#### Custom Chrome/Chromium

If you already have [Chrome](https://www.google.com/chrome/) or [Chromium](https://www.chromium.org/) installed on your system, then you can use this instead of the pre-packaged version of Chromium.

Please note that this is not recommended, since you may encounter problems if the installed version of Chrome/Chromium is not compatible with the [Grafana Image renderer plugin](/grafana/plugins/grafana-image-renderer).

You need to make sure that the Chrome/Chromium executable is available for the Grafana/image rendering service process.

```bash
CHROME_BIN="/usr/bin/chromium-browser"
```

```json
{
  "rendering": {
    "chromeBin": "/usr/bin/chromium-browser"
  }
}
```

#### Start browser with additional arguments

Additional arguments to pass to the headless browser instance. Defaults are `--no-sandbox,--disable-gpu`. The list of Chromium flags can be found [here](https://peter.sh/experiments/chromium-command-line-switches/) and the list of flags used as defaults by Puppeteer can be found [there](https://cri.dev/posts/2020-04-04-Full-list-of-Chromium-Puppeteer-flags/). Multiple arguments are separated with a comma character.

```bash
RENDERING_ARGS=--no-sandbox,--disable-setuid-sandbox,--disable-dev-shm-usage,--disable-accelerated-2d-canvas,--disable-gpu,--window-size=1280x758
```

```json
{
  "rendering": {
    "args": [
      "--no-sandbox",
      "--disable-setuid-sandbox",
      "--disable-dev-shm-usage",
      "--disable-accelerated-2d-canvas",
      "--disable-gpu",
      "--window-size=1280x758"
    ]
  }
}
```

#### Ignore HTTPS errors

Instruct the headless browser instance whether to ignore HTTPS errors during navigation. By default, HTTPS errors are not ignored. Due to the security risk, it's not recommended to ignore HTTPS errors.

```bash
IGNORE_HTTPS_ERRORS=true
```

```json
{
  "rendering": {
    "ignoresHttpsErrors": true
  }
}
```

#### Default timezone

Instruct the headless browser instance to use a default timezone when not provided by Grafana, e.g. when rendering a panel image of an alert. See [ICU’s metaZones.txt](https://cs.chromium.org/chromium/src/third_party/icu/source/data/misc/metaZones.txt?rcl=faee8bc70570192d82d2978a71e2a615788597d1) for a list of supported timezone IDs. Falls back to the `TZ` environment variable if not set.

```bash
BROWSER_TZ=Europe/Stockholm
```

```json
{
  "rendering": {
    "timezone": "Europe/Stockholm"
  }
}
```

#### Default language

Instruct the headless browser instance to use a default language when not provided by Grafana, e.g. when rendering a panel image of an alert. Refer to the HTTP header `Accept-Language` to understand how to format this value.

```bash
# Available from v3.9.0+
RENDERING_LANGUAGE="fr-CH, fr;q=0.9, en;q=0.8, de;q=0.7, *;q=0.5"
```

```json
{
  "rendering": {
    "acceptLanguage": "fr-CH, fr;q=0.9, en;q=0.8, de;q=0.7, *;q=0.5"
  }
}
```

#### Viewport width

Default viewport width when width is not specified in the rendering request. Default is `1000`.

```bash
# Available from v3.9.0+
RENDERING_VIEWPORT_WIDTH=1000
```

```json
{
  "rendering": {
    "width": 1000
  }
}
```

#### Viewport height

Default viewport height when height is not specified in the rendering request. Default is `500`.
```bash
# Available from v3.9.0+
RENDERING_VIEWPORT_HEIGHT=500
```

```json
{
  "rendering": {
    "height": 500
  }
}
```

#### Viewport maximum width

Limit the maximum viewport width that can be requested. Default is `3000`.

```bash
# Available from v3.9.0+
RENDERING_VIEWPORT_MAX_WIDTH=1000
```

```json
{
  "rendering": {
    "maxWidth": 1000
  }
}
```

#### Viewport maximum height

Limit the maximum viewport height that can be requested. Default is `3000`.

```bash
# Available from v3.9.0+
RENDERING_VIEWPORT_MAX_HEIGHT=500
```

```json
{
  "rendering": {
    "maxHeight": 500
  }
}
```

#### Device scale factor

Specify default device scale factor for rendering images. `2` is enough for monitor resolutions, `4` would be better for printed material. Setting a higher value affects performance and memory. Default is `1`. This can be overridden in the rendering request.

```bash
# Available from v3.9.0+
RENDERING_VIEWPORT_DEVICE_SCALE_FACTOR=2
```

```json
{
  "rendering": {
    "deviceScaleFactor": 2
  }
}
```

#### Maximum device scale factor

Limit the maximum device scale factor that can be requested. Default is `4`.

```bash
# Available from v3.9.0+
RENDERING_VIEWPORT_MAX_DEVICE_SCALE_FACTOR=4
```

```json
{
  "rendering": {
    "maxDeviceScaleFactor": 4
  }
}
```

#### Page zoom level

The following command sets a page zoom level. The default value is `1`. A value of `1.5` equals 150% zoom.

```bash
RENDERING_VIEWPORT_PAGE_ZOOM_LEVEL=1
```

```json
{
  "rendering": {
    "pageZoomLevel": 1
  }
}
```
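In remote rendering mode, the environment variables described above are typically passed to the rendering container at startup. The following is a minimal, illustrative sketch that assumes you run the service with Docker as shown in the Configuration file section; the token value is a placeholder and the other settings are examples, not requirements:

```bash
# Run the remote rendering service with a few of the settings described above.
# Replace <YOUR-SECRET-TOKEN> with the token you configure in Grafana.
docker run -d --name=renderer --network=host \
  -e AUTH_TOKEN=<YOUR-SECRET-TOKEN> \
  -e RENDERING_MODE=clustered \
  -e RENDERING_CLUSTERING_MODE=context \
  -e RENDERING_CLUSTERING_MAX_CONCURRENCY=5 \
  -e ENABLE_METRICS=true \
  grafana/grafana-image-renderer:latest
```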
---
aliases:
  - ../../image-rendering/troubleshooting/
description: Image rendering troubleshooting
keywords:
  - grafana
  - image
  - rendering
  - plugin
  - troubleshooting
labels:
  products:
    - enterprise
    - oss
menuTitle: Troubleshooting
title: Troubleshoot image rendering
weight: 200
---

# Troubleshoot image rendering

In this section, you'll learn how to enable logging for the image renderer and you'll find the most common issues.

## Enable debug logging

To troubleshoot the image renderer, different kinds of logs are available.

You can enable debug log messages for rendering in the Grafana configuration file and inspect the Grafana server logs.

```bash
[log]
filters = rendering:debug
```

You can also enable more logs in the image renderer service itself by enabling [debug logging]().

## Missing libraries

The plugin and rendering service use the [Chromium browser](https://www.chromium.org/), which depends on certain libraries. If you don't have all of those libraries installed in your system, you may encounter errors when trying to render an image, e.g.:

```bash
Rendering failed: Error: Failed to launch chrome!/var/lib/grafana/plugins/grafana-image-renderer/chrome-linux/chrome: error while loading shared libraries: libX11.so.6: cannot open shared object file: No such file or directory\n\n\nTROUBLESHOOTING: https://github.com/GoogleChrome/puppeteer/blob/master/docs/troubleshooting.md
```

In general you can use the [`ldd`](<https://en.wikipedia.org/wiki/Ldd_(Unix)>) utility to figure out what shared libraries are not installed in your system:

```bash
cd <grafana-image-render plugin directory>
ldd chrome-headless-shell/linux-132.0.6781.0/chrome-headless-shell-linux64/chrome-headless-shell
    linux-vdso.so.1 (0x00007fff1bf65000)
    libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f2047945000)
    libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f2047924000)
    librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f204791a000)
    libX11.so.6 => not found
    libX11-xcb.so.1 => not found
    libxcb.so.1 => not found
    libXcomposite.so.1 => not found
    ...
```

**Ubuntu:**

On Ubuntu 18.10 the following dependencies are required for the image rendering to function.

```bash
libx11-6 libx11-xcb1 libxcomposite1 libxcursor1 libxdamage1 libxext6 libxfixes3 libxi6 libxrender1 libxtst6 libglib2.0-0 libnss3 libcups2 libdbus-1-3 libxss1 libxrandr2 libgtk-3-0 libasound2 libxcb-dri3-0 libgbm1 libxshmfence1
```

**Debian:**

On Debian 9 (Stretch) the following dependencies are required for the image rendering to function.

```bash
libx11 libcairo libcairo2 libxtst6 libxcomposite1 libx11-xcb1 libxcursor1 libxdamage1 libnss3 libcups libcups2 libxss libxss1 libxrandr2 libasound2 libatk1.0-0 libatk-bridge2.0-0 libpangocairo-1.0-0 libgtk-3-0 libgbm1 libxshmfence1
```

On Debian 10 (Buster) the following dependencies are required for the image rendering to function.
```bash
libxdamage1 libxext6 libxi6 libxtst6 libnss3 libcups2 libxss1 libxrandr2 libasound2 libatk1.0-0 libatk-bridge2.0-0 libpangocairo-1.0-0 libpango-1.0-0 libcairo2 libatspi2.0-0 libgtk3.0-cil libgdk3.0-cil libx11-xcb-dev libgbm1 libxshmfence1
```

**Centos:**

On a minimal CentOS 7 installation, the following dependencies are required for the image rendering to function:

```bash
libXcomposite libXdamage libXtst cups libXScrnSaver pango atk adwaita-cursor-theme adwaita-icon-theme at at-spi2-atk at-spi2-core cairo-gobject colord-libs dconf desktop-file-utils ed emacs-filesystem gdk-pixbuf2 glib-networking gnutls gsettings-desktop-schemas gtk-update-icon-cache gtk3 hicolor-icon-theme jasper-libs json-glib libappindicator-gtk3 libdbusmenu libdbusmenu-gtk3 libepoxy liberation-fonts liberation-narrow-fonts liberation-sans-fonts liberation-serif-fonts libgusb libindicator-gtk3 libmodman libproxy libsoup libwayland-cursor libwayland-egl libxkbcommon m4 mailx nettle patch psmisc redhat-lsb-core redhat-lsb-submod-security rest spax time trousers xdg-utils xkeyboard-config alsa-lib
```

On a minimal CentOS 8 installation, the following dependencies are required for the image rendering to function:

```bash
libXcomposite libXdamage libXtst cups libXScrnSaver pango atk adwaita-cursor-theme adwaita-icon-theme at at-spi2-atk at-spi2-core cairo-gobject colord-libs dconf desktop-file-utils ed emacs-filesystem gdk-pixbuf2 glib-networking gnutls gsettings-desktop-schemas gtk-update-icon-cache gtk3 hicolor-icon-theme jasper-libs json-glib libappindicator-gtk3 libdbusmenu libdbusmenu-gtk3 libepoxy liberation-fonts liberation-narrow-fonts liberation-sans-fonts liberation-serif-fonts libgusb libindicator-gtk3 libmodman libproxy libsoup libwayland-cursor libwayland-egl libxkbcommon m4 mailx nettle patch psmisc redhat-lsb-core redhat-lsb-submod-security rest spax time trousers xdg-utils xkeyboard-config alsa-lib libX11-xcb
```

**RHEL:**

On a minimal RHEL 8 installation, the following dependencies are required for the image rendering to function:

```bash
linux-vdso.so.1 libdl.so.2 libpthread.so.0 libgobject-2.0.so.0 libglib-2.0.so.0 libnss3.so libnssutil3.so libsmime3.so libnspr4.so libatk-1.0.so.0 libatk-bridge-2.0.so.0 libcups.so.2 libgio-2.0.so.0 libdrm.so.2 libdbus-1.so.3 libexpat.so.1 libxcb.so.1 libxkbcommon.so.0 libm.so.6 libX11.so.6 libXcomposite.so.1 libXdamage.so.1 libXext.so.6 libXfixes.so.3 libXrandr.so.2 libgbm.so.1 libpango-1.0.so.0 libcairo.so.2 libasound.so.2 libatspi.so.0 libgcc_s.so.1 libc.so.6 /lib64/ld-linux-x86-64.so.2 libgnutls.so.30 libpcre.so.1 libffi.so.6 libplc4.so libplds4.so librt.so.1 libgmodule-2.0.so.0 libgssapi_krb5.so.2 libkrb5.so.3 libk5crypto.so.3 libcom_err.so.2 libavahi-common.so.3 libavahi-client.so.3 libcrypt.so.1 libz.so.1 libselinux.so.1 libresolv.so.2 libmount.so.1 libsystemd.so.0 libXau.so.6 libXrender.so.1 libthai.so.0 libfribidi.so.0 libpixman-1.so.0 libfontconfig.so.1 libpng16.so.16 libxcb-render.so.0 libidn2.so.0 libunistring.so.2 libtasn1.so.6 libnettle.so.6 libhogweed.so.4 libgmp.so.10 libkrb5support.so.0 libkeyutils.so.1 libpcre2-8.so.0 libuuid.so.1 liblz4.so.1 libgcrypt.so.20 libbz2.so.1
```

## Certificate signed by internal certificate authorities

In many cases, Grafana runs on internal servers and uses certificates that have not been signed by a CA ([Certificate Authority](https://en.wikipedia.org/wiki/Certificate_authority)) known to Chrome, and therefore cannot be validated.
Chrome internally uses NSS ([Network Security Services](https://en.wikipedia.org/wiki/Network_Security_Services)) for cryptographic operations such as the validation of certificates.

If you are using the Grafana Image Renderer with a Grafana server that uses a certificate signed by such a custom CA (for example, a company-internal CA), rendering images will fail and you will see messages like this in the Grafana log:

```
t=2019-12-04T12:39:22+0000 lvl=error msg="Render request failed" logger=rendering error=map[] url="https://192.168.106.101:3443/d-solo/zxDJxNaZk/graphite-metrics?orgId=1&refresh=1m&from=1575438321300&to=1575459921300&var-Host=master1&panelId=4&width=1000&height=500&tz=Europe%2FBerlin&render=1" timestamp=0001-01-01T00:00:00.000Z
t=2019-12-04T12:39:22+0000 lvl=error msg="Rendering failed." logger=context userId=1 orgId=1 uname=admin error="Rendering failed: Error: net::ERR_CERT_AUTHORITY_INVALID at https://192.168.106.101:3443/d-solo/zxDJxNaZk/graphite-metrics?orgId=1&refresh=1m&from=1575438321300&to=1575459921300&var-Host=master1&panelId=4&width=1000&height=500&tz=Europe%2FBerlin&render=1"
t=2019-12-04T12:39:22+0000 lvl=error msg="Request Completed" logger=context userId=1 orgId=1 uname=admin method=GET path=/render/d-solo/zxDJxNaZk/graphite-metrics status=500 remote_addr=192.168.106.101 time_ms=310 size=1722 referer="https://grafana.xxx-xxx/d/zxDJxNaZk/graphite-metrics?orgId=1&refresh=1m"
```

If this happens, you have to add the certificate to the trust store. If the certificate file for the internal root CA is in `internal-root-ca.crt.pem`, use the following commands to create a user-specific NSS trust store for the Grafana user (`grafana` in this example):

**Linux:**

```
[root@server ~]# [ -d /usr/share/grafana/.pki/nssdb ] || mkdir -p /usr/share/grafana/.pki/nssdb
[root@server ~]# certutil -d sql:/usr/share/grafana/.pki/nssdb -A -n internal-root-ca -t C -i /etc/pki/tls/certs/internal-root-ca.crt.pem
[root@server ~]# chown -R grafana: /usr/share/grafana/.pki/nssdb
```

**Windows:**

```
certutil -addstore "Root" <path>/internal-root-ca.crt.pem
```

**Container:**

```Dockerfile
FROM grafana/grafana-image-renderer:latest

USER root
RUN apk add --no-cache nss-tools

USER grafana
COPY internal-root-ca.crt.pem /etc/pki/tls/certs/internal-root-ca.crt.pem
RUN mkdir -p /home/grafana/.pki/nssdb
RUN certutil -d sql:/home/grafana/.pki/nssdb -A -n internal-root-ca -t C -i /etc/pki/tls/certs/internal-root-ca.crt.pem
```

## Custom Chrome/Chromium

As a last resort, if you already have [Chrome](https://www.google.com/chrome/) or [Chromium](https://www.chromium.org/) installed on your system, you can configure the Grafana Image renderer plugin to use it instead of the pre-packaged version of Chromium.

Please note that this is not recommended, since you may encounter problems if the installed version of Chrome/Chromium is not compatible with the [Grafana Image renderer plugin](/grafana/plugins/grafana-image-renderer).

To override the path to the Chrome/Chromium executable in plugin mode, set an environment variable and make sure that it's available for the Grafana process.
For example:

```bash
export GF_PLUGIN_RENDERING_CHROME_BIN="/usr/bin/chromium-browser"
```

In remote rendering mode, you need to set the environment variable or update the configuration file and make sure that it's available for the image rendering service process:

```bash
CHROME_BIN="/usr/bin/chromium-browser"
```

```json
{
  "rendering": {
    "chromeBin": "/usr/bin/chromium-browser"
  }
}
```
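If the rendering service runs as a container, the same override can be supplied as a container environment variable. The following is only an illustrative sketch: it assumes the service's default port `8081` and that a Chromium binary actually exists at the given path inside the image (for example, in a custom image built on top of `grafana/grafana-image-renderer`); adjust both to your setup.

```bash
# Sketch only: pass a custom Chromium path to the remote rendering service.
# Port and binary path are assumptions -- verify them for your deployment.
docker run -d --name grafana-image-renderer -p 8081:8081 \
  -e CHROME_BIN="/usr/bin/chromium-browser" \
  grafana/grafana-image-renderer:latest
```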
--- aliases: - ../../image-rendering/monitoring/ description: Image rendering monitoring keywords: - grafana - image - rendering - plugin - monitoring labels: products: - enterprise - oss title: Monitor the image renderer weight: 100 --- # Monitor the image renderer Rendering images requires a lot of memory, mainly because Grafana creates browser instances in the background for the actual rendering. Monitoring your service can help you allocate the right amount of resources to your rendering service and set the right [rendering mode](). ## Enable Prometheus metrics endpoint Configure this service to expose a Prometheus metrics endpoint. For information on how to configure and monitor this service using Prometheus as a data source, refer to [Grafana Image Rendering Service dashboard](/grafana/dashboards/12203). **Metrics endpoint output example:** ``` # HELP process_cpu_user_seconds_total Total user CPU time spent in seconds. # TYPE process_cpu_user_seconds_total counter process_cpu_user_seconds_total 0.536 1579444523566 # HELP process_cpu_system_seconds_total Total system CPU time spent in seconds. # TYPE process_cpu_system_seconds_total counter process_cpu_system_seconds_total 0.064 1579444523566 # HELP process_cpu_seconds_total Total user and system CPU time spent in seconds. # TYPE process_cpu_seconds_total counter process_cpu_seconds_total 0.6000000000000001 1579444523566 # HELP process_start_time_seconds Start time of the process since unix epoch in seconds. # TYPE process_start_time_seconds gauge process_start_time_seconds 1579444433 # HELP process_resident_memory_bytes Resident memory size in bytes. # TYPE process_resident_memory_bytes gauge process_resident_memory_bytes 52686848 1579444523568 # HELP process_virtual_memory_bytes Virtual memory size in bytes. # TYPE process_virtual_memory_bytes gauge process_virtual_memory_bytes 2055344128 1579444523568 # HELP process_heap_bytes Process heap size in bytes. # TYPE process_heap_bytes gauge process_heap_bytes 1996390400 1579444523568 # HELP process_open_fds Number of open file descriptors. # TYPE process_open_fds gauge process_open_fds 31 1579444523567 # HELP process_max_fds Maximum number of open file descriptors. # TYPE process_max_fds gauge process_max_fds 1573877 # HELP nodejs_eventloop_lag_seconds Lag of event loop in seconds. # TYPE nodejs_eventloop_lag_seconds gauge nodejs_eventloop_lag_seconds 0.000915922 1579444523567 # HELP nodejs_active_handles Number of active libuv handles grouped by handle type. Every handle type is C++ class name. # TYPE nodejs_active_handles gauge nodejs_active_handles{type="WriteStream"} 2 1579444523566 nodejs_active_handles{type="Server"} 1 1579444523566 nodejs_active_handles{type="Socket"} 9 1579444523566 nodejs_active_handles{type="ChildProcess"} 2 1579444523566 # HELP nodejs_active_handles_total Total number of active handles. # TYPE nodejs_active_handles_total gauge nodejs_active_handles_total 14 1579444523567 # HELP nodejs_active_requests Number of active libuv requests grouped by request type. Every request type is C++ class name. # TYPE nodejs_active_requests gauge nodejs_active_requests{type="FSReqCallback"} 2 # HELP nodejs_active_requests_total Total number of active requests. # TYPE nodejs_active_requests_total gauge nodejs_active_requests_total 2 1579444523567 # HELP nodejs_heap_size_total_bytes Process heap size from node.js in bytes. 
# TYPE nodejs_heap_size_total_bytes gauge nodejs_heap_size_total_bytes 13725696 1579444523567 # HELP nodejs_heap_size_used_bytes Process heap size used from node.js in bytes. # TYPE nodejs_heap_size_used_bytes gauge nodejs_heap_size_used_bytes 12068008 1579444523567 # HELP nodejs_external_memory_bytes Nodejs external memory size in bytes. # TYPE nodejs_external_memory_bytes gauge nodejs_external_memory_bytes 1728962 1579444523567 # HELP nodejs_heap_space_size_total_bytes Process heap space size total from node.js in bytes. # TYPE nodejs_heap_space_size_total_bytes gauge nodejs_heap_space_size_total_bytes{space="read_only"} 262144 1579444523567 nodejs_heap_space_size_total_bytes{space="new"} 1048576 1579444523567 nodejs_heap_space_size_total_bytes{space="old"} 9809920 1579444523567 nodejs_heap_space_size_total_bytes{space="code"} 425984 1579444523567 nodejs_heap_space_size_total_bytes{space="map"} 1052672 1579444523567 nodejs_heap_space_size_total_bytes{space="large_object"} 1077248 1579444523567 nodejs_heap_space_size_total_bytes{space="code_large_object"} 49152 1579444523567 nodejs_heap_space_size_total_bytes{space="new_large_object"} 0 1579444523567 # HELP nodejs_heap_space_size_used_bytes Process heap space size used from node.js in bytes. # TYPE nodejs_heap_space_size_used_bytes gauge nodejs_heap_space_size_used_bytes{space="read_only"} 32296 1579444523567 nodejs_heap_space_size_used_bytes{space="new"} 601696 1579444523567 nodejs_heap_space_size_used_bytes{space="old"} 9376600 1579444523567 nodejs_heap_space_size_used_bytes{space="code"} 286688 1579444523567 nodejs_heap_space_size_used_bytes{space="map"} 704320 1579444523567 nodejs_heap_space_size_used_bytes{space="large_object"} 1064872 1579444523567 nodejs_heap_space_size_used_bytes{space="code_large_object"} 3552 1579444523567 nodejs_heap_space_size_used_bytes{space="new_large_object"} 0 1579444523567 # HELP nodejs_heap_space_size_available_bytes Process heap space size available from node.js in bytes. # TYPE nodejs_heap_space_size_available_bytes gauge nodejs_heap_space_size_available_bytes{space="read_only"} 229576 1579444523567 nodejs_heap_space_size_available_bytes{space="new"} 445792 1579444523567 nodejs_heap_space_size_available_bytes{space="old"} 417712 1579444523567 nodejs_heap_space_size_available_bytes{space="code"} 20576 1579444523567 nodejs_heap_space_size_available_bytes{space="map"} 343632 1579444523567 nodejs_heap_space_size_available_bytes{space="large_object"} 0 1579444523567 nodejs_heap_space_size_available_bytes{space="code_large_object"} 0 1579444523567 nodejs_heap_space_size_available_bytes{space="new_large_object"} 1047488 1579444523567 # HELP nodejs_version_info Node.js version info. 
# TYPE nodejs_version_info gauge nodejs_version_info{version="v14.16.1",major="14",minor="16",patch="1"} 1 # HELP grafana_image_renderer_service_http_request_duration_seconds duration histogram of http responses labeled with: status_code # TYPE grafana_image_renderer_service_http_request_duration_seconds histogram grafana_image_renderer_service_http_request_duration_seconds_bucket{le="1",status_code="200"} 0 grafana_image_renderer_service_http_request_duration_seconds_bucket{le="5",status_code="200"} 4 grafana_image_renderer_service_http_request_duration_seconds_bucket{le="7",status_code="200"} 4 grafana_image_renderer_service_http_request_duration_seconds_bucket{le="9",status_code="200"} 4 grafana_image_renderer_service_http_request_duration_seconds_bucket{le="11",status_code="200"} 4 grafana_image_renderer_service_http_request_duration_seconds_bucket{le="13",status_code="200"} 4 grafana_image_renderer_service_http_request_duration_seconds_bucket{le="15",status_code="200"} 4 grafana_image_renderer_service_http_request_duration_seconds_bucket{le="20",status_code="200"} 4 grafana_image_renderer_service_http_request_duration_seconds_bucket{le="30",status_code="200"} 4 grafana_image_renderer_service_http_request_duration_seconds_bucket{le="+Inf",status_code="200"} 4 grafana_image_renderer_service_http_request_duration_seconds_sum{status_code="200"} 10.492873834 grafana_image_renderer_service_http_request_duration_seconds_count{status_code="200"} 4 # HELP up 1 = up, 0 = not up # TYPE up gauge up 1 # HELP grafana_image_renderer_http_request_in_flight A gauge of requests currently being served by the image renderer. # TYPE grafana_image_renderer_http_request_in_flight gauge grafana_image_renderer_http_request_in_flight 1 # HELP grafana_image_renderer_step_duration_seconds duration histogram of browser steps for rendering an image labeled with: step # TYPE grafana_image_renderer_step_duration_seconds histogram grafana_image_renderer_step_duration_seconds_bucket{le="0.3",step="launch"} 0 grafana_image_renderer_step_duration_seconds_bucket{le="0.5",step="launch"} 0 grafana_image_renderer_step_duration_seconds_bucket{le="1",step="launch"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="2",step="launch"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="3",step="launch"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="5",step="launch"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="+Inf",step="launch"} 1 grafana_image_renderer_step_duration_seconds_sum{step="launch"} 0.7914972 grafana_image_renderer_step_duration_seconds_count{step="launch"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="0.3",step="newPage"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="0.5",step="newPage"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="1",step="newPage"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="2",step="newPage"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="3",step="newPage"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="5",step="newPage"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="+Inf",step="newPage"} 1 grafana_image_renderer_step_duration_seconds_sum{step="newPage"} 0.2217868 grafana_image_renderer_step_duration_seconds_count{step="newPage"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="0.3",step="prepare"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="0.5",step="prepare"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="1",step="prepare"} 1 
grafana_image_renderer_step_duration_seconds_bucket{le="2",step="prepare"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="3",step="prepare"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="5",step="prepare"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="+Inf",step="prepare"} 1 grafana_image_renderer_step_duration_seconds_sum{step="prepare"} 0.0819274 grafana_image_renderer_step_duration_seconds_count{step="prepare"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="0.3",step="navigate"} 0 grafana_image_renderer_step_duration_seconds_bucket{le="0.5",step="navigate"} 0 grafana_image_renderer_step_duration_seconds_bucket{le="1",step="navigate"} 0 grafana_image_renderer_step_duration_seconds_bucket{le="2",step="navigate"} 0 grafana_image_renderer_step_duration_seconds_bucket{le="3",step="navigate"} 0 grafana_image_renderer_step_duration_seconds_bucket{le="5",step="navigate"} 0 grafana_image_renderer_step_duration_seconds_bucket{le="+Inf",step="navigate"} 1 grafana_image_renderer_step_duration_seconds_sum{step="navigate"} 15.3311258 grafana_image_renderer_step_duration_seconds_count{step="navigate"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="0.3",step="panelsRendered"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="0.5",step="panelsRendered"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="1",step="panelsRendered"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="2",step="panelsRendered"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="3",step="panelsRendered"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="5",step="panelsRendered"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="+Inf",step="panelsRendered"} 1 grafana_image_renderer_step_duration_seconds_sum{step="panelsRendered"} 0.0205577 grafana_image_renderer_step_duration_seconds_count{step="panelsRendered"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="0.3",step="screenshot"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="0.5",step="screenshot"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="1",step="screenshot"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="2",step="screenshot"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="3",step="screenshot"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="5",step="screenshot"} 1 grafana_image_renderer_step_duration_seconds_bucket{le="+Inf",step="screenshot"} 1 grafana_image_renderer_step_duration_seconds_sum{step="screenshot"} 0.2866623 grafana_image_renderer_step_duration_seconds_count{step="screenshot"} 1 # HELP grafana_image_renderer_browser_info A metric with a constant '1 value labeled by version of the browser in use # TYPE grafana_image_renderer_browser_info gauge grafana_image_renderer_browser_info{version="HeadlessChrome/79.0.3945.0"} 1 ```
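Once the metrics endpoint is enabled, a quick manual scrape confirms that it's reachable before you point Prometheus at it. This is only a sketch: it assumes the rendering service listens on its default port `8081` and exposes the metrics under `/metrics`; adjust both to match your configuration.

```bash
# Fetch the metrics endpoint and show only the renderer-specific series.
# Host, port, and path are assumptions -- adjust them to your deployment.
curl -s http://localhost:8081/metrics | grep '^grafana_image_renderer'
```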
---
title: "Quickstart Guide"
description: "How to install and get started with Helm including instructions for distros, FAQs, and plugins."
weight: 1
aliases: ["/docs/quickstart/"]
---

This guide covers how you can quickly get started using Helm.

## Prerequisites

The following prerequisites are required for a successful and properly secured use of Helm.

1. A Kubernetes cluster
2. Deciding what security configurations to apply to your installation, if any
3. Installing and configuring Helm

### Install Kubernetes or have access to a cluster

- You must have Kubernetes installed. For the latest release of Helm, we recommend the latest stable release of Kubernetes, which in most cases is the second-latest minor release.
- You should also have a locally configured copy of `kubectl`.

See the [Helm Version Support Policy](https://helm.sh/docs/topics/version_skew/) for the maximum version skew supported between Helm and Kubernetes.

## Install Helm

Download a binary release of the Helm client. You can use tools like `homebrew`, or look at [the official releases page](https://github.com/helm/helm/releases).

For more details, or for other options, see [the installation guide]().

## Initialize a Helm Chart Repository

Once you have Helm ready, you can add a chart repository. Check [Artifact Hub](https://artifacthub.io/packages/search?kind=0) for available Helm chart repositories.

```console
$ helm repo add bitnami https://charts.bitnami.com/bitnami
```

Once the repository is added, you will be able to list the charts you can install:

```console
$ helm search repo bitnami
NAME                    CHART VERSION   APP VERSION     DESCRIPTION
bitnami/bitnami-common  0.0.9           0.0.9           DEPRECATED Chart with custom templates used in ...
bitnami/airflow         8.0.2           2.0.0           Apache Airflow is a platform to programmaticall...
bitnami/apache          8.2.3           2.4.46          Chart for Apache HTTP Server
bitnami/aspnet-core     1.2.3           3.1.9           ASP.NET Core is an open-source framework create...
# ... and many more
```

## Install an Example Chart

To install a chart, you can run the `helm install` command. Helm has several ways to find and install a chart, but the easiest is to use the `bitnami` charts.

```console
$ helm repo update              # Make sure we get the latest list of charts
$ helm install bitnami/mysql --generate-name
NAME: mysql-1612624192
LAST DEPLOYED: Sat Feb 6 16:09:56 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES: ...
```

In the example above, the `bitnami/mysql` chart was released, and the name of our new release is `mysql-1612624192`.

You can get a simple idea of the features of this MySQL chart by running `helm show chart bitnami/mysql`. Or you could run `helm show all bitnami/mysql` to get all information about the chart.

Whenever you install a chart, a new release is created. So one chart can be installed multiple times into the same cluster. And each can be independently managed and upgraded.

The `helm install` command is a very powerful command with many capabilities. To learn more about it, check out the [Using Helm Guide]().

## Learn About Releases

It's easy to see what has been released using Helm:

```console
$ helm list
NAME                NAMESPACE   REVISION    UPDATED                                 STATUS      CHART       APP VERSION
mysql-1612624192    default     1           2021-02-06 16:09:56.283059 +0100 CET    deployed    mysql-8.3.0 8.0.23
```

The `helm list` (or `helm ls`) command will show you a list of all deployed releases.
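Because each installation is its own release, you can install the same chart again under a different, explicit name and manage both releases independently. The names below are only illustrative:

```console
$ helm install mysql-staging bitnami/mysql   # a second release of the same chart, with an explicit name
$ helm list                                  # both MySQL releases appear and can be upgraded or removed separately
```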
## Uninstall a Release

To uninstall a release, use the `helm uninstall` command:

```console
$ helm uninstall mysql-1612624192
release "mysql-1612624192" uninstalled
```

This will uninstall `mysql-1612624192` from Kubernetes, which will remove all resources associated with the release as well as the release history.

If the flag `--keep-history` is provided, release history will be kept. You will be able to request information about that release:

```console
$ helm status mysql-1612624192
Status: UNINSTALLED
...
```

Because Helm tracks your releases even after you've uninstalled them, you can audit a cluster's history, and even undelete a release (with `helm rollback`).

## Reading the Help Text

To learn more about the available Helm commands, use `helm help` or type a command followed by the `-h` flag:

```console
$ helm get -h
```
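As a follow-up to the uninstall example above, the `--keep-history` flag is what makes the "undelete" workflow possible. A short sketch, reusing the earlier release name:

```console
$ helm uninstall mysql-1612624192 --keep-history   # remove the resources but keep the release history
$ helm history mysql-1612624192                    # the recorded revisions are still available
$ helm rollback mysql-1612624192 1                 # restore revision 1 of the uninstalled release
```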
---
title: "Cheat Sheet"
description: "Helm cheatsheet"
weight: 4
---

Helm cheatsheet featuring all the necessary commands required to manage an application through Helm.

-----------------------------------------------------------------------------------------------------------------------------------------------

### Basic interpretations/context

Chart:

- The name of your chart, in case it has been pulled and untarred.
- `<repo_name>/<chart_name>`, in case the repository has been added but the chart not pulled.
- The URL or absolute path to the chart.

Name:

- The name you want to give to your current Helm chart installation.

Release:

- The name you assigned to an installation instance.

Revision:

- The value from the `helm history` command.

Repo-name:

- The name of a repository.

DIR:

- Directory name/path.

------------------------------------------------------------------------------------------------------------------------------------------------

### Chart Management

```bash
helm create <name>                     # Creates a chart directory along with the common files and directories used in a chart.
helm package <chart-path>              # Packages a chart into a versioned chart archive file.
helm lint <chart>                      # Run tests to examine a chart and identify possible issues.
helm show all <chart>                  # Inspect a chart and list its contents.
helm show values <chart>               # Displays the contents of the values.yaml file.
helm pull <chart>                      # Download/pull chart
helm pull <chart> --untar=true         # If set to true, will untar the chart after downloading it
helm pull <chart> --verify             # Verify the package before using it
helm pull <chart> --version <number>   # Default-latest is used, specify a version constraint for the chart version to use
helm dependency list <chart>           # Display a list of a chart's dependencies.
```

--------------------------------------------------------------------------------------------------------------------------------------------------

### Install and Uninstall Apps

```bash
helm install <name> <chart>                             # Install the chart with a name
helm install <name> <chart> --namespace <namespace>     # Install the chart in a specific namespace
helm install <name> <chart> --set key1=val1,key2=val2   # Set values on the command line (can specify multiple or separate values with commas)
helm install <name> <chart> --values <yaml-file/url>    # Install the chart with your specified values
helm install <name> <chart> --dry-run --debug           # Run a test installation to validate the chart
helm install <name> <chart> --verify                    # Verify the package before using it
helm install <name> <chart> --dependency-update         # Update dependencies if they are missing before installing the chart
helm uninstall <name>                                   # Uninstall a release
```

------------------------------------------------------------------------------------------------------------------------------------------------

### Perform App Upgrade and Rollback

```bash
helm upgrade <release> <chart>                              # Upgrade a release
helm upgrade <release> <chart> --atomic                     # If set, upgrade process rolls back changes made in case of failed upgrade.
helm upgrade <release> <chart> --dependency-update          # Update dependencies if they are missing before installing the chart
helm upgrade <release> <chart> --version <version_number>   # Specify a version constraint for the chart version to use
helm upgrade <release> <chart> --values                     # Specify values in a YAML file or a URL (can specify multiple)
helm upgrade <release> <chart> --set key1=val1,key2=val2    # Set values on the command line (can specify multiple or separate values with commas)
helm upgrade <release> <chart> --force                      # Force resource updates through a replacement strategy
helm rollback <release> <revision>                          # Roll back a release to a specific revision
helm rollback <release> <revision> --cleanup-on-fail        # Allow deletion of new resources created in this rollback when rollback fails
```

------------------------------------------------------------------------------------------------------------------------------------------------

### List, Add, Remove, and Update Repositories

```bash
helm repo add <repo-name> <url>   # Add a repository from the internet
helm repo list                    # List added chart repositories
helm repo update                  # Update information of available charts locally from chart repositories
helm repo remove <repo_name>      # Remove one or more chart repositories
helm repo index <DIR>             # Read the current directory and generate an index file based on the charts found
helm repo index <DIR> --merge     # Merge the generated index with an existing index file
helm search repo <keyword>        # Search repositories for a keyword in charts
helm search hub <keyword>         # Search for charts in the Artifact Hub or your own hub instance
```

-------------------------------------------------------------------------------------------------------------------------------------------------

### Helm Release Monitoring

```bash
helm list                                   # Lists all of the releases for a specified namespace, uses current namespace context if namespace not specified
helm list --all                             # Show all releases without any filter applied, can use -a
helm list --all-namespaces                  # List releases across all namespaces, can use -A
helm list -l key1=value1,key2=value2        # Selector (label query) to filter on, supports '=', '==', and '!='
helm list --date                            # Sort by release date
helm list --deployed                        # Show deployed releases. If no other is specified, this will be automatically enabled
helm list --pending                         # Show pending releases
helm list --failed                          # Show failed releases
helm list --uninstalled                     # Show uninstalled releases (if 'helm uninstall --keep-history' was used)
helm list --superseded                      # Show superseded releases
helm list -o yaml                           # Prints the output in the specified format. Allowed values: table, json, yaml (default table)
helm status <release>                       # Shows the status of a named release.
helm status <release> --revision <number>   # If set, display the status of the named release with revision
helm history <release>                      # Historical revisions for a given release.
helm env                                    # Prints out all the environment information in use by Helm.
```

-------------------------------------------------------------------------------------------------------------------------------------------------

### Download Release Information

```bash
helm get all <release>      # A human-readable collection of information about the notes, hooks, supplied values, and generated manifest file of the given release.
helm get hooks <release>    # Downloads hooks for a given release. Hooks are formatted in YAML and separated by the YAML '---\n' separator.
helm get manifest <release>   # A manifest is a YAML-encoded representation of the Kubernetes resources that were generated from this release's chart(s). If a chart is dependent on other charts, those resources will also be included in the manifest.
helm get notes <release>      # Shows notes provided by the chart of a named release.
helm get values <release>     # Downloads a values file for a given release. Use -o to format the output.
```

-------------------------------------------------------------------------------------------------------------------------------------------------

### Plugin Management

```bash
helm plugin install <path/url1>   # Install plugins
helm plugin list                  # View a list of all installed plugins
helm plugin update <plugin>       # Update plugins
helm plugin uninstall <plugin>    # Uninstall a plugin
```

-------------------------------------------------------------------------------------------------------------------------------------------------
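As a worked example of how these commands fit together, the following sketch walks a single hypothetical release (`my-web`, built from the `bitnami/nginx` chart) through its lifecycle. The release name, namespace, and the `replicaCount` override are illustrative assumptions, not part of the reference above.

```bash
# Illustrative release lifecycle; names and values are assumptions.
helm repo add bitnami https://charts.bitnami.com/bitnami                 # add a repository
helm search repo nginx                                                   # find a chart
helm install my-web bitnami/nginx --namespace web --create-namespace     # install it as a named release
helm upgrade my-web bitnami/nginx --namespace web --set replicaCount=2   # upgrade with a value override
helm history my-web --namespace web                                      # inspect the revisions created so far
helm rollback my-web 1 --namespace web                                   # roll back to the first revision
helm uninstall my-web --namespace web                                    # remove the release
```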
helm
---
title: "Cheat Sheet"
description: "Helm cheatsheet"
weight: 4
---

A Helm cheatsheet featuring all the necessary commands required to manage an application through Helm.

## Basic interpretations/context

- **Chart:** It is the name of your chart in case it has been pulled and untarred. It is `<repo_name>/<chart_name>` in case the repository has been added but the chart not pulled. It is the URL/absolute path to the chart.
- **Name:** It is the name you want to give to your current helm chart installation.
- **Release:** Is the name you assigned to an installation instance.
- **Revision:** Is the value from the Helm history command.
- **Repo-name:** The name of a repository.
- **DIR:** Directory name/path.

## Chart Management

```bash
helm create <name>                      # Creates a chart directory along with the common files and directories used in a chart
helm package <chart-path>               # Packages a chart into a versioned chart archive file
helm lint <chart>                       # Run tests to examine a chart and identify possible issues
helm show all <chart>                   # Inspect a chart and list its contents
helm show values <chart>                # Displays the contents of the values.yaml file
helm pull <chart>                       # Download/pull chart
helm pull <chart> --untar=true          # If set to true, will untar the chart after downloading it
helm pull <chart> --verify              # Verify the package before using it
helm pull <chart> --version <number>    # Default-latest is used, specify a version constraint for the chart version to use
helm dependency list <chart>            # Display a list of a chart's dependencies
```

## Install and Uninstall Apps

```bash
helm install <name> <chart>                           # Install the chart with a name
helm install <name> <chart> --namespace <namespace>   # Install the chart in a specific namespace
helm install <name> <chart> --set key1=val1,key2=val2 # Set values on the command line (can specify multiple or separate values with commas)
helm install <name> <chart> --values <yaml-file/url>  # Install the chart with your specified values
helm install <name> <chart> --dry-run --debug         # Run a test installation to validate the chart
helm install <name> <chart> --verify                  # Verify the package before using it
helm install <name> <chart> --dependency-update       # Update dependencies if they are missing before installing the chart
helm uninstall <name>                                 # Uninstall a release
```

## Perform App Upgrade and Rollback

```bash
helm upgrade <release> <chart>                            # Upgrade a release
helm upgrade <release> <chart> --atomic                   # If set, upgrade process rolls back changes made in case of failed upgrade
helm upgrade <release> <chart> --dependency-update        # Update dependencies if they are missing before installing the chart
helm upgrade <release> <chart> --version <version_number> # Specify a version constraint for the chart version to use
helm upgrade <release> <chart> --values                   # Specify values in a YAML file or a URL (can specify multiple)
helm upgrade <release> <chart> --set key1=val1,key2=val2  # Set values on the command line (can specify multiple or separate values)
helm upgrade <release> <chart> --force                    # Force resource updates through a replacement strategy
helm rollback <release> <revision>                        # Roll back a release to a specific revision
helm rollback <release> <revision> --cleanup-on-fail      # Allow deletion of new resources created in this rollback when rollback fails
```

## List, Add, Remove, and Update Repositories

```bash
helm repo add <repo-name> <url>   # Add a repository from the internet
helm repo list                    # List added chart repositories
helm repo update                  # Update information of available charts locally from chart repositories
helm repo remove <repo-name>      # Remove one or more chart repositories
helm repo index <DIR>             # Read the current directory and generate an index file based on the charts found
helm repo index <DIR> --merge     # Merge the generated index with an existing index file
helm search repo <keyword>        # Search repositories for a keyword in charts
helm search hub <keyword>         # Search for charts in the Artifact Hub or your own hub instance
```

## Helm Release Monitoring

```bash
helm list                                   # Lists all of the releases for a specified namespace, uses current namespace context if namespace not specified
helm list --all                             # Show all releases without any filter applied, can use -a
helm list --all-namespaces                  # List releases across all namespaces, we can use -A
helm list -l key1=value1,key2=value2        # Selector (label query) to filter on, supports '=', '==', and '!='
helm list --date                            # Sort by release date
helm list --deployed                        # Show deployed releases. If no other is specified, this will be automatically enabled
helm list --pending                         # Show pending releases
helm list --failed                          # Show failed releases
helm list --uninstalled                     # Show uninstalled releases (if 'helm uninstall --keep-history' was used)
helm list --superseded                      # Show superseded releases
helm list -o yaml                           # Prints the output in the specified format. Allowed values: table, json, yaml (default table)
helm status <release>                       # This command shows the status of a named release
helm status <release> --revision <number>   # If set, display the status of the named release with revision
helm history <release>                      # Historical revisions for a given release
helm env                                    # Env prints out all the environment information in use by Helm
```

## Download Release Information

```bash
helm get all <release>        # A human readable collection of information about the notes, hooks, supplied values, and generated manifest file of the given release
helm get hooks <release>      # This command downloads hooks for a given release. Hooks are formatted in YAML and separated by the YAML '---\n' separator
helm get manifest <release>   # A manifest is a YAML-encoded representation of the Kubernetes resources that were generated from this release's chart(s). If a chart is dependent on other charts, those resources will also be included in the manifest
helm get notes <release>      # Shows notes provided by the chart of a named release
helm get values <release>     # Downloads a values file for a given release. Use -o to format the output
```

## Plugin Management

```bash
helm plugin install <path/url>    # Install plugins
helm plugin list                  # View a list of all installed plugins
helm plugin update <plugin>       # Update plugins
helm plugin uninstall <plugin>    # Uninstall a plugin
```
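The commands above compose into a typical workflow. The sketch below is illustrative only: the Bitnami repository URL and the `nginx` chart are assumptions used as examples, and `replicaCount` is a chart-specific value you should confirm with `helm show values` before setting it.

```bash
# Add a repository and refresh the local chart cache
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Find and install a chart
helm search repo nginx
helm install my-nginx bitnami/nginx

# Upgrade, inspect the revision history, and roll back if needed
helm upgrade my-nginx bitnami/nginx --set replicaCount=2   # replicaCount is an assumed, chart-specific value
helm history my-nginx
helm rollback my-nginx 1

# Clean up
helm uninstall my-nginx
```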
--- title: "Installing Helm" description: "Learn how to install and get running with Helm." weight: 2 aliases: ["/docs/install/"] --- This guide shows how to install the Helm CLI. Helm can be installed either from source, or from pre-built binary releases. ## From The Helm Project The Helm project provides two ways to fetch and install Helm. These are the official methods to get Helm releases. In addition to that, the Helm community provides methods to install Helm through different package managers. Installation through those methods can be found below the official methods. ### From the Binary Releases Every [release](https://github.com/helm/helm/releases) of Helm provides binary releases for a variety of OSes. These binary versions can be manually downloaded and installed. 1. Download your [desired version](https://github.com/helm/helm/releases) 2. Unpack it (`tar -zxvf helm-v3.0.0-linux-amd64.tar.gz`) 3. Find the `helm` binary in the unpacked directory, and move it to its desired destination (`mv linux-amd64/helm /usr/local/bin/helm`) From there, you should be able to run the client and [add the stable chart repository](https://helm.sh/docs/intro/quickstart/#initialize-a-helm-chart-repository): `helm help`. **Note:** Helm automated tests are performed for Linux AMD64 only during GitHub Actions builds and releases. Testing of other OSes are the responsibility of the community requesting Helm for the OS in question. ### From Script Helm now has an installer script that will automatically grab the latest version of Helm and [install it locally](https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3). You can fetch that script, and then execute it locally. It's well documented so that you can read through it and understand what it is doing before you run it. ```console $ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 $ chmod 700 get_helm.sh $ ./get_helm.sh ``` Yes, you can `curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash` if you want to live on the edge. ## Through Package Managers The Helm community provides the ability to install Helm through operating system package managers. These are not supported by the Helm project and are not considered trusted 3rd parties. ### From Homebrew (macOS) Members of the Helm community have contributed a Helm formula build to Homebrew. This formula is generally up to date. ```console brew install helm ``` (Note: There is also a formula for emacs-helm, which is a different project.) ### From Chocolatey (Windows) Members of the Helm community have contributed a [Helm package](https://chocolatey.org/packages/kubernetes-helm) build to [Chocolatey](https://chocolatey.org/). This package is generally up to date. ```console choco install kubernetes-helm ``` ### From Scoop (Windows) Members of the Helm community have contributed a [Helm package](https://github.com/ScoopInstaller/Main/blob/master/bucket/helm.json) build to [Scoop](https://scoop.sh). This package is generally up to date. ```console scoop install helm ``` ### From Winget (Windows) Members of the Helm community have contributed a [Helm package](https://github.com/microsoft/winget-pkgs/tree/master/manifests/h/Helm/Helm) build to [Winget](https://learn.microsoft.com/en-us/windows/package-manager/). This package is generally up to date. 
```console
winget install Helm.Helm
```

### From Apt (Debian/Ubuntu)

Members of the Helm community have contributed a [Helm package](https://helm.baltorepo.com/stable/debian/) for Apt. This package is generally up to date.

```console
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
```

### From dnf/yum (Fedora)

Since Fedora 35, helm is available in the official repository. You can install helm by invoking:

```console
sudo dnf install helm
```

### From Snap

The [Snapcrafters](https://github.com/snapcrafters) community maintains the Snap version of the [Helm package](https://snapcraft.io/helm):

```console
sudo snap install helm --classic
```

### From pkg (FreeBSD)

Members of the FreeBSD community have contributed a [Helm package](https://www.freshports.org/sysutils/helm) build to the [FreeBSD Ports Collection](https://man.freebsd.org/ports). This package is generally up to date.

```console
pkg install helm
```

## Development Builds

In addition to releases, you can download or install development snapshots of Helm.

### From Canary Builds

"Canary" builds are versions of the Helm software that are built from the latest `main` branch. They are not official releases, and may not be stable. However, they offer the opportunity to test the cutting edge features.

Canary Helm binaries are stored at [get.helm.sh](https://get.helm.sh). Here are links to the common builds:

- [Linux AMD64](https://get.helm.sh/helm-canary-linux-amd64.tar.gz)
- [macOS AMD64](https://get.helm.sh/helm-canary-darwin-amd64.tar.gz)
- [Experimental Windows AMD64](https://get.helm.sh/helm-canary-windows-amd64.zip)

### From Source (Linux, macOS)

Building Helm from source is slightly more work, but is the best way to go if you want to test the latest (pre-release) Helm version.

You must have a working Go environment.

```console
$ git clone https://github.com/helm/helm.git
$ cd helm
$ make
```

If required, it will fetch the dependencies and cache them, and validate configuration. It will then compile `helm` and place it in `bin/helm`.

## Conclusion

In most cases, installation is as simple as getting a pre-built `helm` binary. This document covers additional cases for those who want to do more sophisticated things with Helm.

Once you have the Helm Client successfully installed, you can move on to using Helm to manage charts and [add the stable chart repository](https://helm.sh/docs/intro/quickstart/#initialize-a-helm-chart-repository).
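As a quick, hedged sanity check after installing (the Bitnami repository is used purely as an example of adding a first chart repository):

```console
$ helm version --short
$ helm help
$ helm repo add bitnami https://charts.bitnami.com/bitnami
```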
--- title: "Using Helm" description: "Explains the basics of Helm." weight: 3 --- This guide explains the basics of using Helm to manage packages on your Kubernetes cluster. It assumes that you have already [installed]() the Helm client. If you are simply interested in running a few quick commands, you may wish to begin with the [Quickstart Guide](). This chapter covers the particulars of Helm commands, and explains how to use Helm. ## Three Big Concepts A *Chart* is a Helm package. It contains all of the resource definitions necessary to run an application, tool, or service inside of a Kubernetes cluster. Think of it like the Kubernetes equivalent of a Homebrew formula, an Apt dpkg, or a Yum RPM file. A *Repository* is the place where charts can be collected and shared. It's like Perl's [CPAN archive](https://www.cpan.org) or the [Fedora Package Database](https://src.fedoraproject.org/), but for Kubernetes packages. A *Release* is an instance of a chart running in a Kubernetes cluster. One chart can often be installed many times into the same cluster. And each time it is installed, a new _release_ is created. Consider a MySQL chart. If you want two databases running in your cluster, you can install that chart twice. Each one will have its own _release_, which will in turn have its own _release name_. With these concepts in mind, we can now explain Helm like this: Helm installs _charts_ into Kubernetes, creating a new _release_ for each installation. And to find new charts, you can search Helm chart _repositories_. ## 'helm search': Finding Charts Helm comes with a powerful search command. It can be used to search two different types of source: - `helm search hub` searches [the Artifact Hub](https://artifacthub.io), which lists helm charts from dozens of different repositories. - `helm search repo` searches the repositories that you have added to your local helm client (with `helm repo add`). This search is done over local data, and no public network connection is needed. You can find publicly available charts by running `helm search hub`: ```console $ helm search hub wordpress URL CHART VERSION APP VERSION DESCRIPTION https://hub.helm.sh/charts/bitnami/wordpress 7.6.7 5.2.4 Web publishing platform for building blogs and ... https://hub.helm.sh/charts/presslabs/wordpress-... v0.6.3 v0.6.3 Presslabs WordPress Operator Helm Chart https://hub.helm.sh/charts/presslabs/wordpress-... v0.7.1 v0.7.1 A Helm chart for deploying a WordPress site on ... ``` The above searches for all `wordpress` charts on Artifact Hub. With no filter, `helm search hub` shows you all of the available charts. `helm search hub` exposes the URL to the location on [artifacthub.io](https://artifacthub.io/) but not the actual Helm repo. `helm search hub --list-repo-url` exposes the actual Helm repo URL which comes in handy when you are looking to add a new repo: `helm repo add [NAME] [URL]`. Using `helm search repo`, you can find the names of the charts in repositories you have already added: ```console $ helm repo add brigade https://brigadecore.github.io/charts "brigade" has been added to your repositories $ helm search repo brigade NAME CHART VERSION APP VERSION DESCRIPTION brigade/brigade 1.3.2 v1.2.1 Brigade provides event-driven scripting of Kube... brigade/brigade-github-app 0.4.1 v0.2.1 The Brigade GitHub App, an advanced gateway for... 
brigade/brigade-github-oauth 0.2.0 v0.20.0 The legacy OAuth GitHub Gateway for Brigade brigade/brigade-k8s-gateway 0.1.0 A Helm chart for Kubernetes brigade/brigade-project 1.0.0 v1.0.0 Create a Brigade project brigade/kashti 0.4.0 v0.4.0 A Helm chart for Kubernetes ``` Helm search uses a fuzzy string matching algorithm, so you can type parts of words or phrases: ```console $ helm search repo kash NAME CHART VERSION APP VERSION DESCRIPTION brigade/kashti 0.4.0 v0.4.0 A Helm chart for Kubernetes ``` Search is a good way to find available packages. Once you have found a package you want to install, you can use `helm install` to install it. ## 'helm install': Installing a Package To install a new package, use the `helm install` command. At its simplest, it takes two arguments: A release name that you pick, and the name of the chart you want to install. ```console $ helm install happy-panda bitnami/wordpress NAME: happy-panda LAST DEPLOYED: Tue Jan 26 10:27:17 2021 NAMESPACE: default STATUS: deployed REVISION: 1 NOTES: ** Please be patient while the chart is being deployed ** Your WordPress site can be accessed through the following DNS name from within your cluster: happy-panda-wordpress.default.svc.cluster.local (port 80) To access your WordPress site from outside the cluster follow the steps below: 1. Get the WordPress URL by running these commands: NOTE: It may take a few minutes for the LoadBalancer IP to be available. Watch the status with: 'kubectl get svc --namespace default -w happy-panda-wordpress' export SERVICE_IP=$(kubectl get svc --namespace default happy-panda-wordpress --template "") echo "WordPress URL: http://$SERVICE_IP/" echo "WordPress Admin URL: http://$SERVICE_IP/admin" 2. Open a browser and access WordPress using the obtained URL. 3. Login with the following credentials below to see your blog: echo Username: user echo Password: $(kubectl get secret --namespace default happy-panda-wordpress -o jsonpath="{.data.wordpress-password}" | base64 --decode) ``` Now the `wordpress` chart is installed. Note that installing a chart creates a new _release_ object. The release above is named `happy-panda`. (If you want Helm to generate a name for you, leave off the release name and use `--generate-name`.) During installation, the `helm` client will print useful information about which resources were created, what the state of the release is, and also whether there are additional configuration steps you can or should take. Helm installs resources in the following order: - Namespace - NetworkPolicy - ResourceQuota - LimitRange - PodSecurityPolicy - PodDisruptionBudget - ServiceAccount - Secret - SecretList - ConfigMap - StorageClass - PersistentVolume - PersistentVolumeClaim - CustomResourceDefinition - ClusterRole - ClusterRoleList - ClusterRoleBinding - ClusterRoleBindingList - Role - RoleList - RoleBinding - RoleBindingList - Service - DaemonSet - Pod - ReplicationController - ReplicaSet - Deployment - HorizontalPodAutoscaler - StatefulSet - Job - CronJob - Ingress - APIService Helm does not wait until all of the resources are running before it exits. Many charts require Docker images that are over 600MB in size, and may take a long time to install into the cluster. 
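If you would rather have Helm block until the release's resources are ready, you can pass the `--wait` and `--timeout` flags described later in this guide. A minimal sketch, reusing the release from above:

```console
$ helm install happy-panda bitnami/wordpress --wait --timeout 10m0s
```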
To keep track of a release's state, or to re-read configuration information, you can use `helm status`: ```console $ helm status happy-panda NAME: happy-panda LAST DEPLOYED: Tue Jan 26 10:27:17 2021 NAMESPACE: default STATUS: deployed REVISION: 1 NOTES: ** Please be patient while the chart is being deployed ** Your WordPress site can be accessed through the following DNS name from within your cluster: happy-panda-wordpress.default.svc.cluster.local (port 80) To access your WordPress site from outside the cluster follow the steps below: 1. Get the WordPress URL by running these commands: NOTE: It may take a few minutes for the LoadBalancer IP to be available. Watch the status with: 'kubectl get svc --namespace default -w happy-panda-wordpress' export SERVICE_IP=$(kubectl get svc --namespace default happy-panda-wordpress --template "") echo "WordPress URL: http://$SERVICE_IP/" echo "WordPress Admin URL: http://$SERVICE_IP/admin" 2. Open a browser and access WordPress using the obtained URL. 3. Login with the following credentials below to see your blog: echo Username: user echo Password: $(kubectl get secret --namespace default happy-panda-wordpress -o jsonpath="{.data.wordpress-password}" | base64 --decode) ``` The above shows the current state of your release. ### Customizing the Chart Before Installing Installing the way we have here will only use the default configuration options for this chart. Many times, you will want to customize the chart to use your preferred configuration. To see what options are configurable on a chart, use `helm show values`: ```console $ helm show values bitnami/wordpress ## Global Docker image parameters ## Please, note that this will override the image parameters, including dependencies, configured to use the global value ## Current available global Docker image parameters: imageRegistry and imagePullSecrets ## # global: # imageRegistry: myRegistryName # imagePullSecrets: # - myRegistryKeySecretName # storageClass: myStorageClass ## Bitnami WordPress image version ## ref: https://hub.docker.com/r/bitnami/wordpress/tags/ ## image: registry: docker.io repository: bitnami/wordpress tag: 5.6.0-debian-10-r35 [..] ``` You can then override any of these settings in a YAML formatted file, and then pass that file during installation. ```console $ echo '{mariadb.auth.database: user0db, mariadb.auth.username: user0}' > values.yaml $ helm install -f values.yaml bitnami/wordpress --generate-name ``` The above will create a default MariaDB user with the name `user0`, and grant this user access to a newly created `user0db` database, but will accept all the rest of the defaults for that chart. There are two ways to pass configuration data during install: - `--values` (or `-f`): Specify a YAML file with overrides. This can be specified multiple times and the rightmost file will take precedence - `--set`: Specify overrides on the command line. If both are used, `--set` values are merged into `--values` with higher precedence. Overrides specified with `--set` are persisted in a Secret. Values that have been `--set` can be viewed for a given release with `helm get values <release-name>`. Values that have been `--set` can be cleared by running `helm upgrade` with `--reset-values` specified. #### The Format and Limitations of `--set` The `--set` option takes zero or more name/value pairs. At its simplest, it is used like this: `--set name=value`. The YAML equivalent of that is: ```yaml name: value ``` Multiple values are separated by `,` characters. 
So `--set a=b,c=d` becomes: ```yaml a: b c: d ``` More complex expressions are supported. For example, `--set outer.inner=value` is translated into this: ```yaml outer: inner: value ``` Lists can be expressed by enclosing values in `{` and `}`. For example, `--set name={a, b, c}` translates to: ```yaml name: - a - b - c ``` Certain name/key can be set to be `null` or to be an empty array `[]`. For example, `--set name=[],a=null` translates ```yaml name: - a - b - c a: b ``` to ```yaml name: [] a: null ``` As of Helm 2.5.0, it is possible to access list items using an array index syntax. For example, `--set servers[0].port=80` becomes: ```yaml servers: - port: 80 ``` Multiple values can be set this way. The line `--set servers[0].port=80,servers[0].host=example` becomes: ```yaml servers: - port: 80 host: example ``` Sometimes you need to use special characters in your `--set` lines. You can use a backslash to escape the characters; `--set name=value1\,value2` will become: ```yaml name: "value1,value2" ``` Similarly, you can escape dot sequences as well, which may come in handy when charts use the `toYaml` function to parse annotations, labels and node selectors. The syntax for `--set nodeSelector."kubernetes\.io/role"=master` becomes: ```yaml nodeSelector: kubernetes.io/role: master ``` Deeply nested data structures can be difficult to express using `--set`. Chart designers are encouraged to consider the `--set` usage when designing the format of a `values.yaml` file (read more about [Values Files](../chart_template_guide/values_files/)). ### More Installation Methods The `helm install` command can install from several sources: - A chart repository (as we've seen above) - A local chart archive (`helm install foo foo-0.1.1.tgz`) - An unpacked chart directory (`helm install foo path/to/foo`) - A full URL (`helm install foo https://example.com/charts/foo-1.2.3.tgz`) ## 'helm upgrade' and 'helm rollback': Upgrading a Release, and Recovering on Failure When a new version of a chart is released, or when you want to change the configuration of your release, you can use the `helm upgrade` command. An upgrade takes an existing release and upgrades it according to the information you provide. Because Kubernetes charts can be large and complex, Helm tries to perform the least invasive upgrade. It will only update things that have changed since the last release. ```console $ helm upgrade -f panda.yaml happy-panda bitnami/wordpress ``` In the above case, the `happy-panda` release is upgraded with the same chart, but with a new YAML file: ```yaml mariadb.auth.username: user1 ``` We can use `helm get values` to see whether that new setting took effect. ```console $ helm get values happy-panda mariadb: auth: username: user1 ``` The `helm get` command is a useful tool for looking at a release in the cluster. And as we can see above, it shows that our new values from `panda.yaml` were deployed to the cluster. Now, if something does not go as planned during a release, it is easy to roll back to a previous release using `helm rollback [RELEASE] [REVISION]`. ```console $ helm rollback happy-panda 1 ``` The above rolls back our happy-panda to its very first release version. A release version is an incremental revision. Every time an install, upgrade, or rollback happens, the revision number is incremented by 1. The first revision number is always 1. And we can use `helm history [RELEASE]` to see revision numbers for a certain release. 
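For example, after an install, an upgrade, and a rollback, `helm history` output looks roughly like the following. The revisions, timestamps, and chart version shown here are illustrative only and will differ in your cluster:

```console
$ helm history happy-panda
REVISION  UPDATED                   STATUS      CHART             APP VERSION  DESCRIPTION
1         Tue Jan 26 10:27:17 2021  superseded  wordpress-10.1.5  5.6.0        Install complete
2         Tue Jan 26 10:45:02 2021  superseded  wordpress-10.1.5  5.6.0        Upgrade complete
3         Tue Jan 26 10:50:31 2021  deployed    wordpress-10.1.5  5.6.0        Rollback to 1
```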
## Helpful Options for Install/Upgrade/Rollback There are several other helpful options you can specify for customizing the behavior of Helm during an install/upgrade/rollback. Please note that this is not a full list of cli flags. To see a description of all flags, just run `helm <command> --help`. - `--timeout`: A [Go duration](https://golang.org/pkg/time/#ParseDuration) value to wait for Kubernetes commands to complete. This defaults to `5m0s`. - `--wait`: Waits until all Pods are in a ready state, PVCs are bound, Deployments have minimum (`Desired` minus `maxUnavailable`) Pods in ready state and Services have an IP address (and Ingress if a `LoadBalancer`) before marking the release as successful. It will wait for as long as the `--timeout` value. If timeout is reached, the release will be marked as `FAILED`. Note: In scenarios where Deployment has `replicas` set to 1 and `maxUnavailable` is not set to 0 as part of rolling update strategy, `--wait` will return as ready as it has satisfied the minimum Pod in ready condition. - `--no-hooks`: This skips running hooks for the command - `--recreate-pods` (only available for `upgrade` and `rollback`): This flag will cause all pods to be recreated (with the exception of pods belonging to deployments). (DEPRECATED in Helm 3) ## 'helm uninstall': Uninstalling a Release When it is time to uninstall a release from the cluster, use the `helm uninstall` command: ```console $ helm uninstall happy-panda ``` This will remove the release from the cluster. You can see all of your currently deployed releases with the `helm list` command: ```console $ helm list NAME VERSION UPDATED STATUS CHART inky-cat 1 Wed Sep 28 12:59:46 2016 DEPLOYED alpine-0.1.0 ``` From the output above, we can see that the `happy-panda` release was uninstalled. In previous versions of Helm, when a release was deleted, a record of its deletion would remain. In Helm 3, deletion removes the release record as well. If you wish to keep a deletion release record, use `helm uninstall --keep-history`. Using `helm list --uninstalled` will only show releases that were uninstalled with the `--keep-history` flag. The `helm list --all` flag will show you all release records that Helm has retained, including records for failed or deleted items (if `--keep-history` was specified): ```console $ helm list --all NAME VERSION UPDATED STATUS CHART happy-panda 2 Wed Sep 28 12:47:54 2016 UNINSTALLED wordpress-10.4.5.6.0 inky-cat 1 Wed Sep 28 12:59:46 2016 DEPLOYED alpine-0.1.0 kindred-angelf 2 Tue Sep 27 16:16:10 2016 UNINSTALLED alpine-0.1.0 ``` Note that because releases are now deleted by default, it is no longer possible to rollback an uninstalled resource. ## 'helm repo': Working with Repositories Helm 3 no longer ships with a default chart repository. The `helm repo` command group provides commands to add, list, and remove repositories. You can see which repositories are configured using `helm repo list`: ```console $ helm repo list NAME URL stable https://charts.helm.sh/stable mumoshu https://mumoshu.github.io/charts ``` And new repositories can be added with `helm repo add [NAME] [URL]`: ```console $ helm repo add dev https://example.com/dev-charts ``` Because chart repositories change frequently, at any point you can make sure your Helm client is up to date by running `helm repo update`. Repositories can be removed with `helm repo remove`. ## Creating Your Own Charts The [Chart Development Guide]() explains how to develop your own charts. 
But you can get started quickly by using the `helm create` command: ```console $ helm create deis-workflow Creating deis-workflow ``` Now there is a chart in `./deis-workflow`. You can edit it and create your own templates. As you edit your chart, you can validate that it is well-formed by running `helm lint`. When it's time to package the chart up for distribution, you can run the `helm package` command: ```console $ helm package deis-workflow deis-workflow-0.1.0.tgz ``` And that chart can now easily be installed by `helm install`: ```console $ helm install deis-workflow ./deis-workflow-0.1.0.tgz ... ``` Charts that are packaged can be loaded into chart repositories. See the documentation for [Helm chart repositories]() for more details. ## Conclusion This chapter has covered the basic usage patterns of the `helm` client, including searching, installation, upgrading, and uninstalling. It has also covered useful utility commands like `helm status`, `helm get`, and `helm repo`. For more information on these commands, take a look at Helm's built-in help: `helm help`. In the [next chapter](../howto/charts_tips_and_tricks/), we look at the process of developing charts.
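One more hedged example of the `helm get` family mentioned above: `helm get values` can show just your overrides, the fully computed values, or the values of an older revision:

```console
$ helm get values happy-panda
$ helm get values happy-panda --all
$ helm get values happy-panda --revision 1
```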
--- title: "Templates" description: "A closer look at best practices surrounding templates." weight: 3 aliases: ["/docs/topics/chart_best_practices/templates/"] --- This part of the Best Practices Guide focuses on templates. ## Structure of `templates/` The `templates/` directory should be structured as follows: - Template files should have the extension `.yaml` if they produce YAML output. The extension `.tpl` may be used for template files that produce no formatted content. - Template file names should use dashed notation (`my-example-configmap.yaml`), not camelcase. - Each resource definition should be in its own template file. - Template file names should reflect the resource kind in the name. e.g. `foo-pod.yaml`, `bar-svc.yaml` ## Names of Defined Templates Defined templates (templates created inside a ` ` directive) are globally accessible. That means that a chart and all of its subcharts will have access to all of the templates created with ``. For that reason, _all defined template names should be namespaced._ Correct: ```yaml ``` Incorrect: ```yaml ``` It is highly recommended that new charts are created via `helm create` command as the template names are automatically defined as per this best practice. ## Formatting Templates Templates should be indented using _two spaces_ (never tabs). Template directives should have whitespace after the opening braces and before the closing braces: Correct: ``` ``` Incorrect: ``` ``` Templates should chomp whitespace where possible: ```yaml foo: ``` Blocks (such as control structures) may be indented to indicate flow of the template code. ``` Hello ``` However, since YAML is a whitespace-oriented language, it is often not possible for code indentation to follow that convention. ## Whitespace in Generated Templates It is preferable to keep the amount of whitespace in generated templates to a minimum. In particular, numerous blank lines should not appear adjacent to each other. But occasional empty lines (particularly between logical sections) is fine. This is best: ```yaml apiVersion: batch/v1 kind: Job metadata: name: example labels: first: first second: second ``` This is okay: ```yaml apiVersion: batch/v1 kind: Job metadata: name: example labels: first: first second: second ``` But this should be avoided: ```yaml apiVersion: batch/v1 kind: Job metadata: name: example labels: first: first second: second ``` ## Comments (YAML Comments vs. Template Comments) Both YAML and Helm Templates have comment markers. YAML comments: ```yaml # This is a comment type: sprocket ``` Template Comments: ```yaml type: frobnitz ``` Template comments should be used when documenting features of a template, such as explaining a defined template: ```yaml ``` Inside of templates, YAML comments may be used when it is useful for Helm users to (possibly) see the comments during debugging. ```yaml # This may cause problems if the value is more than 100Gi memory: ``` The comment above is visible when the user runs `helm install --debug`, while comments specified in `` sections are not. Beware of adding `#` YAML comments on template sections containing Helm values that may be required by certain template functions. For example, if `required` function is introduced to the above example, and `maxMem` is unset, then a `#` YAML comment will introduce a rendering error. 
Correct: `helm template` does not render this block

```yaml
{{- /*
# This may cause problems if the value is more than 100Gi
memory: {{ required "maxMem must be set" .Values.maxMem | quote }}
*/ -}}
```

Incorrect: `helm template` returns `Error: execution error at (templates/test.yaml:2:13): maxMem must be set`

```yaml
# This may cause problems if the value is more than 100Gi
# memory: {{ required "maxMem must be set" .Values.maxMem | quote }}
```

Review [Debugging Templates](../chart_template_guide/debugging.md) for another example of this behavior of how YAML comments are left intact.

## Use of JSON in Templates and Template Output

YAML is a superset of JSON. In some cases, using a JSON syntax can be more readable than other YAML representations.

For example, this YAML is closer to the normal YAML method of expressing lists:

```yaml
arguments:
  - "--dirname"
  - "/foo"
```

But it is easier to read when collapsed into a JSON list style:

```yaml
arguments: ["--dirname", "/foo"]
```

Using JSON for increased legibility is good. However, JSON syntax should not be used for representing more complex constructs.

When dealing with pure JSON embedded inside of YAML (such as init container configuration), it is of course appropriate to use the JSON format.
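To see how template comments and YAML comments behave differently in practice, you can render a chart locally. This is a minimal sketch assuming a chart directory named `./mychart`; only standard Helm 3 commands are used:

```console
# Template comments ({{/* ... */}}) are stripped from the rendered output,
# while plain '#' YAML comments are left intact.
$ helm template my-release ./mychart

# Dry-run install with debug output, useful for spotting stray whitespace
# and YAML comments that survive rendering.
$ helm install my-release ./mychart --dry-run --debug
```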
helm
---
title: "Templates"
description: "A closer look at best practices surrounding templates."
weight: 3
aliases: ["/docs/topics/chart_best_practices/templates/"]
---

This part of the Best Practices Guide focuses on templates.

## Structure of `templates/`

The `templates/` directory should be structured as follows:

- Template files should have the extension `.yaml` if they produce YAML output. The extension `.tpl` may be used for template files that produce no formatted content.
- Template file names should use dashed notation (`my-example-configmap.yaml`), not camelcase.
- Each resource definition should be in its own template file.
- Template file names should reflect the resource kind in the name, e.g. `foo-pod.yaml`, `bar-svc.yaml`.

## Names of Defined Templates

Defined templates (templates created inside a `{{ define }}` directive) are globally accessible. That means that a chart and all of its subcharts will have access to all of the templates created with `{{ define }}`.

For that reason, _all defined template names should be namespaced._

Correct:

```yaml
{{- define "nginx.fullname" }}
{{/* ... */}}
{{ end -}}
```

Incorrect:

```yaml
{{- define "fullname" -}}
{{/* ... */}}
{{ end -}}
```

It is highly recommended that new charts are created via the `helm create` command, as the template names are automatically defined as per this best practice.

## Formatting Templates

Templates should be indented using two spaces (never tabs).

Template directives should have whitespace after the opening braces and before the closing braces:

Correct:

```
{{ .foo }}
{{ print "foo" }}
{{- print "bar" -}}
```

Incorrect:

```
{{.foo}}
{{print "foo"}}
{{-print "bar"-}}
```

Templates should chomp whitespace where possible:

```yaml
foo:
  {{- range .Values.items }}
  {{ . }}
{{ end -}}
```

Blocks (such as control structures) may be indented to indicate flow of the template code.

```
{{ if $foo -}}
  {{- with .Bar }}Hello{{ end -}}
{{- end -}}
```

However, since YAML is a whitespace-oriented language, it is often not possible for code indentation to follow that convention.

## Whitespace in Generated Templates

It is preferable to keep the amount of whitespace in generated templates to a minimum. In particular, numerous blank lines should not appear adjacent to each other. But occasional empty lines (particularly between logical sections) are fine.

This is best:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example
  labels:
    first: first
    second: second
```

This is okay:

```yaml
apiVersion: batch/v1
kind: Job

metadata:
  name: example

  labels:
    first: first
    second: second
```

But this should be avoided:

```yaml
apiVersion: batch/v1
kind: Job

metadata:
  name: example




  labels:
    first: first

    second: second
```

## Comments (YAML Comments vs. Template Comments)

Both YAML and Helm Templates have comment markers.

YAML comments:

```yaml
# This is a comment
type: sprocket
```

Template comments:

```yaml
{{- /*
This is a comment.
*/}}
type: frobnitz
```

Template comments should be used when documenting features of a template, such as explaining a defined template:

```yaml
{{- /*
mychart.shortname provides a 6 char truncated version of the release name.
*/}}
{{ define "mychart.shortname" -}}
{{ .Release.Name | trunc 6 }}
{{- end -}}
```

Inside of templates, YAML comments may be used when it is useful for Helm users to (possibly) see the comments during debugging:

```yaml
# This may cause problems if the value is more than 100Gi
memory: {{ .Values.maxMem | quote }}
```

The comment above is visible when the user runs `helm install --debug`, while comments specified in `{{- /* */}}` sections are not.

Beware of adding YAML comments on template sections containing Helm values that may be required by certain template functions. For example, if the `required` function is introduced to the above example and `maxMem` is unset, then a YAML comment will introduce a rendering error.

Correct: `helm template` does not render this block

```yaml
{{- /*
# This may cause problems if the value is more than 100Gi
memory: {{ required "maxMem must be set" .Values.maxMem | quote }}
*/}}
```

Incorrect: `helm template` returns `Error: execution error at (templates/test.yaml:2:13): maxMem must be set`

```yaml
# This may cause problems if the value is more than 100Gi
# memory: {{ required "maxMem must be set" .Values.maxMem | quote }}
```

Review [Debugging Templates](chart_template_guide/debugging.md) for another example of how YAML comments are left intact.

## Use of JSON in Templates and Template Output

YAML is a superset of JSON. In some cases, using a JSON syntax can be more readable than other YAML representations.

For example, this YAML is closer to the normal YAML method of expressing lists:

```yaml
arguments:
  - "--dirname"
  - "/foo"
```

But it is easier to read when collapsed into a JSON list style:

```yaml
arguments: ["--dirname", "/foo"]
```

Using JSON for increased legibility is good. However, JSON syntax should not be used for representing more complex constructs.

When dealing with pure JSON embedded inside of YAML (such as init container configuration), it is of course appropriate to use the JSON format.
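As a minimal sketch of that last point (the manifest and the `example.com/config` annotation are hypothetical, not part of the original guide), a pure-JSON payload can be embedded as-is rather than re-expressed as nested YAML:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: {{ .Release.Name }}-example
  annotations:
    # A JSON document carried in an annotation: keeping it in JSON form is
    # clearer than converting it into nested YAML.
    example.com/config: '{"retries": 3, "endpoints": ["a", "b"]}'
spec:
  containers:
    - name: app
      image: nginx
      # JSON list style for a short argument list, as recommended above
      args: ["--dirname", "/foo"]
```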
---
title: "Values"
description: "Focuses on how you should structure and use your values."
weight: 2
aliases: ["/docs/topics/chart_best_practices/values/"]
---

This part of the best practices guide covers using values. In this part of the guide, we provide recommendations on how you should structure and use your values, with focus on designing a chart's `values.yaml` file.

## Naming Conventions

Variable names should begin with a lowercase letter, and words should be separated with camelcase:

Correct:

```yaml
chicken: true
chickenNoodleSoup: true
```

Incorrect:

```yaml
Chicken: true  # initial caps may conflict with built-ins
chicken-noodle-soup: true  # do not use hyphens in the name
```

Note that all of Helm's built-in variables begin with an uppercase letter to easily distinguish them from user-defined values: `.Release.Name`, `.Capabilities.KubeVersion`.

## Flat or Nested Values

YAML is a flexible format, and values may be nested deeply or flattened.

Nested:

```yaml
server:
  name: nginx
  port: 80
```

Flat:

```yaml
serverName: nginx
serverPort: 80
```

In most cases, flat should be favored over nested. The reason for this is that it is simpler for template developers and users.

For optimal safety, a nested value must be checked at every level:

```
{{ if .Values.server }}
  {{ default "none" .Values.server.name }}
{{ end }}
```

For every layer of nesting, an existence check must be done. But for flat configuration, such checks can be skipped, making the template easier to read and use.

```
{{ default "none" .Values.serverName }}
```

When there are a large number of related variables, and at least one of them is non-optional, nested values may be used to improve readability.

## Make Types Clear

YAML's type coercion rules are sometimes counterintuitive. For example, `foo: false` is not the same as `foo: "false"`. Large integers like `foo: 12345678` will get converted to scientific notation in some cases.

The easiest way to avoid type conversion errors is to be explicit about strings, and implicit about everything else. Or, in short, _quote all strings_.

Often, to avoid the integer casting issues, it is advantageous to store your integers as strings as well, and use `{{ int $value }}` in the template to convert from a string back to an integer.

In most cases, explicit type tags are respected, so `foo: !!string 1234` should treat `1234` as a string. _However_, the YAML parser consumes tags, so the type data is lost after one parse.

## Consider How Users Will Use Your Values

There are three potential sources of values:

- A chart's `values.yaml` file
- A values file supplied by `helm install -f` or `helm upgrade -f`
- The values passed to a `--set` or `--set-string` flag on `helm install` or `helm upgrade`

When designing the structure of your values, keep in mind that users of your chart may want to override them via either the `-f` flag or with the `--set` option.

Since `--set` is more limited in expressiveness, the first guideline for writing your `values.yaml` file is _make it easy to override from `--set`_.

For this reason, it's often better to structure your values file using maps.

Difficult to use with `--set`:

```yaml
servers:
  - name: foo
    port: 80
  - name: bar
    port: 81
```

The above cannot be expressed with `--set` in Helm `<=2.4`. In Helm 2.5, accessing the port on foo is `--set servers[0].port=80`. Not only is it harder for the user to figure out, but it is prone to errors if at some later time the order of the `servers` is changed.

Easy to use:

```yaml
servers:
  foo:
    port: 80
  bar:
    port: 81
```

Accessing foo's port is much more obvious: `--set servers.foo.port=80`.
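Pulling the guidance above together, here is a minimal sketch of a `values.yaml` fragment; the key names `imageTag` and `maxUploadSize` are illustrative, not part of this guide:

```yaml
imageTag: "1.21"            # quoted, so YAML does not coerce it to the float 1.21
maxUploadSize: "104857600"  # large integer stored as a string to avoid scientific notation
servers:
  foo:
    port: 80                # easy to override with: --set servers.foo.port=8080
```

Where a numeric string needs to be treated as a number again, Helm's `int` function converts it back in the template, for example `{{ int .Values.maxUploadSize }}`.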
## Document `values.yaml` Every defined property in `values.yaml` should be documented. The documentation string should begin with the name of the property that it describes, and then give at least a one-sentence description. Incorrect: ```yaml # the host name for the webserver serverHost: example serverPort: 9191 ``` Correct: ```yaml # serverHost is the host name for the webserver serverHost: example # serverPort is the HTTP listener port for the webserver serverPort: 9191 ``` Beginning each comment with the name of the parameter it documents makes it easy to grep out documentation, and will enable documentation tools to reliably correlate doc strings with the parameters they describe.
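One way to extend the same convention to nested values is to prefix each comment with the full path of the key it documents; this sketch uses hypothetical descriptions for the `servers` map shown earlier:

```yaml
# servers is the map of backend servers managed by this chart
servers:
  # servers.foo configures the primary backend
  foo:
    # servers.foo.port is the listener port for the foo backend
    port: 80
```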