Columns: schema — string (471 classes) · key — string (length 0–203) · description — string (length 0–4.37k) · object — string (length 2–322k)
datahub_ingestion_schema.json
token_uri
Token URI
{"default": "https://oauth2.googleapis.com/token", "type": "string"}
datahub_ingestion_schema.json
auth_provider_x509_cert_url
Auth provider X.509 certificate URL
{"default": "https://www.googleapis.com/oauth2/v1/certs", "type": "string"}
datahub_ingestion_schema.json
type
Authentication type
{"default": "service_account", "type": "string"}
datahub_ingestion_schema.json
client_x509_cert_url
If not set, this will default to https://www.googleapis.com/robot/v1/metadata/x509/client_email
{"type": "string"}
datahub_ingestion_schema.json
GitInfo
A reference to a Git repository, including a deploy key that can be used to clone it.
{"type": "object", "properties": {"repo": {"type": "string"}, "branch": {"default": "main", "type": "string"}, "url_template": {"type": "string"}, "deploy_key_file": {"format": "file-path", "type": "string"}, "deploy_key": {"type": "string", "writeOnly": true, "format": "password"}, "repo_ssh_locator": {"type": "string"}}, "required": ["repo"], "additionalProperties": false}
datahub_ingestion_schema.json
repo
Name of your Git repo, e.g. https://github.com/datahub-project/datahub or https://gitlab.com/gitlab-org/gitlab. If only organization/repo is provided, we assume it is a GitHub repo.
{"type": "string"}
datahub_ingestion_schema.json
branch
Branch on which your files live by default. Typically main or master. This can also be a commit hash.
{"default": "main", "type": "string"}
datahub_ingestion_schema.json
url_template
Template for generating a URL to a file in the repo, e.g. '{repo_url}/blob/{branch}/{file_path}'. We can infer this for GitHub and GitLab repos; it is otherwise required. It supports the following variables: {repo_url}, {branch}, {file_path}
{"type": "string"}
datahub_ingestion_schema.json
deploy_key_file
A private key file that contains an ssh key that has been configured as a deploy key for this repository. Use a file where possible, else see deploy_key for a config field that accepts a raw string.
{"format": "file-path", "type": "string"}
datahub_ingestion_schema.json
deploy_key
A private key that contains an ssh key that has been configured as a deploy key for this repository. See deploy_key_file if you want to use a file that contains this key.
{"type": "string", "writeOnly": true, "format": "password"}
datahub_ingestion_schema.json
repo_ssh_locator
The URL to call `git clone` on. We infer this for GitHub and GitLab repos, but it is required for other hosts.
{"type": "string"}
datahub_ingestion_schema.json
platform_env
The environment that the platform is located in. Leaving this empty will inherit defaults from the top-level Looker configuration.
{"type": "string"}
datahub_ingestion_schema.json
client_id
Looker API client ID.
{"type": "string"}
datahub_ingestion_schema.json
client_secret
Looker API client secret.
{"type": "string"}
datahub_ingestion_schema.json
base_url
URL of your Looker instance: `https://company.looker.com:19999` or `https://looker.company.com`, or similar. Used for making API calls to Looker and constructing clickable dashboard and chart URLs.
{"type": "string"}
datahub_ingestion_schema.json
transport_options
Populates the [TransportOptions](https://github.com/looker-open-source/sdk-codegen/blob/94d6047a0d52912ac082eb91616c1e7c379ab262/python/looker_sdk/rtl/transport.py#L70) struct for the Looker client
{"allOf": [{}]}
datahub_ingestion_schema.json
include
Path to the table. The named variable `{table}` marks the folder containing the dataset. If `{table}` is absent, a file-level dataset is created. See the examples below for more details.
{"type": "string"}
datahub_ingestion_schema.json
exclude
List of paths, in glob pattern, to exclude while scanning for datasets
{"type": "array", "items": {"type": "string"}}
datahub_ingestion_schema.json
file_types
Only files with the extensions specified here (a subset of the default value) will be scanned to create datasets. Other files will be omitted.
{"default": ["csv", "tsv", "json", "parquet", "avro"], "type": "array", "items": {"type": "string"}}
datahub_ingestion_schema.json
default_extension
For files without an extension, the specified file type is assumed. If this is not set, files without extensions are skipped.
{"type": "string"}
datahub_ingestion_schema.json
table_name
Display name of the dataset. A combination of named variables from the include path and literal strings.
{"type": "string"}
datahub_ingestion_schema.json
enable_compression
Enable or disable processing compressed files. Currently .gz and .bz files are supported.
{"default": true, "type": "boolean"}
datahub_ingestion_schema.json
sample_files
Instead of listing all files, only a small sample of files is taken to infer the schema. File count and file size calculation will be disabled. Enabling this can significantly affect performance.
{"default": true, "type": "boolean"}
datahub_ingestion_schema.json
S3LineageProviderConfig
Any source that produces S3 lineage from/to datasets should inherit this class.
{"type": "object", "properties": {"path_specs": {"type": "array", "items": {}}}, "required": ["path_specs"], "additionalProperties": false}
datahub_ingestion_schema.json
path_specs
List of PathSpec objects. See below for details about PathSpec.
{"type": "array", "items": {}}
datahub_ingestion_schema.json
LineageMode
An enumeration.
{"enum": ["sql_based", "stl_scan_based", "mixed"]}
datahub_ingestion_schema.json
EmitDirective
A holder for emission directives for specific entity types
{"enum": ["YES", "NO", "ONLY"]}
datahub_ingestion_schema.json
DBTEntitiesEnabled
Controls which dbt entities are emitted by this source
{"type": "object", "properties": {"models": {"default": "YES", "allOf": [{}]}, "sources": {"default": "YES", "allOf": [{}]}, "seeds": {"default": "YES", "allOf": [{}]}, "snapshots": {"default": "YES", "allOf": [{}]}, "test_definitions": {"default": "YES", "allOf": [{}]}, "test_results": {"default": "YES", "allOf": [{}]}}, "additionalProperties": false}
datahub_ingestion_schema.json
models
Emit metadata for dbt models when set to Yes or Only
{"default": "YES", "allOf": [{}]}
datahub_ingestion_schema.json
sources
Emit metadata for dbt sources when set to Yes or Only
{"default": "YES", "allOf": [{}]}
datahub_ingestion_schema.json
seeds
Emit metadata for dbt seeds when set to Yes or Only
{"default": "YES", "allOf": [{}]}
datahub_ingestion_schema.json
snapshots
Emit metadata for dbt snapshots when set to Yes or Only
{"default": "YES", "allOf": [{}]}
datahub_ingestion_schema.json
test_definitions
Emit metadata for dbt test definitions when set to Yes or Only
{"default": "YES", "allOf": [{}]}
datahub_ingestion_schema.json
test_results
Emit metadata for test results when set to Yes or Only
{"default": "YES", "allOf": [{}]}
datahub_ingestion_schema.json
RoleArn
ARN of the role to assume.
{"type": "string"}
datahub_ingestion_schema.json
ExternalId
External ID to use when assuming the role.
{"type": "string"}
datahub_ingestion_schema.json
AwsConnectionConfig
Common AWS credentials config. Currently used by the Glue, SageMaker, and dbt sources.
{"type": "object", "properties": {"aws_access_key_id": {"type": "string"}, "aws_secret_access_key": {"type": "string"}, "aws_session_token": {"type": "string"}, "aws_role": {"anyOf": [{"type": "string"}, {"type": "array", "items": {"anyOf": [{"type": "string"}, {}]}}]}, "aws_profile": {"type": "string"}, "aws_region": {"type": "string"}, "aws_endpoint_url": {"type": "string"}, "aws_proxy": {"type": "object", "additionalProperties": {"type": "string"}}}, "required": ["aws_region"], "additionalProperties": false}
datahub_ingestion_schema.json
aws_access_key_id
AWS access key ID. Can be auto-detected, see [the AWS boto3 docs](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html) for details.
{"type": "string"}
datahub_ingestion_schema.json
aws_secret_access_key
AWS secret access key. Can be auto-detected, see [the AWS boto3 docs](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html) for details.
{"type": "string"}
datahub_ingestion_schema.json
aws_session_token
AWS session token. Can be auto-detected, see [the AWS boto3 docs](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html) for details.
{"type": "string"}
datahub_ingestion_schema.json
aws_role
AWS roles to assume. If using the string format, the role ARN can be specified directly. If using the object format, the role can be specified in the RoleArn field and additional available arguments are documented at https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sts.html?highlight=assume_role#STS.Client.assume_role
{"anyOf": [{"type": "string"}, {"type": "array", "items": {"anyOf": [{"type": "string"}, {}]}}]}
datahub_ingestion_schema.json
aws_profile
Named AWS profile to use. Only used if the access key / secret are unset. If not set, the default profile will be used.
{"type": "string"}
datahub_ingestion_schema.json
aws_region
AWS region code.
{"type": "string"}
datahub_ingestion_schema.json
aws_endpoint_url
The AWS service endpoint. This is normally [constructed automatically](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/core/session.html), but can be overridden here.
{"type": "string"}
datahub_ingestion_schema.json
aws_proxy
A set of proxy configs to use with AWS. See the [botocore.config](https://botocore.amazonaws.com/v1/documentation/api/latest/reference/config.html) docs for details.
{"type": "object", "additionalProperties": {"type": "string"}}
datahub_ingestion_schema.json
GitReference
Reference to a hosted Git repository. Used to generate "view source" links.
{"type": "object", "properties": {"repo": {"type": "string"}, "branch": {"default": "main", "type": "string"}, "url_template": {"type": "string"}}, "required": ["repo"], "additionalProperties": false}
datahub_ingestion_schema.json
repo
Name of your Git repo, e.g. https://github.com/datahub-project/datahub or https://gitlab.com/gitlab-org/gitlab. If only organization/repo is provided, we assume it is a GitHub repo.
{"type": "string"}
datahub_ingestion_schema.json
branch
Branch on which your files live by default. Typically main or master. This can also be a commit hash.
{"default": "main", "type": "string"}
datahub_ingestion_schema.json
url_template
Template for generating a URL to a file in the repo, e.g. '{repo_url}/blob/{branch}/{file_path}'. We can infer this for GitHub and GitLab repos; it is otherwise required. It supports the following variables: {repo_url}, {branch}, {file_path}
{"type": "string"}
datahub_ingestion_schema.json
PrestoOnHiveConfigMode
An enumeration.
{"enum": ["hive", "presto", "presto-on-hive", "trino"], "type": "string"}
datahub_ingestion_schema.json
platform_instance
DataHub platform instance name. To generate correct URNs for upstream datasets, this should match the platform instance name used in the ingestion recipes of other DataHub sources.
{"type": "string"}
datahub_ingestion_schema.json
env
The environment that all assets produced by this DataHub ingestion source belong to
{"default": "PROD", "type": "string"}
datahub_ingestion_schema.json
create_corp_user
Whether to ingest PowerBI users as DataHub corp users
{"default": true, "type": "boolean"}
datahub_ingestion_schema.json
use_powerbi_email
Use the PowerBI user's email to ingest as the corp user; by default, the PowerBI user identifier is used
{"default": false, "type": "boolean"}
datahub_ingestion_schema.json
remove_email_suffix
Remove the PowerBI user email suffix, for example @acryl.io
{"default": false, "type": "boolean"}
datahub_ingestion_schema.json
dataset_configured_by_as_owner
Take the PowerBI dataset's configuredBy as the dataset owner, if it exists
{"default": false, "type": "boolean"}
datahub_ingestion_schema.json
owner_criteria
Authorities required to qualify as an owner, for example ['ReadWriteReshareExplore', 'Owner', 'Admin']
{"type": "array", "items": {"type": "string"}}
datahub_ingestion_schema.json
enabled
Whether to enable profiling for the Elasticsearch source.
{"default": false, "type": "boolean"}
datahub_ingestion_schema.json
operation_config
Experimental feature. Specifies operation configs.
{"allOf": [{}]}
datahub_ingestion_schema.json
urns_suffix_regex
List of regex patterns to remove from the URN name. Indices that share the same name after suffix removal are considered the same dataset. The patterns are applied in order to each URN. The main case for multiple patterns is when the names you are stripping suffixes from come in different formats: e.g. names ending in -YYYY-MM-DD as well as names ending in -epochtime would require two regex patterns to remove the suffixes across all URNs.
{"type": "array", "items": {"type": "string"}}
datahub_ingestion_schema.json
type
The type of the classifier to use. For DataHub, use `datahub`
{"type": "string"}
datahub_ingestion_schema.json
config
The configuration required for initializing the classifier. If not specified, uses defaults for the classifier type.
{}
datahub_ingestion_schema.json
enabled
Whether classification should be used to auto-detect glossary terms
{"default": false, "type": "boolean"}
datahub_ingestion_schema.json
sample_size
Number of sample values used for classification.
{"default": 100, "type": "integer"}
datahub_ingestion_schema.json
max_workers
Number of worker threads to use for classification. Set to 1 to disable.
{"default": 2, "type": "integer"}
datahub_ingestion_schema.json
table_pattern
Regex patterns to filter tables for classification. This is used in combination with other patterns in the parent config. Specify regex to match the entire table name in `database.schema.table` format. e.g. to match all tables starting with customer in the Customer database and public schema, use the regex 'Customer.public.customer.*'
{"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]}
datahub_ingestion_schema.json
column_pattern
Regex patterns to filter columns for classification. This is used in combination with other patterns in parent config. Specify regex to match the column name in `database.schema.table.column` format.
{"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]}
datahub_ingestion_schema.json
info_type_to_term
Optional mapping that provides a glossary term identifier for each info type
{"default": {}, "type": "object", "additionalProperties": {"type": "string"}}
datahub_ingestion_schema.json
classifiers
Classifiers to use to auto-detect glossary terms. If more than one classifier is provided, infotype predictions from the classifier defined later in the sequence take precedence.
{"default": [{"type": "datahub", "config": null}], "type": "array", "items": {}}
datahub_ingestion_schema.json
OAuthIdentityProvider
An enumeration.
{"enum": ["microsoft", "okta"]}
datahub_ingestion_schema.json
provider
Identity provider for OAuth. Supported providers are microsoft and okta.
{"allOf": [{}]}
datahub_ingestion_schema.json
authority_url
Authority URL of your identity provider
{"type": "string"}
datahub_ingestion_schema.json
client_id
Client ID of your registered application
{"type": "string"}
datahub_ingestion_schema.json
scopes
Scopes required to connect to Snowflake
{"type": "array", "items": {"type": "string"}}
datahub_ingestion_schema.json
use_certificate
Whether to use a certificate and private key to authenticate via OAuth
{"default": false, "type": "boolean"}
datahub_ingestion_schema.json
client_secret
Client secret of the application, if use_certificate = false
{"type": "string", "writeOnly": true, "format": "password"}
datahub_ingestion_schema.json
encoded_oauth_public_key
Base64-encoded certificate content, if use_certificate = true
{"type": "string"}
datahub_ingestion_schema.json
encoded_oauth_private_key
Base64-encoded private key content, if use_certificate = true
{"type": "string"}
datahub_ingestion_schema.json
TagOption
An enumeration.
{"enum": ["with_lineage", "without_lineage", "skip"], "type": "string"}
datahub_ingestion_schema.json
enabled
Whether profiling should be done.
{"default": false, "type": "boolean"}
datahub_ingestion_schema.json
operation_config
Experimental feature. To specify operation configs.
{"allOf": [{}]}
datahub_ingestion_schema.json
warehouse_id
SQL Warehouse ID, used for running profiling queries.
{"type": "string"}
datahub_ingestion_schema.json
profile_table_level_only
Whether to perform profiling at table-level only or include column-level profiling as well.
{"default": false, "type": "boolean"}
datahub_ingestion_schema.json
pattern
Regex patterns to filter tables for profiling during ingestion. Specify regex to match the `catalog.schema.table` format. Note that only tables allowed by the `table_pattern` will be considered.
{"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]}
datahub_ingestion_schema.json
call_analyze
Whether to call ANALYZE TABLE as part of profile ingestion. If false, will ingest the results of the most recent ANALYZE TABLE call, if any.
{"default": true, "type": "boolean"}
datahub_ingestion_schema.json
max_wait_secs
Maximum time to wait for an ANALYZE TABLE query to complete.
{"default": 3600, "type": "integer"}
datahub_ingestion_schema.json
max_workers
Number of worker threads to use for profiling. Set to 1 to disable.
{"default": 10, "type": "integer"}
datahub_ingestion_schema.json
retry_backoff_multiplier
Multiplier for exponential backoff when waiting to retry
{"default": 2, "anyOf": [{"type": "integer"}, {"type": "number"}]}
datahub_ingestion_schema.json
max_retry_interval
Maximum interval to wait when retrying
{"default": 10, "anyOf": [{"type": "integer"}, {"type": "number"}]}
datahub_ingestion_schema.json
max_attempts
Maximum number of attempts to retry before failing
{"default": 5, "type": "integer"}
datahub_ingestion_schema.json
AdlsSourceConfig
Common Azure credentials config. See https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-directory-file-acl-python
{"type": "object", "properties": {"base_path": {"default": "/", "type": "string"}, "container_name": {"type": "string"}, "account_name": {"type": "string"}, "account_key": {"type": "string"}, "sas_token": {"type": "string"}, "client_secret": {"type": "string"}, "client_id": {"type": "string"}, "tenant_id": {"type": "string"}}, "required": ["container_name", "account_name"], "additionalProperties": false}
datahub_ingestion_schema.json
base_path
Base folder in hierarchical namespaces to start from.
{"default": "/", "type": "string"}
datahub_ingestion_schema.json
container_name
Azure storage account container name.
{"type": "string"}
datahub_ingestion_schema.json
account_name
Name of the Azure storage account. See [Microsoft official documentation on how to create a storage account.](https://docs.microsoft.com/en-us/azure/storage/blobs/create-data-lake-storage-account)
{"type": "string"}
datahub_ingestion_schema.json
account_key
Azure storage account access key that can be used as a credential. **An account key, a SAS token or a client secret is required for authentication.**
{"type": "string"}
datahub_ingestion_schema.json
sas_token
Azure storage account Shared Access Signature (SAS) token that can be used as a credential. **An account key, a SAS token or a client secret is required for authentication.**
{"type": "string"}
datahub_ingestion_schema.json
client_secret
Azure client secret that can be used as a credential. **An account key, a SAS token or a client secret is required for authentication.**
{"type": "string"}
datahub_ingestion_schema.json
client_id
Azure client (Application) ID required when a `client_secret` is used as a credential.
{"type": "string"}
datahub_ingestion_schema.json
tenant_id
Azure tenant (Directory) ID required when a `client_secret` is used as a credential.
{"type": "string"}
datahub_ingestion_schema.json
enabled
Whether profiling should be done.
{"default": false, "type": "boolean"}