schema | key | description | object |
---|---|---|---|
datahub_ingestion_schema.json | convert_column_urns_to_lowercase | When enabled, converts column URNs to lowercase to ensure cross-platform compatibility. If `target_platform` is Snowflake, the default is True. | {"default": false, "type": "boolean"} |
datahub_ingestion_schema.json | enable_meta_mapping | When enabled, applies the mappings that are defined through the meta_mapping directives. | {"default": true, "type": "boolean"} |
datahub_ingestion_schema.json | enable_query_tag_mapping | When enabled, applies the mappings that are defined through the `query_tag_mapping` directives. | {"default": true, "type": "boolean"} |
datahub_ingestion_schema.json | metadata_endpoint | The dbt Cloud metadata API endpoint. | {"default": "https://metadata.cloud.getdbt.com/graphql", "type": "string"} |
datahub_ingestion_schema.json | token | The API token used to authenticate with dbt Cloud. | {"type": "string"} |
datahub_ingestion_schema.json | account_id | The dbt Cloud account ID to use. | {"type": "integer"} |
datahub_ingestion_schema.json | project_id | The dbt Cloud project ID to use. | {"type": "integer"} |
datahub_ingestion_schema.json | job_id | The ID of the job to ingest metadata from. | {"type": "integer"} |
datahub_ingestion_schema.json | run_id | The ID of the run to ingest metadata from. If not specified, we'll default to the latest run. | {"type": "integer"} |
datahub_ingestion_schema.json | path | File path to folder or file to ingest, or URL to a remote file. If pointed to a folder, all files with extension {file_extension} (default json) within that folder will be processed. | {"type": "string"} |
datahub_ingestion_schema.json | file_extension | When providing a folder to read files from, set this field to control which file extensions the source processes. `*` is a special value that means every file is processed regardless of extension. | {"default": ".json", "type": "string"} |
datahub_ingestion_schema.json | aspect | Set to an aspect to only read this aspect for ingestion. | {"type": "string"} |
datahub_ingestion_schema.json | count_all_before_starting | When enabled, counts total number of records in the file before starting. Used for accurate estimation of completion time. Turn it off if startup time is too high. | {"default": true, "type": "boolean"} |
datahub_ingestion_schema.json | s3_config | Base configuration class for stateful ingestion for source configs to inherit from. | {"type": "object", "properties": {"path_specs": {"type": "array", "items": {}}, "env": {"default": "PROD", "type": "string"}, "platform_instance": {"type": "string"}, "stateful_ingestion": {}, "platform": {"default": "", "type": "string"}, "aws_config": {"allOf": [{}]}, "use_s3_bucket_tags": {"type": "boolean"}, "use_s3_object_tags": {"type": "boolean"}, "profile_patterns": {"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]}, "profiling": {"default": {"enabled": false, "operation_config": {"lower_freq_profile_enabled": false, "profile_day_of_week": null, "profile_date_of_month": null}, "profile_table_level_only": false, "max_number_of_fields_to_profile": null, "include_field_null_count": true, "include_field_min_value": true, "include_field_max_value": true, "include_field_mean_value": true, "include_field_median_value": true, "include_field_stddev_value": true, "include_field_quantiles": true, "include_field_distinct_value_frequencies": true, "include_field_histogram": true, "include_field_sample_values": true}, "allOf": [{}]}, "spark_driver_memory": {"default": "4g", "type": "string"}, "spark_config": {"default": {}, "type": "object"}, "max_rows": {"default": 100, "type": "integer"}, "verify_ssl": {"default": true, "anyOf": [{"type": "boolean"}, {"type": "string"}]}, "number_of_files_to_sample": {"default": 100, "type": "integer"}}, "required": ["path_specs"], "additionalProperties": false} |
datahub_ingestion_schema.json | path_specs | List of PathSpec. See [below](#path-spec) the details about PathSpec | {"type": "array", "items": {}} |
datahub_ingestion_schema.json | env | The environment that all assets produced by this connector belong to | {"default": "PROD", "type": "string"} |
datahub_ingestion_schema.json | platform_instance | The instance of the platform that all assets produced by this recipe belong to | {"type": "string"} |
datahub_ingestion_schema.json | platform | The platform that this source connects to (either 's3' or 'file'). If not specified, the platform will be inferred from the path_specs. | {"default": "", "type": "string"} |
datahub_ingestion_schema.json | aws_config | AWS configuration | {"allOf": [{}]} |
datahub_ingestion_schema.json | use_s3_bucket_tags | Whether or not to create tags in DataHub from the S3 bucket | {"type": "boolean"} |
datahub_ingestion_schema.json | use_s3_object_tags | Whether or not to create tags in DataHub from the S3 object | {"type": "boolean"} |
datahub_ingestion_schema.json | profile_patterns | Regex patterns for tables to profile. | {"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]} |
datahub_ingestion_schema.json | profiling | Data profiling configuration | {"default": {"enabled": false, "operation_config": {"lower_freq_profile_enabled": false, "profile_day_of_week": null, "profile_date_of_month": null}, "profile_table_level_only": false, "max_number_of_fields_to_profile": null, "include_field_null_count": true, "include_field_min_value": true, "include_field_max_value": true, "include_field_mean_value": true, "include_field_median_value": true, "include_field_stddev_value": true, "include_field_quantiles": true, "include_field_distinct_value_frequencies": true, "include_field_histogram": true, "include_field_sample_values": true}, "allOf": [{}]} |
datahub_ingestion_schema.json | spark_driver_memory | Max amount of memory to grant Spark. | {"default": "4g", "type": "string"} |
datahub_ingestion_schema.json | spark_config | Spark configuration properties to set on the SparkSession. Put config property names into quotes. For example: '"spark.executor.memory": "2g"' | {"default": {}, "type": "object"} |
datahub_ingestion_schema.json | max_rows | Maximum number of rows to use when inferring schemas for TSV and CSV files. | {"default": 100, "type": "integer"} |
datahub_ingestion_schema.json | verify_ssl | Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use. | {"default": true, "anyOf": [{"type": "boolean"}, {"type": "string"}]} |
datahub_ingestion_schema.json | number_of_files_to_sample | Number of files to list to sample for schema inference. This will be ignored if sample_files is set to False in the pathspec. | {"default": 100, "type": "integer"} |
datahub_ingestion_schema.json | looker_config | Any source that is a primary producer of Dataset metadata should inherit this class | {"type": "object", "properties": {"env": {"default": "PROD", "type": "string"}, "stateful_ingestion": {}, "platform_instance": {"type": "string"}, "explore_naming_pattern": {"default": {"pattern": "{model}.explore.{name}"}, "allOf": [{}]}, "explore_browse_pattern": {"default": {"pattern": "/{env}/{platform}/{project}/explores"}, "allOf": [{}]}, "view_naming_pattern": {"default": {"pattern": "{project}.view.{name}"}, "allOf": [{}]}, "view_browse_pattern": {"default": {"pattern": "/{env}/{platform}/{project}/views"}, "allOf": [{}]}, "tag_measures_and_dimensions": {"default": true, "type": "boolean"}, "platform_name": {"default": "looker", "type": "string"}, "extract_column_level_lineage": {"default": true, "type": "boolean"}, "client_id": {"type": "string"}, "client_secret": {"type": "string"}, "base_url": {"type": "string"}, "transport_options": {"allOf": [{}]}, "dashboard_pattern": {"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]}, "chart_pattern": {"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]}, "include_deleted": {"default": false, "type": "boolean"}, "extract_owners": {"default": true, "type": "boolean"}, "actor": {"type": "string"}, "strip_user_ids_from_email": {"default": false, "type": "boolean"}, "skip_personal_folders": {"default": false, "type": "boolean"}, "max_threads": {"default": 2, "type": "integer"}, "external_base_url": {"type": "string"}, "extract_usage_history": {"default": true, "type": "boolean"}, "extract_usage_history_for_interval": {"default": "30 days", "type": "string"}, "extract_embed_urls": {"default": true, "type": "boolean"}, "extract_independent_looks": {"default": false, "type": "boolean"}}, "required": ["client_id", "client_secret", "base_url"], "additionalProperties": false} |
datahub_ingestion_schema.json | env | The environment that all assets produced by this connector belong to | {"default": "PROD", "type": "string"} |
datahub_ingestion_schema.json | platform_instance | The instance of the platform that all assets produced by this recipe belong to | {"type": "string"} |
datahub_ingestion_schema.json | explore_naming_pattern | Pattern for providing dataset names to explores. Allowed variables are ['platform', 'env', 'project', 'model', 'name'] | {"default": {"pattern": "{model}.explore.{name}"}, "allOf": [{}]} |
datahub_ingestion_schema.json | explore_browse_pattern | Pattern for providing browse paths to explores. Allowed variables are ['platform', 'env', 'project', 'model', 'name'] | {"default": {"pattern": "/{env}/{platform}/{project}/explores"}, "allOf": [{}]} |
datahub_ingestion_schema.json | view_naming_pattern | Pattern for providing dataset names to views. Allowed variables are ['platform', 'env', 'project', 'model', 'name'] | {"default": {"pattern": "{project}.view.{name}"}, "allOf": [{}]} |
datahub_ingestion_schema.json | view_browse_pattern | Pattern for providing browse paths to views. Allowed variables are ['platform', 'env', 'project', 'model', 'name'] | {"default": {"pattern": "/{env}/{platform}/{project}/views"}, "allOf": [{}]} |
datahub_ingestion_schema.json | tag_measures_and_dimensions | When enabled, attaches tags to measures, dimensions and dimension groups to make them more discoverable. When disabled, adds this information to the description of the column. | {"default": true, "type": "boolean"} |
datahub_ingestion_schema.json | platform_name | Default platform name. Don't change. | {"default": "looker", "type": "string"} |
datahub_ingestion_schema.json | extract_column_level_lineage | When enabled, extracts column-level lineage from Views and Explores | {"default": true, "type": "boolean"} |
datahub_ingestion_schema.json | client_id | Looker API client id. | {"type": "string"} |
datahub_ingestion_schema.json | client_secret | Looker API client secret. | {"type": "string"} |
datahub_ingestion_schema.json | base_url | URL of your Looker instance: `https://company.looker.com:19999` or `https://looker.company.com`, or similar. Used for making API calls to Looker and constructing clickable dashboard and chart URLs. | {"type": "string"} |
datahub_ingestion_schema.json | transport_options | Populates the [TransportOptions](https://github.com/looker-open-source/sdk-codegen/blob/94d6047a0d52912ac082eb91616c1e7c379ab262/python/looker_sdk/rtl/transport.py#L70) struct for the Looker client | {"allOf": [{}]} |
datahub_ingestion_schema.json | dashboard_pattern | Patterns for selecting dashboard ids that are to be included | {"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]} |
datahub_ingestion_schema.json | chart_pattern | Patterns for selecting chart ids that are to be included | {"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]} |
datahub_ingestion_schema.json | include_deleted | Whether to include deleted dashboards and looks. | {"default": false, "type": "boolean"} |
datahub_ingestion_schema.json | extract_owners | When enabled, extracts ownership from Looker directly. When disabled, ownership is left empty for dashboards and charts. | {"default": true, "type": "boolean"} |
datahub_ingestion_schema.json | actor | This config is deprecated in favor of `extract_owners`. Previously, was the actor to use in ownership properties of ingested metadata. | {"type": "string"} |
datahub_ingestion_schema.json | strip_user_ids_from_email | When enabled, converts Looker user emails of the form `name@<domain>` to `urn:li:corpuser:name` when assigning ownership | {"default": false, "type": "boolean"} |
datahub_ingestion_schema.json | skip_personal_folders | Whether to skip ingestion of dashboards in personal folders. Setting this to True will only ingest dashboards in the Shared folder space. | {"default": false, "type": "boolean"} |
datahub_ingestion_schema.json | max_threads | Max parallelism for Looker API calls. Defaults to cpuCount or 40 | {"default": 2, "type": "integer"} |
datahub_ingestion_schema.json | external_base_url | Optional URL to use when constructing external URLs to Looker if the `base_url` is not the correct one to use. For example, `https://looker-public.company.com`. If not provided, the external base URL will default to `base_url`. | {"type": "string"} |
datahub_ingestion_schema.json | extract_usage_history | Whether to ingest usage statistics for dashboards. Setting this to True will query looker system activity explores to fetch historical dashboard usage. | {"default": true, "type": "boolean"} |
datahub_ingestion_schema.json | extract_usage_history_for_interval | Used only if extract_usage_history is set to True. Interval to extract looker dashboard usage history for. See https://docs.looker.com/reference/filter-expressions#date_and_time. | {"default": "30 days", "type": "string"} |
datahub_ingestion_schema.json | extract_embed_urls | Produce URLs used to render Looker Explores as Previews inside of DataHub UI. Embeds must be enabled inside of Looker to use this feature. | {"default": true, "type": "boolean"} |
datahub_ingestion_schema.json | extract_independent_looks | Extract looks which are not part of any Dashboard. To enable this flag the stateful_ingestion should also be enabled. | {"default": false, "type": "boolean"} |
datahub_ingestion_schema.json | salesforce_config | Any source that is a primary producer of Dataset metadata should inherit this class | {"type": "object", "properties": {"env": {"default": "PROD", "type": "string"}, "platform_instance": {"type": "string"}, "auth": {"default": "USERNAME_PASSWORD", "allOf": [{}]}, "username": {"type": "string"}, "password": {"type": "string"}, "consumer_key": {"type": "string"}, "private_key": {"type": "string"}, "security_token": {"type": "string"}, "instance_url": {"type": "string"}, "is_sandbox": {"default": false, "type": "boolean"}, "access_token": {"type": "string"}, "ingest_tags": {"default": false, "type": "boolean"}, "object_pattern": {"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]}, "domain": {"default": {}, "type": "object", "additionalProperties": {}}, "profiling": {"default": {"enabled": false, "operation_config": {"lower_freq_profile_enabled": false, "profile_day_of_week": null, "profile_date_of_month": null}}, "allOf": [{}]}, "profile_pattern": {"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]}, "platform": {"default": "salesforce", "type": "string"}}, "additionalProperties": false} |
datahub_ingestion_schema.json | env | The environment that all assets produced by this connector belong to | {"default": "PROD", "type": "string"} |
datahub_ingestion_schema.json | platform_instance | The instance of the platform that all assets produced by this recipe belong to | {"type": "string"} |
datahub_ingestion_schema.json | username | Salesforce username | {"type": "string"} |
datahub_ingestion_schema.json | password | Password for Salesforce user | {"type": "string"} |
datahub_ingestion_schema.json | consumer_key | Consumer key for Salesforce JSON web token access | {"type": "string"} |
datahub_ingestion_schema.json | private_key | Private key as a string for Salesforce JSON web token access | {"type": "string"} |
datahub_ingestion_schema.json | security_token | Security token for Salesforce username | {"type": "string"} |
datahub_ingestion_schema.json | instance_url | Salesforce instance URL, e.g. https://MyDomainName.my.salesforce.com | {"type": "string"} |
datahub_ingestion_schema.json | is_sandbox | Connect to Sandbox instance of your Salesforce | {"default": false, "type": "boolean"} |
datahub_ingestion_schema.json | access_token | Access token for instance url | {"type": "string"} |
datahub_ingestion_schema.json | ingest_tags | Ingest tags from the source. This will override tags entered from the UI. | {"default": false, "type": "boolean"} |
datahub_ingestion_schema.json | object_pattern | Regex patterns for Salesforce objects to filter in ingestion. | {"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]} |
datahub_ingestion_schema.json | domain | Regex patterns for tables/schemas to assign to a domain key (a domain key can be any string, like "sales"). Multiple domain keys can be specified. | {"default": {}, "type": "object", "additionalProperties": {}} |
datahub_ingestion_schema.json | profile_pattern | Regex patterns for profiles to filter in ingestion, allowed by the `object_pattern`. | {"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]} |
datahub_ingestion_schema.json | hive_config | Base configuration class for stateful ingestion for source configs to inherit from. | {"type": "object", "properties": {"env": {"default": "PROD", "type": "string"}, "platform_instance": {"type": "string"}, "stateful_ingestion": {}, "options": {"type": "object"}, "schema_pattern": {"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]}, "table_pattern": {"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]}, "view_pattern": {"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]}, "profile_pattern": {"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]}, "domain": {"default": {}, "type": "object", "additionalProperties": {}}, "include_tables": {"default": true, "type": "boolean"}, "include_table_location_lineage": {"default": true, "type": "boolean"}, "profiling": {"default": {"enabled": false, "operation_config": {"lower_freq_profile_enabled": false, "profile_day_of_week": null, "profile_date_of_month": null}, "limit": null, "offset": null, "report_dropped_profiles": false, "turn_off_expensive_profiling_metrics": false, "profile_table_level_only": false, "include_field_null_count": true, "include_field_distinct_count": true, "include_field_min_value": true, "include_field_max_value": true, "include_field_mean_value": true, "include_field_median_value": true, "include_field_stddev_value": true, "include_field_quantiles": false, "include_field_distinct_value_frequencies": false, "include_field_histogram": false, "include_field_sample_values": true, "field_sample_values_limit": 20, "max_number_of_fields_to_profile": null, "profile_if_updated_since_days": null, "profile_table_size_limit": 5, "profile_table_row_limit": 5000000, "profile_table_row_count_estimate_only": false, "max_workers": 10, "query_combiner_enabled": true, "catch_exceptions": true, "partition_profiling_enabled": true, "partition_datetime": null}, "allOf": [{}]}, "username": {"type": "string"}, "password": {"type": "string", "writeOnly": true, "format": "password"}, "host_port": {"type": "string"}, "database": {"type": "string"}, "database_alias": {"type": "string"}, "sqlalchemy_uri": {"type": "string"}, "database_pattern": {"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]}}, "required": ["host_port"], "additionalProperties": false} |
datahub_ingestion_schema.json | env | The environment that all assets produced by this connector belong to | {"default": "PROD", "type": "string"} |
datahub_ingestion_schema.json | platform_instance | The instance of the platform that all assets produced by this recipe belong to | {"type": "string"} |
datahub_ingestion_schema.json | options | Any options specified here will be passed to [SQLAlchemy.create_engine](https://docs.sqlalchemy.org/en/14/core/engines.html#sqlalchemy.create_engine) as kwargs. | {"type": "object"} |
datahub_ingestion_schema.json | schema_pattern | Deprecated in favour of database_pattern. | {"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]} |
datahub_ingestion_schema.json | table_pattern | Regex patterns for tables to filter in ingestion. Specify regex to match the entire table name in database.schema.table format. e.g. to match all tables starting with customer in Customer database and public schema, use the regex 'Customer.public.customer.*' | {"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]} |
datahub_ingestion_schema.json | view_pattern | Regex patterns for views to filter in ingestion. Note: Defaults to table_pattern if not specified. Specify regex to match the entire view name in database.schema.view format. e.g. to match all views starting with customer in Customer database and public schema, use the regex 'Customer.public.customer.*' | {"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]} |
datahub_ingestion_schema.json | profile_pattern | Regex patterns to filter tables (or specific columns) for profiling during ingestion. Note that only tables allowed by the `table_pattern` will be considered. | {"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]} |
datahub_ingestion_schema.json | domain | Attach domains to databases, schemas or tables during ingestion using regex patterns. The domain key can be a guid like *urn:li:domain:ec428203-ce86-4db3-985d-5a8ee6df32ba* or a string like "Marketing". If you provide a string, DataHub will attempt to resolve the name to a guid, and will error out if this fails. Multiple domain keys can be specified. | {"default": {}, "type": "object", "additionalProperties": {}} |
datahub_ingestion_schema.json | include_tables | Whether tables should be ingested. | {"default": true, "type": "boolean"} |
datahub_ingestion_schema.json | include_table_location_lineage | If the source supports it, include table lineage to the underlying storage location. | {"default": true, "type": "boolean"} |
datahub_ingestion_schema.json | username | username | {"type": "string"} |
datahub_ingestion_schema.json | password | password | {"type": "string", "writeOnly": true, "format": "password"} |
datahub_ingestion_schema.json | host_port | host URL | {"type": "string"} |
datahub_ingestion_schema.json | database | database (catalog) | {"type": "string"} |
datahub_ingestion_schema.json | database_alias | [Deprecated] Alias to apply to database when ingesting. | {"type": "string"} |
datahub_ingestion_schema.json | sqlalchemy_uri | URI of database to connect to. See https://docs.sqlalchemy.org/en/14/core/engines.html#database-urls. Takes precedence over other connection parameters. | {"type": "string"} |
datahub_ingestion_schema.json | database_pattern | Regex patterns for databases to filter in ingestion. | {"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]} |
datahub_ingestion_schema.json | mariadb_config | Base configuration class for stateful ingestion for source configs to inherit from. | {"type": "object", "properties": {"env": {"default": "PROD", "type": "string"}, "platform_instance": {"type": "string"}, "stateful_ingestion": {}, "options": {"type": "object"}, "schema_pattern": {"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]}, "table_pattern": {"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]}, "view_pattern": {"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]}, "profile_pattern": {"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]}, "domain": {"default": {}, "type": "object", "additionalProperties": {}}, "include_views": {"default": true, "type": "boolean"}, "include_tables": {"default": true, "type": "boolean"}, "include_table_location_lineage": {"default": true, "type": "boolean"}, "profiling": {"default": {"enabled": false, "operation_config": {"lower_freq_profile_enabled": false, "profile_day_of_week": null, "profile_date_of_month": null}, "limit": null, "offset": null, "report_dropped_profiles": false, "turn_off_expensive_profiling_metrics": false, "profile_table_level_only": false, "include_field_null_count": true, "include_field_distinct_count": true, "include_field_min_value": true, "include_field_max_value": true, "include_field_mean_value": true, "include_field_median_value": true, "include_field_stddev_value": true, "include_field_quantiles": false, "include_field_distinct_value_frequencies": false, "include_field_histogram": false, "include_field_sample_values": true, "field_sample_values_limit": 20, "max_number_of_fields_to_profile": null, "profile_if_updated_since_days": null, "profile_table_size_limit": 5, "profile_table_row_limit": 5000000, "profile_table_row_count_estimate_only": false, "max_workers": 10, "query_combiner_enabled": true, "catch_exceptions": true, "partition_profiling_enabled": true, "partition_datetime": null}, "allOf": [{}]}, "username": {"type": "string"}, "password": {"type": "string", "writeOnly": true, "format": "password"}, "host_port": {"default": "localhost:3306", "type": "string"}, "database": {"type": "string"}, "database_alias": {"type": "string"}, "scheme": {"default": "mysql+pymysql", "type": "string"}, "sqlalchemy_uri": {"type": "string"}, "database_pattern": {"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]}}, "additionalProperties": false} |
datahub_ingestion_schema.json | env | The environment that all assets produced by this connector belong to | {"default": "PROD", "type": "string"} |
datahub_ingestion_schema.json | platform_instance | The instance of the platform that all assets produced by this recipe belong to | {"type": "string"} |
datahub_ingestion_schema.json | options | Any options specified here will be passed to [SQLAlchemy.create_engine](https://docs.sqlalchemy.org/en/14/core/engines.html#sqlalchemy.create_engine) as kwargs. | {"type": "object"} |
datahub_ingestion_schema.json | schema_pattern | Deprecated in favour of database_pattern. | {"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]} |
datahub_ingestion_schema.json | table_pattern | Regex patterns for tables to filter in ingestion. Specify regex to match the entire table name in database.schema.table format. e.g. to match all tables starting with customer in Customer database and public schema, use the regex 'Customer.public.customer.*' | {"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]} |
datahub_ingestion_schema.json | view_pattern | Regex patterns for views to filter in ingestion. Note: Defaults to table_pattern if not specified. Specify regex to match the entire view name in database.schema.view format. e.g. to match all views starting with customer in Customer database and public schema, use the regex 'Customer.public.customer.*' | {"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]} |
datahub_ingestion_schema.json | profile_pattern | Regex patterns to filter tables (or specific columns) for profiling during ingestion. Note that only tables allowed by the `table_pattern` will be considered. | {"default": {"allow": [".*"], "deny": [], "ignoreCase": true}, "allOf": [{}]} |
datahub_ingestion_schema.json | domain | Attach domains to databases, schemas or tables during ingestion using regex patterns. The domain key can be a guid like *urn:li:domain:ec428203-ce86-4db3-985d-5a8ee6df32ba* or a string like "Marketing". If you provide a string, DataHub will attempt to resolve the name to a guid, and will error out if this fails. Multiple domain keys can be specified. | {"default": {}, "type": "object", "additionalProperties": {}} |
datahub_ingestion_schema.json | include_views | Whether views should be ingested. | {"default": true, "type": "boolean"} |
datahub_ingestion_schema.json | include_tables | Whether tables should be ingested. | {"default": true, "type": "boolean"} |
datahub_ingestion_schema.json | include_table_location_lineage | If the source supports it, include table lineage to the underlying storage location. | {"default": true, "type": "boolean"} |
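
Many of the fields above (`table_pattern`, `view_pattern`, `profile_pattern`, `dashboard_pattern`, `object_pattern`, `database_pattern`) share the same allow/deny regex shape, and the Looker naming fields (`explore_naming_pattern`, `view_naming_pattern`) are simple variable substitutions. The sketch below illustrates those semantics; the `AllowDenyPattern` class and its `allowed` method are an illustrative reconstruction inferred from the descriptions in this table, not DataHub's actual implementation.

```python
import re

# Illustrative allow/deny semantics: a name is kept when it matches at
# least one allow regex and no deny regex. Matching is case-insensitive
# when ignoreCase is true, which is the default in every fragment above.
class AllowDenyPattern:
    def __init__(self, allow=None, deny=None, ignore_case=True):
        flags = re.IGNORECASE if ignore_case else 0
        self.allow = [re.compile(p, flags) for p in (allow or [".*"])]
        self.deny = [re.compile(p, flags) for p in (deny or [])]

    def allowed(self, name: str) -> bool:
        # Deny patterns take precedence over allow patterns.
        if any(p.match(name) for p in self.deny):
            return False
        return any(p.match(name) for p in self.allow)

# table_pattern matches the full database.schema.table name.
pattern = AllowDenyPattern(allow=[r"Customer\.public\..*"], deny=[r".*_tmp$"])
tables = [
    "Customer.public.customer_orders",
    "Customer.public.orders_tmp",   # excluded by the deny list
    "Sales.public.leads",           # not matched by the allow list
]
kept = [t for t in tables if pattern.allowed(t)]
print(kept)  # ['Customer.public.customer_orders']

# The Looker naming patterns substitute the allowed variables, e.g. the
# default explore_naming_pattern of "{model}.explore.{name}":
explore_name = "{model}.explore.{name}".format(model="sales_model", name="orders")
print(explore_name)  # sales_model.explore.orders
```

A deny entry wins over any allow entry here, which matches the usual precedence for this kind of filter; consult the source documentation if exact precedence matters for your recipe.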