schema | key | description | object |
---|---|---|---|
comet.json | temporaryGcsBucket | The GCS bucket that temporarily holds the data before it is loaded to BigQuery. | {"type": "string"} |
comet.json | substituteVars | Internal use. Do not modify. | {"type": "boolean"} |
comet.json | apply | Should access policies be enforced? | {"type": "boolean"} |
comet.json | location | GCP project location. Required if apply is true. | {"type": "string"} |
comet.json | database | GCP Project id. Required if apply is true. | {"type": "string"} |
comet.json | taxonomy | Taxonomy name. Required if apply is true. | {"type": "string"} |
comet.json | maxJobs | Max number of Spark jobs to run in parallel, default is 1 | {"type": "integer"} |
comet.json | poolName | Pool name to use for Spark jobs, default is 'default' | {"type": "string"} |
comet.json | mode | This can be FIFO or FAIR, to control whether jobs within the pool queue up behind each other (the default) or share the pool’s resources fairly. | {"type": "string"} |
comet.json | file | Scheduler filename in the metadata folder. If not set, defaults to fairscheduler.xml. | {"type": "string"} |
comet.json | path | When using filesystem storage, the path to the expectations file | {"type": "string"} |
comet.json | active | Should expectations be executed? | {"type": "boolean"} |
comet.json | path | When using filesystem storage, the path to the metrics file | {"type": "string"} |
comet.json | discreteMaxCardinality | Max number of unique values accepted for a discrete column. Default is 10 | {"type": "integer"} |
comet.json | active | Should metrics be computed? | {"type": "boolean"} |
comet.json | options | spark options to use | {} |
comet.json | id | ES: Attribute to use as id of the document. Generated by Elasticsearch if not specified. | {"type": "string"} |
comet.json | timestamp | ES or BQ: The timestamp column to use for table partitioning, if any. No partitioning by default. ES: Timestamp field format as expected by Elasticsearch ("{beginTs|yyyy.MM.dd}" for example). | {"type": "string"} |
comet.json | location | BQ: Database location (EU, US, ...) | {"type": "string"} |
comet.json | clustering | FS or BQ: List of attributes to use for clustering | {"type": "array", "items": {"type": "string"}} |
comet.json | days | BQ: Number of days before this table is set as expired and deleted. Never by default. | {"type": "number"} |
comet.json | requirePartitionFilter | BQ: Should we require a partition filter on every request? No by default. | {"type": "boolean"} |
comet.json | materializedView | BQ: Should we materialize the result as a table or as a view when saving it? false by default. | {"type": "boolean"} |
comet.json | enableRefresh | BQ: Enable automatic refresh of the materialized view? false by default. | {"type": "boolean"} |
comet.json | refreshIntervalMs | BQ: Refresh interval in milliseconds. Default to BigQuery default value | {"type": "number"} |
comet.json | format | FS: File format | {"type": "string"} |
comet.json | extension | FS: File extension | {"type": "string"} |
comet.json | partition | FS or BQ: List of partition attributes | {} |
comet.json | connectionRef | JDBC: Connection reference name | {"type": "string"} |
comet.json | coalesce | When outputting files, should we coalesce the output into a single file? Useful when CSV is the output format. | {"type": "boolean"} |
comet.json | encoding | UTF-8 if not specified. | {"type": "string"} |
comet.json | multiline | Are JSON objects on a single line or spread over multiple lines? false (single line) by default; single-line parsing is also faster. | {"type": "boolean"} |
comet.json | array | Is the JSON stored as a single array of objects? false by default, meaning one JSON document per line. | {"type": "boolean"} |
comet.json | withHeader | Does the dataset have a header? true by default | {"type": "boolean"} |
comet.json | separator | The value delimiter, ';' by default. May be a multi-character string starting with Spark 3. | {"type": "string"} |
comet.json | quote | The String quote char, '"' by default | {"type": "string"} |
comet.json | escape | The escape character, '\' by default | {"type": "string"} |
comet.json | write | Write mode, APPEND by default | {} |
comet.json | ignore | Pattern or UDF used to ignore some lines | {"type": "string"} |
comet.json | xml | com.databricks.spark.xml options to use (e.g. rowTag) | {} |
comet.json | directory | Folder on the local filesystem where incoming files are stored.
Typically, this folder will be scanned periodically to move the dataset to the cluster for ingestion.
Files located in this folder are moved to the pending folder for ingestion by the "import" command. | {"type": "string"} |
comet.json | extensions | Recognized filename extensions. json, csv, dsv, psv are recognized by default.
Only files with these extensions will be moved to the pending folder. | {"type": "array", "items": {"type": "string"}} |
comet.json | ack | Ack extension used for each file. ".ack" if not specified.
Files are moved to the pending folder only once a file with the same name as the source file and with this extension is present.
To move a file without requiring an ack file to be present, explicitly set this property to the empty string value "". | {"type": "string"} |
comet.json | options | Options to add to the spark reader | {} |
comet.json | validator | Validator to use, 'spark' or 'native'. Defaults to 'spark' unless the SL_VALIDATOR env variable is set to 'native' | {"type": "string"} |
comet.json | emptyIsNull | Treat empty columns as null in DSV files. Default to false | {"type": "boolean"} |
comet.json | nullValue | Treat a specific input string as a null value indicator | {"type": "string"} |
comet.json | freshness | Configure freshness checks on this dataset | {} |
comet.json | schedule | Cron expression to use for this domain/table | {"type": "string"} |
comet.json | dagRef | Reference to the DAG template to use for this domain/table | {"type": "string"} |
comet.json | name | Schema in JDBC Database / Snowflake / Redshift or Dataset in BigQuery | {"type": "string"} |
comet.json | tables | Tables to scan in this domain | {"type": "array", "items": {"type": "string"}} |
comet.json | endpoint | DAG reference | {"type": "string"} |
comet.json | ingest | Cron expression to use for this domain/table | {"type": "string"} |
comet.json | database | Database name or Project id in BigQuery | {"type": "string"} |
comet.json | external | List of domains to scan | {"type": "array", "items": {}} |
comet.json | pending | Files recognized by the extensions property are moved to this folder for ingestion by the "import" command. | {"type": "string"} |
comet.json | unresolved | Files that cannot be ingested (do not match any table pattern) are moved to this folder. | {"type": "string"} |
comet.json | archive | Files that have been ingested are moved to this folder if SL_ARCHIVE is set to true. | {"type": "string"} |
comet.json | ingesting | Files that are being ingested are moved to this folder. | {"type": "string"} |
comet.json | accepted | When filesystem storage is used, successfully ingested records are stored in this folder in parquet format or any format set by the SL_DEFAULT_WRITE_FORMAT env property. | {"type": "string"} |
comet.json | rejected | When filesystem storage is used, rejected records are stored in this folder in parquet format or any format set by the SL_DEFAULT_WRITE_FORMAT env property. | {"type": "string"} |
comet.json | replay | Invalid records are stored in this folder in source format when SL_SINK_REPLAY_TO_FILE is set to true. | {"type": "string"} |
comet.json | business | | {"type": "string"} |
comet.json | hiveDatabase | | {"type": "string"} |
comet.json | warn | How old the data may be before a warning is raised. Use syntax like '3 day' or '2 hour' or '30 minute' | {"type": "string"} |
comet.json | error | How old the data may be before an error is raised. Use syntax like '3 day' or '2 hour' or '30 minute' | {"type": "string"} |
comet.json | name | Schema name, must be unique among all the schemas belonging to the same domain.
Will become the Hive table name on premise or the BigQuery table name on GCP. | {"type": "string"} |
comet.json | rename | If present, the table is renamed with this name. Useful when used in conjunction with the 'extract' module | {"type": "string"} |
comet.json | pattern | Filename pattern to which this schema must be applied.
This instructs the framework to use this schema to parse any file whose filename matches this pattern. | {"type": "string"} |
comet.json | attributes | Attributes parsing rules. | {"type": "array", "items": {}} |
comet.json | metadata | Dataset metadata | {} |
comet.json | comment | free text | {"type": "string"} |
comet.json | presql | Reserved for future use. | {"type": "array", "items": {"type": "string"}} |
comet.json | postsql | Reserved for future use. | {"type": "array", "items": {"type": "string"}} |
comet.json | tags | Set of string to attach to this Schema | {"type": "array", "items": {"type": "string"}} |
comet.json | rls | Row level security on this schema. | {"type": "array", "items": {}} |
comet.json | expectations | Expectations to check after Load / Transform has succeeded | {} |
comet.json | primaryKey | List of columns that make up the primary key | {"type": "array", "items": {"type": "string"}} |
comet.json | acl | Map of rolename -> List[Users]. | {"type": "array", "items": {}} |
comet.json | sample | Store here a couple of records illustrating the table data. | {"type": "string"} |
comet.json | filter | remove all records that do not match this condition | {"type": "string"} |
comet.json | patternSample | Sample of filename matching this schema | {"type": "string"} |
comet.json | name | Attribute name as defined in the source dataset and as received in the file | {"type": "string"} |
comet.json | type | semantic type of the attribute | {"type": "string"} |
comet.json | array | Is it an array? | {"type": "boolean"} |
comet.json | required | Should this attribute always be present in the source | {"type": "boolean"} |
comet.json | privacy | Privacy transformation to apply to this attribute at ingestion time | {"type": "string"} |
comet.json | comment | free text for attribute description | {"type": "string"} |
comet.json | rename | If present, the attribute is renamed with this name | {"type": "string"} |
comet.json | metricType | If present, what kind of stat should be computed for this field | {"type": "string"} |
comet.json | attributes | List of sub-attributes (valid for JSON and XML files only) | {"type": "array", "items": {}} |
comet.json | default | Default value for this attribute when it is not present. | {"type": "string"} |
comet.json | tags | Tags associated with this attribute | {"type": "array", "items": {"type": "string"}} |
comet.json | script | Scripted field: SQL expression computed over the renamed columns | {"type": "string"} |
comet.json | foreignKey | If this attribute is a foreign key, reference to [domain.]table[.attribute] | {"type": "string"} |
comet.json | ignore | Should this attribute be ignored on ingestion. Default to false | {"type": "boolean"} |
comet.json | accessPolicy | Policy tag to assign to this attribute. Used for column level security | {"type": "string"} |
comet.json | sql | Main SQL request to execute (do not forget to prefix table names with the database name to avoid conflicts) | {"type": "string"} |
comet.json | database | Output Database (refer to a project id in BigQuery). Default to SL_DATABASE env var if set. | {"type": "string"} |
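Many of the table-level and metadata keys above combine into a single table definition. The sketch below is illustrative only: the domain, file pattern, and column names are hypothetical, and the exact nesting of keys under `metadata` is an assumption, not taken from this reference.

```json
{
  "name": "orders",
  "pattern": "orders-.*\\.csv",
  "comment": "Daily order extracts",
  "metadata": {
    "format": "DSV",
    "encoding": "UTF-8",
    "withHeader": true,
    "separator": ";",
    "quote": "\"",
    "escape": "\\",
    "write": "APPEND",
    "freshness": { "warn": "2 hour", "error": "1 day" }
  },
  "primaryKey": ["order_id"]
}
```

The `freshness` values follow the duration syntax documented for the `warn` and `error` keys ('3 day', '2 hour', '30 minute').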
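The attribute-level keys (name, type, required, rename, privacy, metricType, default, foreignKey, accessPolicy) describe one entry of a table's `attributes` array. A hypothetical sketch of a single attribute follows; the semantic type and privacy strategy names shown here are assumptions for illustration, not values prescribed by this reference.

```json
{
  "name": "cust_email",
  "type": "email",
  "required": true,
  "rename": "customer_email",
  "privacy": "sha256",
  "metricType": "discrete",
  "comment": "Customer contact address",
  "foreignKey": "crm.customers.email"
}
```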