Columns: schema · key · description · object (JSON Schema fragment from comet.json)
comet.json
PrimitiveType
Timestamp based on `RFC 1123 / RFC 822` patterns (e.g. Tue, 3 Jun 2008 11:05:30 GMT)
{"const": "rfc_1123_date_time"}
comet.json
PrimitiveType
Date/time that matches the 'yyyy-MM-dd HH:mm:ss' pattern (e.g. 2019-12-31 23:59:02). For an epoch timestamp, set the pattern attribute to 'epoch_second' or 'epoch_milli'
{"const": "timestamp"}
comet.json
PrimitiveType
Any floating-point value that matches the '-?\d*\.{0,1}\d+' regex
{"const": "decimal"}
comet.json
PrimitiveType
Any attribute that has children. Set the 'array' flag to true if this attribute is a list of attributes
{"const": "struct"}
comet.json
IndexMapping
{"const": "text"}
comet.json
IndexMapping
{"const": "keyword"}
comet.json
IndexMapping
{"const": "long"}
comet.json
IndexMapping
{"const": "integer"}
comet.json
IndexMapping
{"const": "short"}
comet.json
IndexMapping
{"const": "byte"}
comet.json
IndexMapping
{"const": "double"}
comet.json
IndexMapping
{"const": "float"}
comet.json
IndexMapping
{"const": "half_float"}
comet.json
IndexMapping
{"const": "scaled_float"}
comet.json
IndexMapping
{"const": "date"}
comet.json
IndexMapping
{"const": "boolean"}
comet.json
IndexMapping
{"const": "binary"}
comet.json
IndexMapping
{"const": "integer_rang"}
comet.json
IndexMapping
{"const": "float_range"}
comet.json
IndexMapping
{"const": "long_range"}
comet.json
IndexMapping
{"const": "double_range"}
comet.json
IndexMapping
{"const": "date_range"}
comet.json
IndexMapping
{"const": "geo_point"}
comet.json
IndexMapping
{"const": "geo_shape"}
comet.json
IndexMapping
{"const": "ip"}
comet.json
IndexMapping
{"const": "completion"}
comet.json
IndexMapping
{"const": "token_count"}
comet.json
IndexMapping
{"const": "object"}
comet.json
IndexMapping
{"const": "array"}
comet.json
WriteMode
Append to or overwrite existing data
{"type": "string", "oneOf": [{"const": "OVERWRITE"}, {"const": "APPEND"}, {"const": "ERROR_IF_EXISTS"}, {"const": "IGNORE"}]}
comet.json
WriteMode
The data will overwrite the existing data, or create it if it does not exist
{"const": "OVERWRITE"}
comet.json
WriteMode
Append the data to an existing table or create it if it does not exist
{"const": "APPEND"}
comet.json
WriteMode
Fail if the table already exists
{"const": "ERROR_IF_EXISTS"}
comet.json
WriteMode
Do not save at all. Useful in interactive / test mode.
{"const": "IGNORE"}
comet.json
UserType
Service account
{"const": "SA"}
comet.json
UserType
End user
{"const": "USER"}
comet.json
UserType
Group of users / service accounts
{"const": "GROUP"}
comet.json
Trim
Remove all leading space chars from the input
{"const": "LEFT"}
comet.json
Trim
Remove all trailing spaces from the input
{"const": "RIGHT"}
comet.json
Trim
Remove all leading and trailing spaces from the input
{"const": "BOTH"}
comet.json
Trim
Do not remove leading or trailing spaces from the input
{"const": "NONE"}
comet.json
TableDdl
DDL used to create a table
{"type": "object", "properties": {"createSql": {"type": "string"}, "pingSql": {"type": "string"}}, "required": ["createSql"]}
comet.json
createSql
SQL CREATE DDL statement
{"type": "string"}
comet.json
pingSql
How to test whether the table exists. The following statement is used by default: 'select count(*) from tableName where 1=0'
{"type": "string"}
comet.json
TableType
Table types supported by the Extract module
{"type": "string", "oneOf": [{"const": "TABLE"}, {"const": "VIEW"}, {"const": "SYSTEM TABLE"}, {"const": "GLOBAL TEMPORARY"}, {"const": "LOCAL TEMPORARY"}, {"const": "ALIAS"}, {"const": "SYNONYM"}]}
comet.json
TableType
SQL table
{"const": "TABLE"}
comet.json
TableType
SQL view
{"const": "VIEW"}
comet.json
TableType
Database specific system table
{"const": "SYSTEM TABLE"}
comet.json
TableType
{"const": "GLOBAL TEMPORARY"}
comet.json
TableType
{"const": "LOCAL TEMPORARY"}
comet.json
TableType
Table alias
{"const": "ALIAS"}
comet.json
TableType
Table synonym
{"const": "SYNONYM"}
comet.json
Type
Custom type definition. Custom types are defined in the types/types.comet.yml file
{"type": "object", "properties": {"name": {"type": "string"}, "primitiveType": {}, "pattern": {"type": "string"}, "zone": {"type": "string"}, "sample": {"type": "string"}, "comment": {"type": "string"}, "indexMapping": {"type": "string"}, "ddlMapping": {}}, "required": ["name", "pattern", "primitiveType"]}
comet.json
name
unique id for this type
{"type": "string"}
comet.json
primitiveType
The primitive type this type should be mapped to. This is the in-memory representation of the type; when saving, the primitive type is mapped to the database-specific type
{}
comet.json
pattern
Regex used to validate the input field
{"type": "string"}
comet.json
zone
Useful when parsing specific strings: for 'double', set it to a locale (e.g. fr_FR) to parse a French decimal (comma as decimal separator); for 'decimal', it sets the precision and scale of the number, '38,9' by default
{"type": "string"}
comet.json
sample
A sample value used to verify that the pattern matches what you expect; the check runs on startup
{"type": "string"}
comet.json
comment
Describes this type
{"type": "string"}
comet.json
indexMapping
How this type is indexed in your datawarehouse
{"type": "string"}
comet.json
ddlMapping
Type mapping for each data warehouse. Used when inferring DDL from the schema.
{}
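Putting the Type properties together, a sketch of two entries in types/types.comet.yml (the 'types:' list wrapper and all concrete values are illustrative assumptions):

```yaml
types:
  - name: "email"                          # unique id for this type
    primitiveType: "string"                # in-memory representation
    pattern: "[^@ ]+@[^@ ]+\\.[^@ ]+"      # regex validating the input field
    sample: "me@company.com"               # checked against the pattern on startup
    comment: "A basic email address"
    indexMapping: "keyword"                # how it is indexed in the warehouse
  - name: "frenchDouble"
    primitiveType: "double"
    pattern: "-?\\d+,?\\d*"
    zone: "fr_FR"                          # comma as decimal separator
    sample: "3,14"
```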
comet.json
Partition
Partition columns, no partitioning by default
{"type": "object", "properties": {"sampling": {"type": "number"}, "attributes": {"type": "array", "items": {"type": "string"}}}, "required": []}
comet.json
sampling
0.0 means no sampling; a value > 0 and < 1 samples the dataset; a value >= 1 is an absolute number of partitions. Used exclusively on Hadoop & Databricks warehouses
{"type": "number"}
comet.json
items
Attributes used to partition the dataset.
{"type": "string"}
comet.json
first
Zero-based position of the first character for this attribute
{"type": "number"}
comet.json
last
Zero-based position of the last character to include in this attribute
{"type": "number"}
comet.json
Connection
Connection
{"type": "object", "properties": {"type": {"type": "string"}, "sparkFormat": {"type": "string"}, "mode": {}, "options": {}}, "required": ["type"]}
comet.json
type
e.g. jdbc, bigquery, snowflake, redshift ...
{"type": "string"}
comet.json
sparkFormat
Set only if you want to use the Spark engine
{"type": "string"}
comet.json
mode
Used for JDBC connections only. Write mode, APPEND by default
{}
comet.json
options
Connection options
{}
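A Connection sketch; 'type' is the only required property, and the option keys below are illustrative JDBC settings, not prescribed by this schema:

```yaml
connections:
  mydb:
    type: "jdbc"                # jdbc, bigquery, snowflake, redshift ...
    # sparkFormat: "jdbc"       # set only to route through the Spark engine
    mode: "APPEND"              # JDBC connections only; APPEND by default
    options:
      url: "jdbc:postgresql://localhost:5432/mydb"
      user: "dbuser"
      driver: "org.postgresql.Driver"
```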
comet.json
RowLevelSecurity
Row level security policy to apply to the output data.
{"type": "object", "properties": {"name": {"type": "string"}, "predicate": {"type": "string"}, "grants": {"type": "array", "items": {"type": "string"}}}, "required": ["name", "grants"]}
comet.json
name
Unique name of this row level security policy
{"type": "string"}
comet.json
description
Description for this access policy
{"type": "string"}
comet.json
predicate
The condition that goes into the WHERE clause and limits the visible rows.
{"type": "string"}
comet.json
grants
Users / groups / service accounts to which this security level is applied, e.g. user:[email protected],group:[email protected],serviceAccount:[email protected]
{"type": "array", "items": {"type": "string"}}
comet.json
AccessControlEntry
Column level security policy to apply to the attribute.
{"type": "object", "properties": {"role": {"type": "string"}, "grants": {"type": "array", "items": {"type": "string"}}}, "required": ["role", "grants"]}
comet.json
role
The role to give to the granted users
{"type": "string"}
comet.json
grants
Users / groups / service accounts to which this security level is applied, e.g. user:[email protected],group:[email protected],serviceAccount:[email protected]
{"type": "array", "items": {"type": "string"}}
comet.json
key
List of attributes used to join the existing and incoming datasets. Use renamed columns, if any, here.
{"type": "array", "items": {"type": "string"}}
comet.json
delete
Optional delete condition on the incoming dataset. Use renamed columns here.
{"type": "string"}
comet.json
timestamp
Timestamp column used to identify the latest version; if not specified, the currently ingested row is considered the latest
{"type": "string"}
comet.json
queryFilter
Useful when you want to merge only a subset of the existing partitions, improving performance and reducing costs. You may use: any SQL condition; 'latest', which is translated to the last existing partition; or 'column in last(10)', which applies the merge to the last 10 partitions of your dataset. 'last' and 'latest' assume that your table is partitioned by day.
{"type": "string"}
comet.json
Format
DSV by default. Supported file formats are:
- DSV: Delimiter-separated values file. The delimiter value is specified in the "separator" field.
- POSITION: Fixed-format file where values are located at an exact position in each line.
- SIMPLE_JSON: For optimisation purposes, we differentiate JSON with top-level values from JSON with deep-level fields. SIMPLE_JSON files have top-level fields only.
- JSON: Deep JSON file. Use only when your JSON documents contain sub-documents; otherwise prefer SIMPLE_JSON since it is much faster.
- XML: XML files
{"type": "string", "oneOf": [{"const": "DSV"}, {"const": "POSITION"}, {"const": "JSON"}, {"const": "ARRAY_JSON"}, {"const": "SIMPLE_JSON"}, {"const": "XML"}]}
comet.json
Format
Any single- or multi-character delimited file. The separator is specified in the 'separator' field
{"const": "DSV"}
comet.json
Format
Any fixed-position file. Positions are specified in the 'position' field
{"const": "POSITION"}
comet.json
Format
Any deep JSON file. To improve performance, prefer the SIMPLE_JSON format if your JSON documents are flat
{"const": "JSON"}
comet.json
Format
Any JSON file containing an array of JSON objects.
{"const": "ARRAY_JSON"}
comet.json
Format
Any flat JSON file. To improve performance, prefer this format over JSON when your documents are flat
{"const": "SIMPLE_JSON"}
comet.json
Format
Any XML file. Use the metadata.xml.rowTag field to specify the tag that delimits each record in your XML file
{"const": "XML"}
comet.json
MapString
Map of strings
{"type": "object", "additionalProperties": {"type": "string"}}
comet.json
MapConnection
Map of connections
{"type": "object", "additionalProperties": {}}
comet.json
MapJdbcEngine
Map of jdbc engines
{"type": "object", "additionalProperties": {}}
comet.json
MapTableDdl
Map of table ddl
{"type": "object", "additionalProperties": {}}
comet.json
JdbcEngine
Jdbc engine
{"type": "object", "properties": {"tables": {"type": "array", "items": {}}}}
comet.json
tables
List of all SQL create statements used to create audit tables for this JDBC engine. Tables are created only if the execution of the pingSql statement fails
{"type": "array", "items": {}}
comet.json
options
Privacy strategies. The following strategies are defined by default:
- none: leave the data as is
- hide: replace the data with an empty string
- hideX("s", n): replace the string with n occurrences of the string 's'
- md5: redact the data using the MD5 algorithm
- sha1: redact the data using the SHA1 algorithm
- sha256: redact the data using the SHA256 algorithm
- sha512: redact the data using the SHA512 algorithm
- initials: keep only the first char of each word in the data
{}
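A sketch applying the built-in privacy strategies listed above to attributes (the attribute names and the 'privacy' key placement are assumptions):

```yaml
attributes:
  - name: "email"
    privacy: "sha256"            # redact with SHA256
  - name: "firstName"
    privacy: "initials"          # keep the first char of each word
  - name: "phone"
    privacy: 'hideX("*", 10)'    # replace with 10 occurrences of '*'
```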
comet.json
Internal
configure Spark internal options
{"type": "object", "properties": {"cacheStorageLevel": {"type": "string"}, "intermediateBigqueryFormat": {"type": "string"}, "temporaryGcsBucket": {"type": "string"}, "substituteVars": {"type": "boolean"}}}
comet.json
cacheStorageLevel
How RDDs are cached. Default is MEMORY_AND_DISK_SER. Available options are (https://spark.apache.org/docs/latest/api/java/index.html?org/apache/spark/storage/StorageLevel.html):
- MEMORY_ONLY
- MEMORY_AND_DISK
- MEMORY_ONLY_SER
- MEMORY_AND_DISK_SER
- DISK_ONLY
- OFF_HEAP
{"type": "string"}
comet.json
intermediateBigqueryFormat
May be parquet or ORC. Default is parquet. Used for BigQuery intermediate storage. Use ORC for JSON files to keep the original data structure. https://stackoverflow.com/questions/53674838/spark-writing-parquet-arraystring-converts-to-a-different-datatype-when-loadin
{"type": "string"}