schema | key | description | object |
---|---|---|---|
scenario_schema.json | capacity | Only applies to leaky buckets. A positive integer representing the bucket capacity. If there are more than capacity items in the bucket, it will overflow. | {"type": "integer"} |
scenario_schema.json | cache_size | By default, a bucket holds capacity events "in memory". However, in a number of cases you don't want this, as it might lead to excessive memory consumption. By setting cache_size to a positive integer, we can control the maximum in-memory cache size of the bucket without changing its capacity. It is useful when buckets are likely to stay alive for a long time or ingest a lot of events, to avoid keeping many events in memory. | {"type": "number"} |
scenario_schema.json | overflow_filter | overflow_filter is an expression that is run when the bucket overflows. If this expression is present and returns false, the overflow will be discarded. | {"type": "string"} |
scenario_schema.json | filter | filter must be a valid expr expression that will be evaluated against the event. If the filter evaluation returns true or the filter is absent, the event will be poured into the bucket. If the filter returns false or a non-boolean, the event will be skipped for this bucket. | {"type": "string"} |
scenario_schema.json | type | Defines the type of the bucket. Currently three types are supported: leaky, a leaky bucket that must be configured with a capacity and a leakspeed; trigger, a bucket that overflows as soon as an event is poured (like a leaky bucket with a capacity of 0); counter, a bucket that only overflows every duration, which is especially useful to count things (see the example scenario after this table). | {"enum": ["leaky"], "type": "string"} |
scenario_schema.json | reprocess | If set to true, the resulting overflow will be sent again into the scenario/parsing pipeline. It is useful when you want further scenarios that rely on past overflows to take decisions. | {"type": "boolean"} |
scenario_schema.json | name | name is mandatory, e.g. "github_account_name/my_scenario_name" or "my_author_name/my_scenario_name". | {"type": "string"} |
scenario_schema.json | description | The description is mandatory. It is a short description, probably one sentence, describing what it detects. | {"type": "string"} |
scenario_schema.json | leakspeed | Only applies to leaky buckets. A duration that represents how often an event will leak from the bucket. | {"pattern": "^([0-9]+(\\.[0-9]+)*d)?([0-9]+(\\.[0-9]+)*h)?([0-9]+(\\.[0-9]+)*m)?([0-9]+(\\.[0-9]+)*s)?([0-9]+(\\.[0-9]+)*ms)?([0-9]+(\\.[0-9]+)*(us|\u00b5s))?([0-9]+(\\.[0-9]+)*ns)?$", "type": "string"} |
scenario_schema.json | cancel_on | cancel_on is an expression that runs on each event poured into the bucket. If the cancel_on expression returns true, the bucket is immediately destroyed (and doesn't overflow). | {"type": "string"} |
scenario_schema.json | format | CrowdSec has a notion of format support for parsers and scenarios for compatibility management. Running cscli version will show you the compatibility matrix. | {"minimum": "1.0", "type": "number"} |
scenario_schema.json | debug | If set to true, enables scenario-level debugging. It is meant to help understand scenario behavior by providing contextual information. | {"type": "boolean"} |
scenario_schema.json | scope | While most scenarios might focus on IP addresses, CrowdSec and Bouncers can work with any scope. The scope directive allows you to override the default scope: type is a string representing the scope name; expression is an expr expression that will be evaluated to fetch the value. | {"type": "object", "properties": {"type": {"type": "string"}, "expression": {"type": "string"}}, "additionalProperties": ""} |
scenario_schema.json | groupby | An expr expression that must return a string. This string will be used as a partition for the buckets. | {"type": "string"} |
scenario_schema.json | references | Reference to an external paper or documentation. | {"anyOf": [{"type": "string"}, {"type": "array"}]} |
scenario_schema.json | blackhole | A duration for which a bucket will be "silenced" after overflowing. This is intended to limit/avoid spam from buckets that might be triggered very rapidly. The blackhole only applies to the individual bucket rather than the whole scenario. Must be compatible with the golang ParseDuration format. | {"pattern": "^([0-9]+(\\.[0-9]+)*d)?([0-9]+(\\.[0-9]+)*h)?([0-9]+(\\.[0-9]+)*m)?([0-9]+(\\.[0-9]+)*s)?([0-9]+(\\.[0-9]+)*ms)?([0-9]+(\\.[0-9]+)*(us|\u00b5s))?([0-9]+(\\.[0-9]+)*ns)?$", "type": "string"} |
scenario_schema.json | reprocess | If set to true, the resulting overflow will be sent again into the scenario/parsing pipeline. It is useful when you want further scenarios that rely on past overflows to take decisions. | {"type": "boolean"} |
scenario_schema.json | type | Defines the type of the bucket. Currently three types are supported: leaky, a leaky bucket that must be configured with a capacity and a leakspeed; trigger, a bucket that overflows as soon as an event is poured (like a leaky bucket with a capacity of 0); counter, a bucket that only overflows every duration, which is especially useful to count things. | {"type": "string", "enum": ["conditional"]} |
scenario_schema.json | description | The description is mandatory. It is a short description, probably one sentence, describing what it detects. | {"type": "string"} |
scenario_schema.json | name | name is mandatory, e.g. "github_account_name/my_scenario_name" or "my_author_name/my_scenario_name". | {"type": "string"} |
scenario_schema.json | distinct | An expr expression that must return a string. The event will be poured only if the string is not already present in the bucket. | {"type": "string"} |
scenario_schema.json | overflow_filter | overflow_filter is an expression that is run when the bucket overflows. If this expression is present and returns false, the overflow will be discarded. | {"type": "string"} |
scenario_schema.json | condition | Makes the bucket overflow when it returns true. The expression is evaluated each time an event is poured into the bucket. | {"type": "string"} |
scenario_schema.json | filter | filter must be a valid expr expression that will be evaluated against the event. If the filter evaluation returns true or the filter is absent, the event will be poured into the bucket. If the filter returns false or a non-boolean, the event will be skipped for this bucket. | {"type": "string"} |
scenario_schema.json | capacity | Only applies to leaky buckets. A positive integer representing the bucket capacity. If there are more than capacity items in the bucket, it will overflow. | {"type": "integer"} |
scenario_schema.json | cache_size | By default, a bucket holds capacity events "in memory". However, in a number of cases you don't want this, as it might lead to excessive memory consumption. By setting cache_size to a positive integer, we can control the maximum in-memory cache size of the bucket without changing its capacity. It is useful when buckets are likely to stay alive for a long time or ingest a lot of events, to avoid keeping many events in memory. | {"type": "number"} |
scenario_schema.json | scope | While most scenarios might focus on IP addresses, CrowdSec and Bouncers can work with any scope. The scope directive allows you to override the default scope: type is a string representing the scope name; expression is an expr expression that will be evaluated to fetch the value. | {"properties": {"expression": {"type": "string"}, "type": {"type": "string"}}, "type": "object", "additionalProperties": ""} |
scenario_schema.json | groupby | An expr expression that must return a string. This string will be used as a partition for the buckets. | {"type": "string"} |
scenario_schema.json | blackhole | A duration for which a bucket will be "silenced" after overflowing. This is intended to limit/avoid spam from buckets that might be triggered very rapidly. The blackhole only applies to the individual bucket rather than the whole scenario. Must be compatible with the golang ParseDuration format. | {"type": "string", "pattern": "^([0-9]+(\\.[0-9]+)*d)?([0-9]+(\\.[0-9]+)*h)?([0-9]+(\\.[0-9]+)*m)?([0-9]+(\\.[0-9]+)*s)?([0-9]+(\\.[0-9]+)*ms)?([0-9]+(\\.[0-9]+)*(us|\u00b5s))?([0-9]+(\\.[0-9]+)*ns)?$"} |
scenario_schema.json | references | Reference to an external paper or documentation. | {"anyOf": [{"type": "string"}, {"type": "array"}]} |
scenario_schema.json | debug | If set to true, enables scenario-level debugging. It is meant to help understand scenario behavior by providing contextual information. | {"type": "boolean"} |
scenario_schema.json | leakspeed | Only applies to leaky buckets. A duration that represents how often an event will leak from the bucket. | {"type": "string", "pattern": "^([0-9]+(\\.[0-9]+)*d)?([0-9]+(\\.[0-9]+)*h)?([0-9]+(\\.[0-9]+)*m)?([0-9]+(\\.[0-9]+)*s)?([0-9]+(\\.[0-9]+)*ms)?([0-9]+(\\.[0-9]+)*(us|\u00b5s))?([0-9]+(\\.[0-9]+)*ns)?$"} |
scenario_schema.json | cancel_on | cancel_on is an expression that runs on each event poured into the bucket. If the cancel_on expression returns true, the bucket is immediately destroyed (and doesn't overflow). | {"type": "string"} |
scenario_schema.json | format | CrowdSec has a notion of format support for parsers and scenarios for compatibility management. Running cscli version will show you the compatibility matrix. | {"minimum": "1.0", "type": "number"} |
scenario_schema.json | labels | Labels is a list of label: value pairs that provide context to an overflow. The labels are (currently) not stored in the database, nor are they sent to the API. Special labels: the remediation label, if set to true, indicates that the originating IP should be banned. | {"patternProperties": {"^.*$": {"type": ["string", "boolean", "array", "integer"]}}, "type": "object"} |
scenario_schema.json | references | Reference to an external paper or documentation. | {"anyOf": [{"type": "string"}, {"type": "array"}]} |
scenario_schema.json | blackhole | A duration for which a bucket will be "silenced" after overflowing. This is intended to limit/avoid spam from buckets that might be triggered very rapidly. The blackhole only applies to the individual bucket rather than the whole scenario. Must be compatible with the golang ParseDuration format. | {"type": "string", "pattern": "^([0-9]+(\\.[0-9]+)*d)?([0-9]+(\\.[0-9]+)*h)?([0-9]+(\\.[0-9]+)*m)?([0-9]+(\\.[0-9]+)*s)?([0-9]+(\\.[0-9]+)*ms)?([0-9]+(\\.[0-9]+)*(us|\u00b5s))?([0-9]+(\\.[0-9]+)*ns)?$"} |
scenario_schema.json | groupby | An expr expression that must return a string. This string will be used as a partition for the buckets. | {"type": "string"} |
scenario_schema.json | scope | While most scenarios might focus on IP addresses, CrowdSec and Bouncers can work with any scope. The scope directive allows you to override the default scope: type is a string representing the scope name; expression is an expr expression that will be evaluated to fetch the value. | {"additionalProperties": "", "properties": {"expression": {"type": "string"}, "type": {"type": "string"}}, "type": "object"} |
scenario_schema.json | cancel_on | cancel_on is an expression that runs on each event poured into the bucket. If the cancel_on expression returns true, the bucket is immediately destroyed (and doesn't overflow). | {"type": "string"} |
scenario_schema.json | format | CrowdSec has a notion of format support for parsers and scenarios for compatibility management. Running cscli version will show you the compatibility matrix. | {"minimum": "1.0", "type": "number"} |
scenario_schema.json | debug | If set to true, enables scenario-level debugging. It is meant to help understand scenario behavior by providing contextual information. | {"type": "boolean"} |
scenario_schema.json | name | name is mandatory, e.g. "github_account_name/my_scenario_name" or "my_author_name/my_scenario_name". | {"type": "string"} |
scenario_schema.json | description | The description is mandatory. It is a short description, probably one sentence, describing what it detects. | {"type": "string"} |
scenario_schema.json | reprocess | If set to true, the resulting overflow will be sent again into the scenario/parsing pipeline. It is useful when you want further scenarios that rely on past overflows to take decisions. | {"type": "boolean"} |
scenario_schema.json | type | Defines the type of the bucket. Currently three types are supported: leaky, a leaky bucket that must be configured with a capacity and a leakspeed; trigger, a bucket that overflows as soon as an event is poured (like a leaky bucket with a capacity of 0); counter, a bucket that only overflows every duration, which is especially useful to count things. | {"type": "string", "enum": ["trigger"]} |
scenario_schema.json | cache_size | By default, a bucket holds capacity events "in memory". However, in a number of cases you don't want this, as it might lead to excessive memory consumption. By setting cache_size to a positive integer, we can control the maximum in-memory cache size of the bucket without changing its capacity. It is useful when buckets are likely to stay alive for a long time or ingest a lot of events, to avoid keeping many events in memory. | {"type": "number"} |
scenario_schema.json | overflow_filter | overflow_filter is an expression that is run when the bucket overflows. If this expression is present and returns false, the overflow will be discarded. | {"type": "string"} |
scenario_schema.json | filter | filter must be a valid expr expression that will be evaluated against the event. If the filter evaluation returns true or the filter is absent, the event will be poured into the bucket. If the filter returns false or a non-boolean, the event will be skipped for this bucket. | {"type": "string"} |
scenario_schema.json | distinct | An expr expression that must return a string. The event will be poured only if the string is not already present in the bucket. | {"type": "string"} |
scenario_schema.json | cancel_on | cancel_on is an expression that runs on each event poured into the bucket. If the cancel_on expression returns true, the bucket is immediately destroyed (and doesn't overflow). | {"type": "string"} |
scenario_schema.json | format | CrowdSec has a notion of format support for parsers and scenarios for compatibility management. Running cscli version will show you the compatibility matrix. | {"type": "number", "minimum": "1.0"} |
scenario_schema.json | debug | If set to true, enables scenario-level debugging. It is meant to help understand scenario behavior by providing contextual information. | {"type": "boolean"} |
scenario_schema.json | groupby | An expr expression that must return a string. This string will be used as a partition for the buckets. | {"type": "string"} |
scenario_schema.json | scope | While most scenarios might focus on IP addresses, CrowdSec and Bouncers can work with any scope. The scope directive allows you to override the default scope: type is a string representing the scope name; expression is an expr expression that will be evaluated to fetch the value. | {"additionalProperties": "", "properties": {"type": {"type": "string"}, "expression": {"type": "string"}}, "type": "object"} |
scenario_schema.json | references | Reference to an external paper or documentation. | {"anyOf": [{"type": "string"}, {"type": "array"}]} |
scenario_schema.json | blackhole | A duration for which a bucket will be "silenced" after overflowing. This is intended to limit/avoid spam from buckets that might be triggered very rapidly. The blackhole only applies to the individual bucket rather than the whole scenario. Must be compatible with the golang ParseDuration format. | {"pattern": "^([0-9]+(\\.[0-9]+)*d)?([0-9]+(\\.[0-9]+)*h)?([0-9]+(\\.[0-9]+)*m)?([0-9]+(\\.[0-9]+)*s)?([0-9]+(\\.[0-9]+)*ms)?([0-9]+(\\.[0-9]+)*(us|\u00b5s))?([0-9]+(\\.[0-9]+)*ns)?$", "type": "string"} |
scenario_schema.json | distinct | An expr expression that must return a string. The event will be poured only if the string is not already present in the bucket. | {"type": "string"} |
scenario_schema.json | cache_size | By default, a bucket holds capacity events "in memory". However, in a number of cases you don't want this, as it might lead to excessive memory consumption. By setting cache_size to a positive integer, we can control the maximum in-memory cache size of the bucket without changing its capacity. It is useful when buckets are likely to stay alive for a long time or ingest a lot of events, to avoid keeping many events in memory. | {"type": "number"} |
scenario_schema.json | overflow_filter | overflow_filter is an expression that is run when the bucket overflows. If this expression is present and returns false, the overflow will be discarded. | {"type": "string"} |
scenario_schema.json | filter | filter must be a valid expr expression that will be evaluated against the event. If the filter evaluation returns true or the filter is absent, the event will be poured into the bucket. If the filter returns false or a non-boolean, the event will be skipped for this bucket. | {"type": "string"} |
scenario_schema.json | duration | Only applies to leaky buckets. A duration that represents how often an event will leak from the bucket. | {"pattern": "^([0-9]+(\\.[0-9]+)*d)?([0-9]+(\\.[0-9]+)*h)?([0-9]+(\\.[0-9]+)*m)?([0-9]+(\\.[0-9]+)*s)?([0-9]+(\\.[0-9]+)*ms)?([0-9]+(\\.[0-9]+)*(us|\u00b5s))?([0-9]+(\\.[0-9]+)*ns)?$", "type": "string"} |
scenario_schema.json | type | Defines the type of the bucket. Currently three types are supported: leaky, a leaky bucket that must be configured with a capacity and a leakspeed; trigger, a bucket that overflows as soon as an event is poured (like a leaky bucket with a capacity of 0); counter, a bucket that only overflows every duration, which is especially useful to count things. | {"enum": ["counter"], "type": "string"} |
scenario_schema.json | reprocess | If set to true, the resulting overflow will be sent again into the scenario/parsing pipeline. It is useful when you want further scenarios that rely on past overflows to take decisions. | {"type": "boolean"} |
scenario_schema.json | name | name is mandatory, e.g. "github_account_name/my_scenario_name" or "my_author_name/my_scenario_name". | {"type": "string"} |
scenario_schema.json | description | The description is mandatory. It is a short description, probably one sentence, describing what it detects. | {"type": "string"} |
scenario_schema.json | data | data allows the user to specify an external source of data. This section is only relevant when cscli is used to install the parser from the hub, as it will download the source_url and store it to dest_file. When the parser is not installed from the hub, CrowdSec won't download the URL, but the file must exist for the parser to be loaded correctly (see the sketch after this table). | {"additionalProperties": "", "required": ["type", "dest_file"], "type": "array", "items": {"type": "object", "properties": {"type": {"type": "string", "pattern": "^(string|regexp)$", "additionalProperties": ""}, "source_url": {"type": "string"}, "dest_file": {"type": "string"}}}} |
scenario_schema.json | type | The type is mandatory if you want to evaluate the data in the file, and should be regexp for a valid (re2) regular expression per line or string for a string per line. The regexps will be compiled, the strings will be loaded into a list, and both will be kept in memory. Without specifying a type, the file will be downloaded and stored as a file and not in memory. | {"type": "string", "pattern": "^(string|regexp)$", "additionalProperties": ""} |
scenario_schema.json | source_url | URL to download the file from. | {"type": "string"} |
scenario_schema.json | dest_file | Destination to store the downloaded file to. | {"type": "string"} |
chutzpah.json | engineOptions | The options to configure the chosen browser engine for Chutzpah to use. | {"type": "object", "properties": {"ChromeBrowserPath": {"type": "string"}}} |
chutzpah.json | ChromeBrowserPath | The path to the chrome/chromium executable on the machine | {"type": "string"} |
chutzpah.json | serverSettings | Server settings let you enable and configure Chutzpah's web server mode (see the example after this table). | {"type": "object", "properties": {"Enabled": {"type": "boolean", "default": false}, "DefaultPort": {"type": "number"}, "RootPath": {"type": "string"}}} |
chutzpah.json | Enabled | Determines if the web server mode is enabled. | {"type": "boolean", "default": false} |
chutzpah.json | DefaultPort | The default port to use. If this port is taken Chutzpah will try incrementing until it finds an available one. | {"type": "number"} |
chutzpah.json | RootPath | The root path of the server. All file paths are relative to this and should be in a directory below or equal to this. Defaults to drive root. | {"type": "string"} |
chutzpah.json | Mode | The way the template is injected into the HTML page. | {"enum": ["Raw", "Script"], "default": "Raw"} |
chutzpah.json | Id | If in script mode, what Id to place on the script tag. | {"type": "string"} |
chutzpah.json | Type | If in script mode, what Type to place on the script tag. | {"type": "string"} |
chutzpah.json | Path | The path to either a file or a folder. If given a folder, it will be scanned recursively. This path can be relative to the location of the chutzpah.json file. | {"type": "string"} |
chutzpah.json | Includes | This is an optional array of include glob patterns. Only files matching the Include pattern will be added. | {"type": "array", "items": {"type": "string"}} |
chutzpah.json | Excludes | This is an optional array of exclude glob patterns. Only files not matching the Exclude patterns will be added. | {"type": "array", "items": {"type": "string"}} |
chutzpah.json | IncludeInTestHarness | This determines if the reference should be injected into the test harness. When referencing files like .d.ts or files that you plan to load using require.js you should set this to false. Defaults to true. | {"type": "boolean", "default": true} |
chutzpah.json | IsTestFrameworkFile | Indicates that this reference should be placed directly after the test framework files in the test harness. This ensures that this file is injected into the test harness before almost all other files. Defaults to false. | {"type": "boolean", "default": false} |
chutzpah.json | SourcePath | The source file/directory | {"type": "string"} |
chutzpah.json | OutputPath | The file/directory that the source file/directory is mapped to. Specifying a file OutputPath and a directory for SourcePath indicates the files are being concatenated into one large file. | {"type": "string"} |
chutzpah.json | OutputPathType | The type (file or folder) that the output path refers to. If not specified Chutzpah will try to take a best guess by assuming it is a file if it has a .js extension | {"type": "string", "enum": ["File", "Folder"], "default": "Folder"} |
chutzpah.json | Path | The path to either a file or a folder. If given a folder, it will be scanned recursively. This path can be relative to the location of the chutzpah.json file. | {"type": "string"} |
chutzpah.json | Includes | This is an optional array of include glob patterns. Only files matching the Include pattern will be added. | {"type": "array", "items": {"type": "string"}} |
chutzpah.json | Excludes | This is an optional array of exclude glob patterns. Only files not matching the Exclude patterns will be added. | {"type": "array", "items": {"type": "string"}} |
chutzpah.json | Name | The name of the transform to execute | {"type": "string"} |
chutzpah.json | Path | The file for the transform to save its output to. | {"type": "string"} |
chutzpah.json | compileSettings | This setting lets you describe in the Chutzpah.json file how to execute a command which can compile your source files to .js files. You tell Chutzpah what to execute and some information about what your executable does (like where to find the generated .js files). Then after running the executable Chutzpah can associate each source file with each output file to still give the nice behavior of mapping tests back to their original files. | {"type": "object", "properties": {"Extensions": {"type": "array", "items": {"type": "string"}}, "ExtensionsWithNoOutput": {"type": "array", "items": {"type": "string"}}, "Paths": {"type": "array", "items": {}}, "WorkingDirectory": {"type": "string"}, "Executable": {"type": ["string", "null"], "default": null}, "Arguments": {"type": ["string", "null"], "default": null}, "Timeout": {"type": "integer", "default": 30000}, "SkipIfUnchanged": {"type": "boolean", "default": true}, "Mode": {"type": "string", "enum": ["Executable", "External"], "default": "External"}, "UseSourceMaps": {"type": "boolean", "default": false}, "IgnoreMissingFiles": {"type": "boolean", "default": false}}} |
chutzpah.json | Extensions | The extensions of the files which are getting compiled (e.g. .ts). | {"type": "array", "items": {"type": "string"}} |
chutzpah.json | ExtensionsWithNoOutput | The extensions of files which take part in compile but have no build output. This is used for cases like TypeScript declaration files, which share a .ts extension: they have a .d.ts extension and are part of compilation but have no output. You must tell Chutzpah about these if you want the SkipIfUnchanged setting to work. Otherwise Chutzpah will think these are missing output. | {"type": "array", "items": {"type": "string"}} |
chutzpah.json | Paths | The collection of path mapping from source directory/file to output directory/file. | {"type": "array", "items": {}} |
chutzpah.json | WorkingDirectory | This is the working directory of the process which executes the command. | {"type": "string"} |
chutzpah.json | Executable | The path to an executable which Chutzpah executes to perform the batch compilation. Chutzpah will try to resolve the path relative to the settings directory, but if it can't find the file there you must give it a full path. | {"type": ["string", "null"], "default": null} |
chutzpah.json | Arguments | The arguments to pass to the command. | {"type": ["string", "null"], "default": null} |
chutzpah.json | Timeout | How long to wait, in milliseconds, for the compile to finish. | {"type": "integer", "default": 30000} |
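
To show how the scenario_schema.json properties above fit together, here is a minimal sketch of a leaky-bucket scenario in YAML, the format CrowdSec scenario files use. The keys (type, name, description, filter, groupby, capacity, leakspeed, blackhole, labels) come from the table; the scenario name and the expr values referencing evt.Meta.log_type and evt.Meta.source_ip are hypothetical and depend on the event fields your parsers actually produce.

```yaml
# Hypothetical leaky-bucket scenario; the evt.Meta.* field names are assumptions.
type: leaky                                      # leaky buckets need capacity + leakspeed
name: my_author_name/ssh-bruteforce              # mandatory "author/scenario" name
description: "Detect SSH bruteforce attempts"
filter: "evt.Meta.log_type == 'ssh_failed-auth'" # expr evaluated against each event
groupby: evt.Meta.source_ip                      # partition: one bucket per source IP
capacity: 5                                      # overflow once more than 5 events are held
leakspeed: "10s"                                 # one event leaks out every 10 seconds
blackhole: 1m                                    # silence this bucket for 1 minute after overflow
labels:
  remediation: true                              # special label: ban the originating IP
```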
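
The data, type, source_url and dest_file rows describe how a scenario can reference an external file that cscli downloads at install time. A short sketch, with a placeholder URL and file name:

```yaml
# Placeholder URL and file name; "type" must match ^(string|regexp)$ per the schema above.
data:
  - source_url: https://example.com/suspicious_user_agents.txt
    dest_file: suspicious_user_agents.txt
    type: string    # one entry per line, loaded into an in-memory list
```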
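
For chutzpah.json, the nested property names (Enabled, DefaultPort, RootPath, Mode, Executable, Arguments, Timeout, SkipIfUnchanged, Extensions, ExtensionsWithNoOutput, Paths, SourcePath, OutputPath, OutputPathType) are taken from the table above, but the top-level keys used here and all concrete values (port, paths, executable name) are illustrative assumptions rather than values mandated by the schema:

```json
{
  "serverSettings": {
    "Enabled": true,
    "DefaultPort": 9876,
    "RootPath": "../.."
  },
  "compileSettings": {
    "Mode": "Executable",
    "Executable": "compile.bat",
    "Arguments": null,
    "Timeout": 30000,
    "SkipIfUnchanged": true,
    "Extensions": [".ts"],
    "ExtensionsWithNoOutput": [".d.ts"],
    "Paths": [
      { "SourcePath": "src", "OutputPath": "build", "OutputPathType": "Folder" }
    ]
  }
}
```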