dict | content
---|---
{
"category": "App Definition and Development",
"file_name": "create_role,role_option.grammar.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "```output.ebnf createrole ::= CREATE ROLE rolename [ [ WITH ] role_option [ , ... ] ] role_option ::= SUPERUSER | NOSUPERUSER | CREATEDB | NOCREATEDB | CREATEROLE | NOCREATEROLE | INHERIT | NOINHERIT | LOGIN | NOLOGIN | CONNECTION LIMIT connlimit | [ ENCRYPTED ] PASSWORD ' password ' | PASSWORD NULL | VALID UNTIL ' timestamp ' | IN ROLE role_name [ , ... ] | IN GROUP role_name [ , ... ] | ROLE role_name [ , ... ] | ADMIN role_name [ , ... ] | USER role_name [ , ... ] | SYSID uid ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "graph_analytics_workloads.md",
"project_name": "GraphScope",
"subcategory": "Database"
} | [
{
"data": "A graph is a data structure composed of vertices (or nodes) connected by edges. Graphs can represent many real-world data, such as social networks, transportation networks, and protein interaction networks, as shown in the following figure. :::{figure-md} <img src=\"../images/graph_examples.png\" alt=\"Examples of graphs\" width=\"80%\"> Examples of graphs. ::: In general, any computation on graph data can be considered graph analytics. The objective of graph analytics is to uncover and utilize the structure of graphs, providing insights into the relationships and connections between different elements in graphs. Computation patterns in graph analytics can vary significantly: some involve only a small number of vertices/edges, while others access a large portion or even all vertices/edges of a graph. In GraphScope, we refer to the former as graph traversal and the latter as graph analytics, unless specified otherwise. There are various types of graph analytics algorithms, which typically iterate over a large portion or all vertices/edges of a graph to discover hidden insights within the graph data. Common graph analytics algorithms include general analytics algorithms (e.g., PageRank, shortest path, and maximum flow), community detection algorithms (e.g., maximum clique/bi-clique, connected components, Louvain, and label propagation), and graph mining algorithms (e.g., frequent structure mining and graph pattern discovery). Below, we provide several examples to illustrate how graph analytics algorithms operate. The algorithm measures the importance of each vertex in a graph by iteratively counting the number and importance of its neighbors. This helps determine a rough estimate of a vertex's importance. Specifically, the PageRank computation consists of multiple iterations, with each vertex initially assigned a value indicating its importance. During each iteration, vertices sum the values of their neighbors pointing to them and update their own values accordingly. :::{figure-md} <img src=\"../images/pagerank.png\" alt=\"PageRank algorithm\" width=\"40%\"> PageRank algorithm (https://snap-stanford.github.io/cs224w-notes/network-methods/pagerank). ::: aims to find the most efficient path between two vertices by minimizing the sum of the weights of its constituent edges. Several well-known algorithms, such as Dijkstra's algorithm and BellmanFord algorithm, have been proposed to solve this problem. For instance, Dijkstra's algorithm selects a single vertex as the \"source\" vertex and attempts to find the shortest paths from the source to all other vertices in the graph. The computation of Dijkstra's algorithm involves multiple iterations. In each iteration, a vertex with a known shortest path to the source is selected, and the shortest path values of its neighbors are updated, as demonstrated in the following . :::{figure-md} <img src=\"../images/sssp.gif\" alt=\"Dijkstra's algorithm\" width=\"40%\"> Dijkstra's algorithm (https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm). ::: (e.g., Louvain) aim to identify groups of vertices that are more densely connected internally than with other vertices in the graph. These algorithms work by having each vertex repeatedly send its label to its neighbors and update its own label based on specific rules after receiving labels from its"
},
{
"data": "After multiple iterations, vertices that are densely connected internally will have the same or similar labels. :::{figure-md} <img src=\"../images/comm_detection.png\" alt=\"Community detection algorithm\" width=\"40%\"> Community detection algorithm (https://towardsdatascience.com/community-detection-algorithms-9bd8951e7dae). ::: The examples above demonstrate how graph analytics algorithms analyze properties of vertices and edges within a graph. In real-world applications, many problems can be modeled as graph analytics problems. For instance, Google Search represents websites and their interconnections as a graph, applying the PageRank algorithm to identify the most important websites on the Internet. Similarly, a city's road map can be modeled as a graph, with the shortest path algorithm assisting in path planning for logistics and delivery services. By considering social media users as a graph, community detection techniques (e.g., Louvain) can help discover users with shared interests and maintain strong connections between them. :::{figure-md} <img src=\"../images/analytics_examples.png\" alt=\"Applications of graph analytics\" width=\"80%\"> Applications of graph analytics. ::: Based on our experience, processing graph data and utilizing frameworks (systems) for graph data processing present the following challenges: Handling large-scale and complex graph data The majority of real-world graph data is large-scale, heterogeneous, and attributed. For instance, modern e-commerce graphs often contain billions of vertices and tens of billions of edges, with various types and rich attributes. Representing and storing such graph data is a nontrivial task. Diverse programming models/languages Numerous graph processing systems have been developed to manage graph analytics algorithms. These systems employ different programming models (e.g., vertex-centric model and PIE model) and programming languages (e.g., C++, Java, and Python). Consequently, users often face steep learning curves. Demand for high performance The efficiency and scalability of processing large graphs remain limited. While current systems have significantly benefited from years of optimization work, they still encounter efficiency and/or scalability issues. Achieving superior performance when dealing with large-scale graph data is highly sought after. In GraphScope, the Graph Analytics Engine (GAE) tackles the aforementioned challenges by managing graph analytics algorithms in the following manner: Distributed graph data management GraphScope represents graph data as a property graph model and automatically partitions large-scale graphs into subgraphs (fragments) distributed across multiple machines in a cluster. It also provides user-friendly interfaces for loading graphs, making graph data management easier. For more details on managing large-scale graphs, refer to . Various programming models/languages support GraphScope supports both the vertex-centric model (Pregel) and PIE (PEval-IncEval-Assemble) programming model. These models are widely used in existing graph processing systems. For more information, refer to our introductions to and models. GraphScope provides SDKs for multiple languages, allowing users to write custom algorithms in C++, Java, or Python. For more details on developing customized algorithms, check out . 
Optimized high-performance runtime GAE achieves high performance through an optimized analytical runtime, employing techniques such as pull/push dynamic switching, cache-efficient memory layout, and pipelining. We have compared GraphScope with state-of-the-art graph processing systems on the LDBC Graph Analytics Benchmark, and the results show that GraphScope outperforms other graph systems."
}
] |
{
"category": "App Definition and Development",
"file_name": "controllers.md",
"project_name": "Numaflow",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "Currently in `Numaflow`, there are 3 CRDs introduced, each one has a corresponding controller. interstepbufferservices.numaflow.numaproj.io pipelines.numaflow.numaproj.io vertices.numaflow.numaproj.io The source code of the controllers is located at `./pkg/reconciler/`. `Inter-Step Buffer Service Controller` is used to watch `InterStepBufferService` object, depending on the spec of the object, it might install services (such as JetStream, or Redis) in the namespace, or simply provide the configuration of the `InterStepBufferService` (for example, when an `external` redis ISB Service is given). Pipeline Controller is used to watch `Pipeline` objects, it does following major things when there's a pipeline object created. Spawn a Kubernetes Job to create in the . Create `Vertex` objects according to `.spec.vertices` defined in `Pipeline` object. Create some other Kubernetes objects used for the Pipeline, such as a Deployment and a Service for daemon service application. Vertex controller watches the `Vertex` objects, based on the replica defined in the spec, creates a number of pods to run the workloads."
}
] |
{
"category": "App Definition and Development",
"file_name": "v21.6.7.57-stable.md",
"project_name": "ClickHouse",
"subcategory": "Database"
} | [
{
"data": "sidebar_position: 1 sidebar_label: 2022 Backported in : Fix `ALTER MODIFY COLUMN` of columns, which participates in TTL expressions. (). Backported in : Fix extremely long backoff for background tasks when the background pool is full. Fixes . (). Backported in : Fix wrong thread estimation for right subquery join in some cases. Close . (). Backported in : Fix possible crash in `pointInPolygon` if the setting `validate_polygons` is turned off. (). 21.6 manual backport of ()."
}
] |
{
"category": "App Definition and Development",
"file_name": "datetime.md",
"project_name": "ClickHouse",
"subcategory": "Database"
} | [
{
"data": "slug: /en/sql-reference/data-types/datetime sidebar_position: 16 sidebar_label: DateTime Allows to store an instant in time, that can be expressed as a calendar date and a time of a day. Syntax: ``` sql DateTime([timezone]) ``` Supported range of values: \\[1970-01-01 00:00:00, 2106-02-07 06:28:15\\]. Resolution: 1 second. The `Date` datatype is faster than `DateTime` under most conditions. The `Date` type requires 2 bytes of storage, while `DateTime` requires 4. However, when the database compresses the database, this difference is amplified. This amplification is due to the minutes and seconds in `DateTime` being less compressible. Filtering and aggregating `Date` instead of `DateTime` is also faster. The point in time is saved as a , regardless of the time zone or daylight saving time. The time zone affects how the values of the `DateTime` type values are displayed in text format and how the values specified as strings are parsed (2020-01-01 05:00:01). Timezone agnostic Unix timestamp is stored in tables, and the timezone is used to transform it to text format or back during data import/export or to make calendar calculations on the values (example: `toDate`, `toHour` functions etc.). The time zone is not stored in the rows of the table (or in resultset), but is stored in the column metadata. A list of supported time zones can be found in the and also can be queried by `SELECT * FROM system.time_zones`. is also available at Wikipedia. You can explicitly set a time zone for `DateTime`-type columns when creating a table. Example: `DateTime('UTC')`. If the time zone isnt set, ClickHouse uses the value of the parameter in the server settings or the operating system settings at the moment of the ClickHouse server start. The applies the server time zone by default if a time zone isnt explicitly set when initializing the data type. To use the client time zone, run `clickhouse-client` with the `--useclienttime_zone` parameter. ClickHouse outputs values depending on the value of the setting. `YYYY-MM-DD hh:mm:ss` text format by default. Additionally, you can change the output with the function. When inserting data into ClickHouse, you can use different formats of date and time strings, depending on the value of the setting. Creating a table with a `DateTime`-type column and inserting data into it: ``` sql CREATE TABLE dt ( `timestamp` DateTime('Asia/Istanbul'), `event_id` UInt8 ) ENGINE = TinyLog; ``` ``` sql -- Parse DateTime -- - from string, -- - from integer interpreted as number of seconds since 1970-01-01. INSERT INTO dt VALUES ('2019-01-01 00:00:00', 1), (1546300800, 3); SELECT * FROM dt; ``` ``` text timestampevent_id 2019-01-01 00:00:00 2 2019-01-01 03:00:00 1 ``` When inserting datetime as an integer, it is treated as Unix Timestamp (UTC). `1546300800` represents `'2019-01-01 00:00:00'`"
},
{
"data": "However, as `timestamp` column has `Asia/Istanbul` (UTC+3) timezone specified, when outputting as string the value will be shown as `'2019-01-01 03:00:00'` When inserting string value as datetime, it is treated as being in column timezone. `'2019-01-01 00:00:00'` will be treated as being in `Asia/Istanbul` timezone and saved as `1546290000`. Filtering on `DateTime` values ``` sql SELECT * FROM dt WHERE timestamp = toDateTime('2019-01-01 00:00:00', 'Asia/Istanbul') ``` ``` text timestampevent_id 2019-01-01 00:00:00 1 ``` `DateTime` column values can be filtered using a string value in `WHERE` predicate. It will be converted to `DateTime` automatically: ``` sql SELECT * FROM dt WHERE timestamp = '2019-01-01 00:00:00' ``` ``` text timestampevent_id 2019-01-01 00:00:00 1 ``` Getting a time zone for a `DateTime`-type column: ``` sql SELECT toDateTime(now(), 'Asia/Istanbul') AS column, toTypeName(column) AS x ``` ``` text columnx 2019-10-16 04:12:04 DateTime('Asia/Istanbul') ``` Timezone conversion ``` sql SELECT toDateTime(timestamp, 'Europe/London') as lon_time, toDateTime(timestamp, 'Asia/Istanbul') as mos_time FROM dt ``` ``` text lontimemostime 2019-01-01 00:00:00 2019-01-01 03:00:00 2018-12-31 21:00:00 2019-01-01 00:00:00 ``` As timezone conversion only changes the metadata, the operation has no computation cost. Some time zones may not be supported completely. There are a few cases: If the offset from UTC is not a multiple of 15 minutes, the calculation of hours and minutes can be incorrect. For example, the time zone in Monrovia, Liberia has offset UTC -0:44:30 before 7 Jan 1972. If you are doing calculations on the historical time in Monrovia timezone, the time processing functions may give incorrect results. The results after 7 Jan 1972 will be correct nevertheless. If the time transition (due to daylight saving time or for other reasons) was performed at a point of time that is not a multiple of 15 minutes, you can also get incorrect results at this specific day. Non-monotonic calendar dates. For example, in Happy Valley - Goose Bay, the time was transitioned one hour backwards at 00:01:00 7 Nov 2010 (one minute after midnight). So after 6th Nov has ended, people observed a whole one minute of 7th Nov, then time was changed back to 23:01 6th Nov and after another 59 minutes the 7th Nov started again. ClickHouse does not (yet) support this kind of fun. During these days the results of time processing functions may be slightly incorrect. Similar issue exists for Casey Antarctic station in year 2010. They changed time three hours back at 5 Mar, 02:00. If you are working in antarctic station, please don't afraid to use ClickHouse. Just make sure you set timezone to UTC or be aware of inaccuracies. Time shifts for multiple days. Some pacific islands changed their timezone offset from UTC+14 to UTC-12. That's alright but some inaccuracies may present if you do calculations with their timezone for historical time points at the days of conversion."
}
] |
{
"category": "App Definition and Development",
"file_name": "18-rollup.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: \"ROLLUP\" In this exercise we will query the products table, group the results by product\\id_ and then generate multiple grouping sets from the results using ROLLUP. ``` SELECT product_id, supplier_id, product_name, SUM(unitsinstock) FROM products GROUP BY product_id, ROLLUP(supplier_id); ``` This query should return 154 rows In this exercise we will query the products table, group the results by product\\id_ and then generate multiple grouping sets from the results using ROLLUP. ``` SELECT product_id, supplier_id, product_name, SUM(unitsinstock) FROM products GROUP BY productid ROLLUP(supplierid, product_id); ``` This query should return TBD rows"
}
] |
{
"category": "App Definition and Development",
"file_name": "kubectl-dba_debug_mariadb.md",
"project_name": "KubeDB by AppsCode",
"subcategory": "Database"
} | [
{
"data": "title: Kubectl-Dba Debug Mariadb menu: docs_{{ .version }}: identifier: kubectl-dba-debug-mariadb name: Kubectl-Dba Debug Mariadb parent: reference-cli menuname: docs{{ .version }} sectionmenuid: reference Debug helper for mariadb database ``` kubectl-dba debug mariadb [flags] ``` ``` kubectl dba debug mariadb -n demo sample-mariadb --operator-namespace kubedb ``` ``` -h, --help help for mariadb -o, --operator-namespace string the namespace where the kubedb operator is installed (default \"kubedb\") ``` ``` --as string Username to impersonate for the operation. User could be a regular user or a service account in a namespace. --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation. --cache-dir string Default cache directory (default \"/home/runner/.kube/cache\") --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --default-seccomp-profile-type string Default seccomp profile --disable-compression If true, opt-out of response compression for all requests to the server --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to the kubeconfig file to use for CLI requests. --match-server-version Require server version to match client version -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -s, --server string The address and port of the Kubernetes API server --tls-server-name string Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server ``` - Debug any Database issue"
}
] |
{
"category": "App Definition and Development",
"file_name": "txn_start.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: START TRANSACTION statement [YSQL] headerTitle: START TRANSACTION linkTitle: START TRANSACTION description: Use the `START TRANSACTION` statement to start a transaction with the default (or specified) isolation level. menu: v2.18: identifier: txn_start parent: statements type: docs Use the `START TRANSACTION` statement to start a transaction with the default (or specified) isolation level. {{%ebnf%}} start_transaction {{%/ebnf%}} The `START TRANSACTION` statement is simply an alternative spelling for the statement. The syntax that follows `START TRANSACTION` is identical to that syntax that follows `BEGIN [ TRANSACTION | WORK ]`. And the two alternative spellings have identical semantics."
}
] |
{
"category": "App Definition and Development",
"file_name": "back_pressure.md",
"project_name": "Flink",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: \"Monitoring Back Pressure\" weight: 3 type: docs aliases: /ops/monitoring/back_pressure.html /internals/backpressuremonitoring.html /ops/monitoring/back_pressure <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> Flink's web interface provides a tab to monitor the back pressure behaviour of running jobs. If you see a back pressure warning (e.g. `High`) for a task, this means that it is producing data faster than the downstream operators can consume. Records in your job flow downstream (e.g. from sources to sinks) and back pressure is propagated in the opposite direction, up the stream. Take a simple `Source -> Sink` job as an example. If you see a warning for `Source`, this means that `Sink` is consuming data slower than `Source` is producing. `Sink` is back pressuring the upstream operator `Source`. Every parallel instance of a task (subtask) is exposing a group of three metrics: `backPressuredTimeMsPerSecond`, time that subtask spent being back pressured `idleTimeMsPerSecond`, time that subtask spent waiting for something to process `busyTimeMsPerSecond`, time that subtask was busy doing some actual work At any point of time these three metrics are adding up approximately to `1000ms`. These metrics are being updated every couple of seconds, and the reported value represents the average time that subtask was back pressured (or idle or busy) during the last couple of seconds. Keep this in mind if your job has a varying load. For example, a subtask with a constant load of 50% and another subtask that is alternating every second between fully loaded and idling will both have the same value of `busyTimeMsPerSecond`: around `500ms`. Internally, back pressure is judged based on the availability of output buffers. If a task has no available output buffers, then that task is considered back pressured. Idleness, on the other hand, is determined by whether or not there is input available. The WebUI aggregates the maximum value of the back pressure and busy metrics from all of the subtasks and presents those aggregated values inside the JobGraph. Besides displaying the raw values, tasks are also color-coded to make the investigation easier. {{< img src=\"/fig/backpressurejob_graph.png\" class=\"img-responsive\" >}} Idling tasks are blue, fully back pressured tasks are black, and fully busy tasks are colored red. All values in between are represented as shades between those three colors. In the Back Pressure tab next to the job overview you can find more detailed metrics. {{< img src=\"/fig/backpressuresubtasks.png\" class=\"img-responsive\" >}} For subtasks whose status is OK, there is no indication of back pressure. HIGH, on the other hand, means that a subtask is back pressured. 
Status is defined in the following way: OK: 0% <= back pressured <= 10% LOW: 10% < back pressured <= 50% HIGH: 50% < back pressured <= 100% Additionally, you can find the percentage of time each subtask is back pressured, idle, or busy. {{< top >}}"
}
] |
{
"category": "App Definition and Development",
"file_name": "2023_06_07_How_to_Run_SQl_Trace_with_ShardingSphere.md",
"project_name": "ShardingSphere",
"subcategory": "Database"
} | [
{
"data": "+++ title = \"How to Run SQL Trace with ShardingSphere-Agent\" weight = 100 chapter = true +++ , a data service platform that follows the Database Plus concept for distributed database systems, offers a range of features, including data sharding, read/write splitting, data encryption, and shadow database. In production environment, especially in data-sharding scenarios, SQL tracing is critical for monitoring and analyzing slow queries and abnormal executions. Therefore, a thorough understanding of SQL rewriting and query execution is crucial. provides an observable framework for ShardingSphere. It is implemented based on Java Agent technology, using Byte Buddy to modify the target bytecode and weave them into data collection logic. Metrics, tracing and logging functions are integrated into the agent through plugins to obtain observable data of system status. Among them, the tracing plugin is used to obtain the tracing information of SQL parsing and SQL execution, which can help users analyze SQL trace when using or . This post will take ShardingSphere-Proxy as an example to explain how to use ShardingSphere-Agent for SQL tracing. Before starting with the article, here are two important concepts that need to be paid attention to first: Span: the basic unit in a trace. A span is created for each call in the trace and ideentified by a unique ID. Spans can contain some customized information such as descriptive information, timestamps, key-value pairs, etc. Trace: the collection of spans with a tree structure. In ShardingSphere-Proxy, a trace represents to the full execution process of a SQL statement. When running a SQL statement in ShardingSphere-Proxy, it goes through parsing, routing, rewriting, execution, and merging. Currently, tracing has been implemented in two critical steps: parsing and execution with execution oftentimes being the focus. In the execution stage, Proxy will connect to the physical database to execute the actual SQL. Therefore, the information obtained during this stage provides important evidence for troubleshooting issues and fully reflects the correspondence between logical SQL and actual SQL after rewriting. In ShardingSphere-Proxy, a trace consists of three types of spans: <table> <tr> <td>Span </td> <td>Description </td> </tr> <tr> <td><code>/ShardingSphere/rootInvoke/</code> </td> <td>This span indicates the complete execution of an SQL statement, and you can view the amount of time spent on executing an SQL </td> </tr> <tr> <td><code>/ShardingSphere/parseSQL/</code> </td> <td>This span indicates the parsing stage of the SQL execution. You can view the parsing time of an SQL and the SQL statements. (It is not available when a <code>PreparedStatement</code> is used.) </td> </tr> <tr> <td><code>/ShardingSphere/executeSQL/</code> </td> <td>This span indicates the rewritten SQL is executed. And the time spent on executing is also available. (This span is not available if the SQL doesnt need to be executed in the backend physical database). </td> </tr> </table> For the convenience of viewing the tracing data, Zipkin or Jaeger is usually used to collect and display the tracing"
},
{
"data": "Currently, ShardingSphere-Agent supports reporting trace data to both components. Next, lets use the sharding scenario as an example to explain how to report data and analyze the SQL trace. Download Proxy from the [1] Create `demods0` and `demods1` under the MySQL database as the storage unit `ds0` and `ds1` . Start Proxy, and connect to Proxy using a MySQL client tool; create logical database `sharding_db`, and register the storage units under this database using DistSQL (Distributed SQL). DistSQL is the specific SQL language for Apache ShardingSphere. It is used in exactly the same way as standard SQL, and is used to provide SQL-level operational capabilities for incremental functions. For details, please refer to the [2] Use DistSQL to create sharding `rule torder`, set `ds0` and `ds_1` as storage units, and set the number of shard to 4. Create table `t_order` Finally, there will be tables `torder0` and `torder2` created in the physical database `demods0`, and `torder1` and `torder3` tables in the physical database `demods1`. After ShardingSphere-Proxy is well configured, the next step is to introduce how to report SQL trace data to Zipkin and Jaeger through ShardingSphere-Agent. Deploy Zipkin (please refer to the [3]) Configure `agent.yaml` to export data to Zipkin ``` plugins: tracing: OpenTelemetry: props: otel.service.name: \"shardingsphere\" # the service name configured otel.traces.exporter: \"zipkin\" # Use zipkin exporter otel.exporter.zipkin.endpoint: \"http://localhost:9411/api/v2/spans\" # the address where zipkin receives data otel-traces-sampler: \"always_on\" # sampling setting otel.metrics.exporter: \"none\" # close OpenTelemetry metric collection ``` Restart Proxy and Agent after stopping Proxy (`--agent`means enabling Agent) ``` ./bin/stop.sh ./bin/start.sh --agent ``` Use a MySQL client tool to connect to the Proxy and execute the following queries `insert`, `select`, `update`, and `delete` in sequence. Visit (Zipkin UI), and you can see 4 pieces of trace data, which is exactly the same number of SQL queries. Lets analyze the trace of the insert query. After finding the trace, you can see the execution details of this query. The Tags information in the `/shardingsphere/parsesql/` span shows that the parsed SQL is consistent with the SQL executed on the client. There are 4 `/shardingsphere/executesql/` spans shown in the span"
},
{
"data": "After reviewing the details, it is found that the following two SQL statements were executed in the storage unit `ds_0`: ``` insert into torder0 (orderid, userid, address_id, status) VALUES (4, 4, 4, 'OK') insert into torder2 (orderid, userid, address_id, status) VALUES (2, 2, 2, 'OK') ``` The following two SQL statements are executed in the storage unit `ds_1` ``` insert into torder1 (orderid, userid, address_id, status) VALUES (1, 1, 1, 'OK') insert into torder3 (orderid, userid, address_id, status) VALUES (3, 3, 3, 'OK') ``` Then log in to the physical database to check the corresponding data (after executing the insert query) Due to the `torde`r table being partitioned into 4 shards and data with `orderid` 1 to 4 being inserted, one record will be inserted into each of the `torder0`, `torder1`, `torder2`, and `torder3` tables. As a result, there will be 4 `/shardingsphere/executesql` spans. The displayed SQL trace is consistent with the actual execution results. So you can view the time spent on each step through the span and also know the specific execution of the SQL through the `/shardingsphere/executesql/` span. The following is the trace details of the select, update, and delete queries, which are also consistent with the actual situation. Deploy Jaeger (please refer to the [4]) Deploy Proxy Configure `agent.yaml` ``` plugins: tracing: OpenTelemetry: props: otel.service.name: \"shardingsphere\" # the service name configured otel.traces.exporter: \"jaeger\" # Use jaeger exporter otel.exporter.otlp.traces.endpoint: \"http://localhost:14250\" # the address where jaeger receives data otel.traces.sampler: \"always_on\" # sampling setting otel.metrics.exporter: \"none\" # close OpenTelemetry metric collection ``` Restart Proxy and Agent after stopping Proxy (`--agent` means enabling Agent) ``` ./bin/stop.sh ./bin/start.sh --agent ``` Log into Proxy and execute SQL queries under the `sharding_db` database (this SQL query is same as the ones executed in the Zipkin example) From (Jaeger UI address), you will see 4 trace data, same as the number of executed SQL queries. Since the executed SQL queries are the same as those in the Zipkin example, the trace data should also be the same. As an example, we will use the trace from the insert query. From the following picture, their are one parsed span and 4 executed span Storage unit `ds_0` has executed the following two SQL statements ``` insert into torder0 (orderid, userid, address_id, status) VALUES (4, 4, 4, 'OK') insert into torder2 (orderid, userid, address_id, status) VALUES (2, 2, 2, 'OK') ``` Storage unit `ds_1` has executed the following two SQL statements ``` insert into torder1 (orderid, userid, address_id, status) VALUES (1, 1, 1, 'OK') insert into torder3 (orderid, userid, address_id, status) VALUES (3, 3, 3, 'OK') ``` By analyzing the span number, the parsing result of SQL statement and the execution process, it is concluded that the whole SQL link is in line with the expectation Sampling is very common when the amount of trace data in the production environment is very large. Shown as follows, set the sampling rate to 0.01 (sampling rate of 1%). OpenTelemetry Exporters is used for exporting data here, and please refer to the for detailed parameters. 
[5] ``` plugins: tracing: OpenTelemetry: props: otel.service.name: \"shardingsphere\" otel.metrics.exporter: \"none\" otel.traces.exporter: \"zipkin\" otel.exporter.zipkin.endpoint: \"http://localhost:9411/api/v2/spans\" otel-traces-sampler: \"traceidratio\" otel.traces.sampler.arg: \"0.01\" ``` SQL tracing allows developers and DBAs to quickly diagnose and locate performance bottlenecks in applications. By collecting SQL tracing data through ShardingSphere-Agent and using visualization tools such as Zipkin and Jaeger, the time spent on each storage node can be analyzed, which helps to improve the stability and robustness of the application, and ultimately enhances the user experience. Finally, you're welcome to join [6] to discuss your questions, suggestions, or ideas about ShardingSphere and ShardingSphere-Agent."
}
] |
{
"category": "App Definition and Development",
"file_name": "ddl_drop_rule.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: DROP RULE statement [YSQL] headerTitle: DROP RULE linkTitle: DROP RULE description: Use the DROP RULE statement to remove a rule. menu: v2.18: identifier: ddldroprule parent: statements type: docs Use the `DROP RULE` statement to remove a rule. {{%ebnf%}} drop_rule {{%/ebnf%}} See the semantics of each option in the . Basic example. ```plpgsql yugabyte=# CREATE TABLE t1(a int4, b int4); yugabyte=# CREATE TABLE t2(a int4, b int4); yugabyte=# CREATE RULE t1tot2 AS ON INSERT TO t1 DO INSTEAD INSERT INTO t2 VALUES (new.a, new.b); yugabyte=# DROP RULE t1tot2 ON t1; ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "jsonb-object.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: jsonbobject() and jsonobject() [JSON] headerTitle: jsonbobject() and jsonobject() linkTitle: jsonb_object() description: Create a JSON object from SQL arrays that specify keys with their values of SQL data type text. menu: v2.18: identifier: jsonb-object parent: json-functions-operators weight: 150 type: docs Purpose: Create a JSON object from SQL arrays that specify keys with their values of SQL data type `text`. Signature For the `jsonb` variant: ``` input value: ] | [ text[], text[] ] return value: jsonb ``` Notes: The `jsonb_object()` function achieves a similar effect to but with significantly less verbose syntax. Precisely because you present a single `text` actual, you can avoid the fuss of dynamic invocation and of dealing with interior single quotes that this brings in its train. However, it has the limitation that the primitive values in the resulting JSON value can only be string. It has three overloads. The first overload has a single `text[]` formal whose actual text expresses the variadic intention conventionally: the alternating comma separated items are the respectively the key and the value of a key-value pair. ```plpgsql do $body$ declare array_values constant text[] := array['a', '17', 'b', $$'Hello', you$$, 'c', 'true']; j constant jsonb := jsonbobject(arrayvalues); expected_j constant jsonb := $${\"a\": \"17\", \"b\": \"'Hello', you\", \"c\": \"true\"}$$; begin assert j = expected_j, 'unexpected'; end; $body$; ``` Compare this result with the result from supplying the same primitive SQL values to the function. There, the data types of the SQL values are properly honored: The numeric `17` and the boolean `TRUE` are represented by the proper JSON primitive types. But with `jsonbobject()` it is not possible to express that `17` should be taken as a JSON number value and `TRUE` should be taken as a JSON boolean_ value. The potential loss of data type fidelity brought by `jsonbobject()` is a high price to pay for the reduction in verbosity. On the other hand, `jsonbobject()` has the distinct advantage over `jsonbbuildobject()` that you don't need to know statically how many key-value pairs the target JSON object is to have. If you think that it improves the clarity, you can use the second overload. This has a single `text` formalin other words an array of arrays. ```plpgsql do $body$ declare array_values constant text := array[ array['a', '17'], array['b', $$'Hello', you$$], array['c', 'true'] ]; j constant jsonb := jsonbobject(arrayvalues); expected_j constant jsonb := $${\"a\": \"17\", \"b\": \"'Hello', you\", \"c\": \"true\"}$$; begin assert j = expected_j, 'unexpected'; end; $body$; ``` This produces the identical result to that produced by the example for the first overload. Again, if you think that it improves the clarity, you can use the third overload. This has a two `text[]` formals. The first expresses the list keys of the key-values pairs. And the second expresses the list values of the key-values pairs. The items must correspond pairwise, and clearly each array must have the same number of items. 
For example: ```plpgsql do $body$ declare array_keys constant text[] := array['a', 'b', 'c' ]; array_values constant text[] := array['17', $$'Hello', you$$, 'true']; j constant jsonb := jsonb_object(array_keys, array_values); expected_j constant jsonb := $${\"a\": \"17\", \"b\": \"'Hello', you\", \"c\": \"true\"}$$; begin assert j = expected_j, 'unexpected'; end; $body$; ``` This, too, produces the identical result to that produced by the example for the first overload."
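To make the comparison mentioned above concrete, here is a hedged sketch (not from the original page) of how `jsonb_build_object()` preserves the SQL data types that `jsonb_object()` flattens to strings:

```plpgsql
do $body$
declare
  -- jsonb_build_object() takes alternating key/value arguments and keeps their SQL types.
  j          constant jsonb := jsonb_build_object('a', 17, 'b', $$'Hello', you$$, 'c', true);
  expected_j constant jsonb := $${"a": 17, "b": "'Hello', you", "c": true}$$;
begin
  assert j = expected_j, 'unexpected';
end;
$body$;
```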
}
] |
{
"category": "App Definition and Development",
"file_name": "declarative_hibernation.md",
"project_name": "EDB",
"subcategory": "Database"
} | [
{
"data": "CloudNativePG is designed to keep PostgreSQL clusters up, running and available anytime. There are some kinds of workloads that require the database to be up only when the workload is active. Batch-driven solutions are one such case. In batch-driven solutions, the database needs to be up only when the batch process is running. The declarative hibernation feature enables saving CPU power by removing the database Pods, while keeping the database PVCs. !!! Note Declarative hibernation is different from the existing implementation of . Imperative hibernation shuts down all Postgres instances in the High Availability cluster, and keeps a static copy of the PVCs of the primary that contain `PGDATA` and WALs. The plugin enables to exit the hibernation phase, by resuming the primary and then recreating all the replicas - if they exist. To hibernate a cluster, set the `cnpg.io/hibernation=on` annotation: ``` sh $ kubectl annotate cluster <cluster-name> --overwrite cnpg.io/hibernation=on ``` A hibernated cluster won't have any running Pods, while the PVCs are retained so that the cluster can be rehydrated at a later time. Replica PVCs will be kept in addition to the primary's PVC. The hibernation procedure will delete the primary Pod and then the replica Pods, avoiding switchover, to ensure the replicas are kept in sync. The hibernation status can be monitored by looking for the `cnpg.io/hibernation` condition: ``` sh $ kubectl get cluster <cluster-name> -o \"jsonpath={.status.conditions[?(.type==\\\"cnpg.io/hibernation\\\")]}\" { \"lastTransitionTime\":\"2023-03-05T16:43:35Z\", \"message\":\"Cluster has been hibernated\", \"reason\":\"Hibernated\", \"status\":\"True\", \"type\":\"cnpg.io/hibernation\" } ``` The hibernation status can also be read with the `status` sub-command of the `cnpg` plugin for `kubectl`: ``` sh $ kubectl cnpg status <cluster-name> Cluster Summary Name: cluster-example Namespace: default PostgreSQL Image: ghcr.io/cloudnative-pg/postgresql:16.2 Primary instance: cluster-example-2 Status: Cluster in healthy state Instances: 3 Ready instances: 0 Hibernation Status Hibernated Message Cluster has been hibernated Time 2023-03-05 16:43:35 +0000 UTC [..] ``` To rehydrate a cluster, either set the `cnpg.io/hibernation` annotation to `off`: ``` $ kubectl annotate cluster <cluster-name> --overwrite cnpg.io/hibernation=off ``` Or, just unset it altogether: ``` $ kubectl annotate cluster <cluster-name> cnpg.io/hibernation- ``` The Pods will be recreated and the cluster will resume operation."
}
] |
{
"category": "App Definition and Development",
"file_name": "distributed-tracing.md",
"project_name": "CloudEvents",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "This extension embeds context from so that distributed systems can include traces that span an event-driven system. This extension is meant to contain historical data of the parent trace, in order to diagnose eventual failures of the system through tracing platforms like Jaeger, Zipkin, etc. As with the main , the key words \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL NOT\", \"SHOULD\", \"SHOULD NOT\", \"RECOMMENDED\", \"MAY\", and \"OPTIONAL\" in this document are to be interpreted as described in . However, the scope of these key words is limited to when this extension is used. For example, an attribute being marked as \"REQUIRED\" does not mean it needs to be in all CloudEvents, rather it needs to be included only when this extension is being used. Type: `String` Description: Contains a version, trace ID, span ID, and trace options as defined in Constraints REQUIRED Type: `String` Description: a comma-delimited list of key-value pairs, defined by . Constraints OPTIONAL The Distributed Tracing Extension is not intended to replace the protocol specific headers for tracing, like the ones described in for HTTP. Given a single hop event transmission (from source to sink directly), the Distributed Tracing Extension, if used, MUST carry the same trace information contained in protocol specific tracing headers. Given a multi hop event transmission, the Distributed Tracing Extension, if used, MUST carry the trace information of the starting trace of the transmission. In other words, it MUST NOT carry trace information of each individual hop, since this information is usually carried using protocol specific headers, understood by tools like . Middleware between the source and the sink of the event could eventually add a Distributed Tracing Extension if the source didn't include any, in order to provide to the sink the starting trace of the transmission. An example with HTTP: ```bash CURL -X POST example/webhook.json \\ -H 'ce-id: 1' \\ -H 'ce-specversion: 1.0' \\ -H 'ce-type: example' \\ -H 'ce-source: http://localhost' \\ -H 'ce-traceparent: 00-0af7651916cd43dd8448eb211c80319c-b9c7c989f97918e1-01' \\ -H 'traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01' \\ -H 'tracestate: rojo=00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01,congo=lZWRzIHRoNhcm5hbCBwbGVhc3VyZS4` ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "commiter_rights.en.md",
"project_name": "ShardingSphere",
"subcategory": "Database"
} | [
{
"data": "+++ title = \"Committer Rights\" weight = 5 chapter = true +++ Every Apache Committer is eligible for a free license that grants them access to all the JetBrains IDEs such as IntelliJ IDEA, PyCharm, and other desktop tools. If you are an Apache ShardingSphere Committer and still havent received a free JetBrains license, please use your @apache.org email address to for the subsequent development of ShardingSphere. If you have obtained a free JetBrains subscription but it has expired, you can reactivate all product packs for free using your @apache.org email address and click 'Refresh license list' in your JetBrains desktop tools (e.g. IntelliJ IDEA) to refresh the subscription."
}
] |
{
"category": "App Definition and Development",
"file_name": "pip-343.md",
"project_name": "Pulsar",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "We use the to build the CLI tool, which is a good library, and is stable, but it misses modern CLI features likes autocompletion, flag/command suggestion, native image, etc. These features are very important because there are many commands in the CLI, but the jcommander doesn't give friendly hints when we use incorrect flags/commands, which makes the user experience not very friendly. In modern times, the supports these features, which is a popular library. The following is some comparison between jcommander and picocli: Error prompt: ``` bin/pulsar-admin clusters update cluster-a -b Need to provide just 1 parameter Unknown option: '-b' ``` Command suggestion: ``` bin/pulsar-admin cluste Expected a command, got cluste Unmatched argument at index 0: 'cluste' Did you mean: pulsar-admin clusters? ``` Use the picocli instead of the jcommander in our CLI tool: bin/pulsar bin/pulsar-admin bin/pulsar-client bin/pulsar-shell bin/pulsar-perf I'm sure this will greatly improve the user experience, and in the future we can also consider using native images to reduce runtime, and improve the CLI document based on picocli. This PR simply replaces jcommander and does not introduce any enhancements. In the CLI, is an important feature, and after this PIP is complete I will make a new PIP to support this feature. The jcommander and picocli have similar APIs, this will make the migration task very simple. This is : ``` utilityname[-c optionargument] ][operand...] ``` Use `@Command` instead of `@Parameters` to define the class as a command: ```java @Command(name = \"my-command\", description = \"Operations on persistent topics\") public class MyCommand { } ``` Use `@Option` instead of `@Parameter` to defined the option of command: ```java @Option(names = {\"-r\", \"--role\"}) private String role; ``` Use `@Parameters` to get the operand of command: ```java @Parameters(description = \"persistent://tenant/namespace/topic\", arity = \"1\") private String topicName; ``` Migrate jcommander converter to picocli converter: ```java public class TimeUnitToMillisConverter implements ITypeConverter<Long> { @Override public Long convert(String value) throws Exception { return TimeUnit.SECONDS.toMillis(RelativeTimeUtil.parseRelativeTimeInSeconds(value)); } } ``` Add the picocli entrypoint: ```java @Command public class MyCommand implements Callable<Integer> { // Picocli entrypoint. @Override public Integer call() throws Exception { // TODO // run(); return 0; } } ``` The above is a common migration approach, and then we need to consider pulsar-shell and custom command separately. pulsar-shell This is an interactive shell based on jline3 and jcommander, which includes pulsar-admin and pulsar-client commands. The jcommander does not provide autocompletion because we have implemented it ourselves. In picocli, they have to help us quickly build the interactive shell. custom command: This is an extension of pulsar-admin, and the plugin's implementation does not depend on jcommander. Since the bridge is used, we only need to change the generator code based on picocli. Fully compatible. Mailing List discussion thread: https://lists.apache.org/thread/ydg1q064cd11pxwz693frtk4by74q32f Mailing List voting thread: https://lists.apache.org/thread/1bpsr6tkgm00bb66dt2s74r15o4b37s3"
}
] |
{
"category": "App Definition and Development",
"file_name": "fix-12714.en.md",
"project_name": "EMQ Technologies",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "Fixed some field errors in prometheus api `/prometheus/stats`. Related metrics names: `emqxclustersessions_count` `emqxclustersessions_max` `emqxclusternodes_running` `emqxclusternodes_stopped` `emqxsubscriptionsshared_count` `emqxsubscriptionsshared_max` Fixed the issue in endpoint: `/stats` that the values of fields `subscriptions.shared.count` and `subscriptions.shared.max` can not be updated in time when the client disconnected or unsubscribed the Shared-Subscription."
}
] |
{
"category": "App Definition and Development",
"file_name": "HdfsUpgradeDomain.md",
"project_name": "Apache Hadoop",
"subcategory": "Database"
} | [
{
"data": "<!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> HDFS Upgrade Domain ==================== <!-- MACRO{toc|fromDepth=0|toDepth=3} --> Introduction The current default HDFS block placement policy guarantees that a blocks 3 replicas will be placed on at least 2 racks. Specifically one replica is placed on one rack and the other two replicas are placed on another rack during write pipeline. This is a good compromise between rack diversity and write-pipeline efficiency. Note that subsequent load balancing or machine membership change might cause 3 replicas of a block to be distributed across 3 different racks. Thus any 3 datanodes in different racks could store 3 replicas of a block. However, the default placement policy impacts how we should perform datanode rolling upgrade. explains how the datanodes can be upgraded in a rolling fashion without downtime. Because any 3 datanodes in different racks could store all the replicas of a block, it is important to perform sequential restart of datanodes one at a time in order to minimize the impact on data availability and read/write operations. Upgrading one rack at a time is another option; but that will increase the chance of data unavailability if there is machine failure at another rack during the upgrade. The side effect of this sequential datanode rolling upgrade strategy is longer upgrade duration for larger clusters. Architecture To address the limitation of block placement policy on rolling upgrade, the concept of upgrade domain has been added to HDFS via a new block placement policy. The idea is to group datanodes in a new dimension called upgrade domain, in addition to the existing rack-based grouping. For example, we can assign all datanodes in the first position of any rack to upgrade domain ud_01, nodes in the second position to upgrade domain ud_02 and so on. The namenode provides BlockPlacementPolicy interface to support any custom block placement besides the default block placement policy. A new upgrade domain block placement policy based on this interface is available in HDFS. It will make sure replicas of any given block are distributed across machines from different upgrade domains. By default, 3 replicas of any given block are placed on 3 different upgrade domains. This means all datanodes belonging to a specific upgrade domain collectively won't store more than one replica of any block. With upgrade domain block placement policy in place, we can upgrade all datanodes belonging to one upgrade domain at the same time without impacting data availability. Only after finishing upgrading one upgrade domain we move to the next upgrade domain until all upgrade domains have been upgraded. Such procedure will ensure no two replicas of any given block will be upgraded at the same time. 
This means we can upgrade many machines at the same time for a large cluster."
},
{
"data": "And as the cluster continues to scale, new machines will be added to the existing upgrade domains without impact the parallelism of the upgrade. For an existing cluster with the default block placement policy, after switching to the new upgrade domain block placement policy, any newly created blocks will conform the new policy. The old blocks allocated based on the old policy need to migrated the new policy. There is a migrator tool you can use. See HDFS-8789 for details. Settings To enable upgrade domain on your clusters, please follow these steps: Assign datanodes to individual upgrade domain groups. Enable upgrade domain block placement policy. Migrate blocks allocated based on old block placement policy to the new upgrade domain policy. How a datanode maps to an upgrade domain id is defined by administrators and specific to the cluster layout. A common way to use the rack position of the machine as its upgrade domain id. To configure mapping from host name to its upgrade domain id, we need to use json-based host configuration file. by setting the following property as explained in . | Setting | Value | |:- |:- | |`dfs.namenode.hosts.provider.classname` | `org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager`| |`dfs.hosts`| the path of the json hosts file | The json hosts file defines the property for all hosts. In the following example, there are 4 datanodes in 2 racks; the machines at rack position 01 belong to upgrade domain 01; the machines at rack position 02 belong to upgrade domain 02. ```json [ { \"hostName\": \"dcArackA01\", \"upgradeDomain\": \"01\" }, { \"hostName\": \"dcArackA02\", \"upgradeDomain\": \"02\" }, { \"hostName\": \"dcArackB01\", \"upgradeDomain\": \"01\" }, { \"hostName\": \"dcArackB02\", \"upgradeDomain\": \"02\" } ] ``` After each datanode has been assigned an upgrade domain id, the next step is to enable upgrade domain block placement policy with the following configuration as explained in . | Setting | Value | |:- |:- | |`dfs.block.replicator.classname`| `org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyWithUpgradeDomain` | After restarting of namenode, the new policy will be used for any new block allocation. If you change the block placement policy of an existing cluster, you will need to make sure the blocks allocated prior to the block placement policy change conform the new block placement policy. HDFS-8789 provides the initial draft patch of a client-side migration tool. After the tool is committed, we will be able to describe how to use the tool. Rolling restart based on upgrade domains During cluster administration, we might need to restart datanodes to pick up new configuration, new hadoop release or JVM version and so on. With upgrade domains enabled and all blocks on the cluster conform to the new policy, we can now restart datanodes in batches, one upgrade domain at a time. Whether it is manual process or via automation, the steps are Group datanodes by upgrade domains based on dfsadmin or JMX's datanode information. For each upgrade domain (Optional) put all the nodes in that upgrade domain to maintenance state (refer to ). Restart all those nodes. Check if all datanodes are healthy after restart. Unhealthy nodes should be decommissioned. (Optional) Take all those nodes out of maintenance state. Metrics -- Upgrade domains are part of namenode's JMX. As explained in , you can also verify upgrade domains using the following commands. Use `dfsadmin` to check upgrade domains at the cluster level. 
`hdfs dfsadmin -report` Use `fsck` to check upgrade domains of datanodes storing data at a specific path. `hdfs fsck <path> -files -blocks -upgradedomains`"
}
] |
{
"category": "App Definition and Development",
"file_name": "data-type-literals.md",
"project_name": "YDB",
"subcategory": "Database"
} | [
{
"data": "For primitive types, you can create literals based on string literals. Syntax `<Primitive type>( <string>[, <additional attributes>] )` Unlike `CAST(\"myString\" AS MyType)`: The check for literal's castability to the desired type occurs at validation. The result is non-optional. For the data types `Date`, `Datetime`, `Timestamp`, and `Interval`, literals are supported only in the format corresponding to . `Interval` has the following differences from the standard: It supports the negative sign for shifts to the past. Microseconds can be expressed as fractional parts of seconds. You can't use units of measurement exceeding one week. The options with the beginning/end of the interval and with repetitions, are not supported. For the data types `TzDate`, `TzDatetime`, `TzTimestamp`, literals are also set in the format meeting , but instead of the optional Z suffix, they specify the , separated by comma (for example, GMT or Europe/Moscow). {% include %} Examples ```yql SELECT Bool(\"true\"), Uint8(\"0\"), Int32(\"-1\"), Uint32(\"2\"), Int64(\"-3\"), Uint64(\"4\"), Float(\"-5\"), Double(\"6\"), Decimal(\"1.23\", 5, 2), -- up to 5 decimal digits, with 2 after the decimal point String(\"foo\"), Utf8(\"Hello\"), Yson(\"<a=1>[3;%false]\"), Json(@@{\"a\":1,\"b\":null}@@), Date(\"2017-11-27\"), Datetime(\"2017-11-27T13:24:00Z\"), Timestamp(\"2017-11-27T13:24:00.123456Z\"), Interval(\"P1DT2H3M4.567890S\"), TzDate(\"2017-11-27,Europe/Moscow\"), TzDatetime(\"2017-11-27T13:24:00,America/Los_Angeles\"), TzTimestamp(\"2017-11-27T13:24:00.123456,GMT\"), Uuid(\"f9d5cc3f-f1dc-4d9c-b97e-766e57ca4ccb\"); ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "CODE_COMMENT_STYLE.md",
"project_name": "TiKV",
"subcategory": "Database"
} | [
{
"data": "This document describes the code comment style applied to TiKV repositories. Since TiKV uses Rust as its development language, most of the styles or rules described in this guide are specific to Rust. When you are to commit, be sure to follow the style to write good code comments. To speed up the reviewing process To help maintain the code To improve the API document readability To improve the development efficiency of the whole team Write a comment where/when there is context that is missing from the code or is hard to deduce from the code. Specifically, use a comment: For important code For obscure code For tricky or interesting code For a complex code block If a bug exists in the code but you cannot fix it or you just want to ignore it for the moment If the code is not optimal but you dont have a smarter way now To remind yourself or others of missing functionality or upcoming requirements not present in the code A comment is generally used for: Public items exported from crate (Doc comments) Module Type Constant Function Fields Method Variable Complex algorithm Test case TODO FIXME Non-doc comment Used to document implementation details. Use `//` for a line comment > Note: Block comments (`/ ... /`) are not recommended unless for personal reasons or temporary purposes prior to being converted to line comments. Doc comment Used to document the interface of code (structures, fields, macros, etc.). Use `///` for item documentation (functions, attributes, structures, etc.). Use `//!` for module level documentation. > For more detailed guidelines on Rust doc comments, see . Place the single-line and block comment above the code its annotating. Fold long lines of comments. The maximum width for a line is 100 characters. Use relative URLs when appropriate for Rustdoc links. Word Use American English rather than British English. color, canceling, synchronize (Recommended) colour, cancelling, synchronise (Not recommended) Use correct spelling. Use standard or official capitalization. TiKV, TiDB-Binlog, Region, gRPC, RocksDB, GC, k8s, , (Right) Tikv, TiDB Binlog, region, grpc, rocksdb, gc, k8S, MyDumper, Prometheus PushGateway (Wrong) Use words and expressions consistently. \"dead link\" vs. \"broken link\" (Only one of them can appear in a single document.) Do not use lengthy compound words. Do not abbreviate unless it is necessary (for readability purposes). \"we\" should be used only when it means the code writer and the reader. Sentence Use standard grammar and correct punctuation. Use relatively short sentences. For each comment, capitalize the first letter and end the sentence with a period. When used for description, comments should be descriptive rather than imperative. Opens the file (Right) Open the file (Wrong) Use \"this\" instead of \"the\" to refer to the current thing. Gets the toolkit for this component (Recommended) Gets the toolkit for the component (Not recommended) The Markdown format is allowed. Opens the `log` file The Code Comment Style defined in this document follows the . The following language uses are deemed as taboos and are not acceptable in code comments: Sexualized language Racial or political allusions Public or private harassment Language that contains private information, such as a physical or electronic address, without explicit permission Other inapproiate uses Comment code while writing it. Do not assume the code is self-evident. Avoid unnecessary comments for simple code. Write comments as if they were for you. Make sure the comment is up-to-date. 
Make sure you keep comments up to date when you edit code. Let the code speak for itself. Thanks for your contribution!"
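Here is the short Rust sketch referred to above. It is illustrative only (the module and function are made up), showing `//!` for module-level documentation, `///` for item documentation, and `//` for a non-doc implementation comment written in the descriptive mood.

```rust
//! Utilities for parsing region identifiers.
//!
//! This module-level doc comment uses `//!`.

/// Parses a region id from its decimal string form.
///
/// Returns `None` if `s` is not a valid `u64`. This item-level doc
/// comment uses `///` and is descriptive ("Parses ..."), not imperative.
pub fn parse_region_id(s: &str) -> Option<u64> {
    // Non-doc line comment for an implementation detail: trims whitespace
    // so hand-edited inputs are tolerated.
    s.trim().parse().ok()
}
```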
}
] |
{
"category": "App Definition and Development",
"file_name": "composable-protocols.md",
"project_name": "ClickHouse",
"subcategory": "Database"
} | [
{
"data": "slug: /en/operations/settings/composable-protocols sidebar_position: 64 sidebar_label: Composable Protocols Composable protocols allows more flexible configuration of TCP access to the ClickHouse server. This configuration can co-exist with or replace conventional configuration. Example: ``` xml <protocols> </protocols> ``` Example: ``` xml <protocols> <!-- plain_http module --> <plain_http> <type>http</type> </plain_http> </protocols> ``` where: `plain_http` - name which can be referred by another layer `type` - denotes protocol handler which will be instantiated to process data, set of protocol handlers is predefined: `tcp` - native clickhouse protocol handler `http` - http clickhouse protocol handler `tls` - TLS encryption layer `proxy1` - PROXYv1 layer `mysql` - MySQL compatibility protocol handler `postgres` - PostgreSQL compatibility protocol handler `prometheus` - Prometheus protocol handler `interserver` - clickhouse interserver handler :::note `gRPC` protocol handler is not implemented for `Composable protocols` ::: Example: ``` xml <protocols> <plain_http> <type>http</type> <!-- endpoint --> <host>127.0.0.1</host> <port>8123</port> </plain_http> </protocols> ``` If `<host>` is omitted, then `<listen_host>` from root config is used. Example: definition for HTTPS protocol ``` xml <protocols> <!-- http module --> <plain_http> <type>http</type> </plain_http> <!-- https module configured as a tls layer on top of plain_http module --> <https> <type>tls</type> <impl>plain_http</impl> <host>127.0.0.1</host> <port>8443</port> </https> </protocols> ``` Example: definition for HTTP (port 8123) and HTTPS (port 8443) endpoints ``` xml <protocols> <plain_http> <type>http</type> <host>127.0.0.1</host> <port>8123</port> </plain_http> <https> <type>tls</type> <impl>plain_http</impl> <host>127.0.0.1</host> <port>8443</port> </https> </protocols> ``` Example: `anotherhttp` endpoint is defined for `plainhttp` module ``` xml <protocols> <plain_http> <type>http</type> <host>127.0.0.1</host> <port>8123</port> </plain_http> <https> <type>tls</type> <impl>plain_http</impl> <host>127.0.0.1</host> <port>8443</port> </https> <another_http> <impl>plain_http</impl> <host>127.0.0.1</host> <port>8223</port> </another_http> </protocols> ``` Example: for TLS layer private key (`privateKeyFile`) and certificate files (`certificateFile`) can be specified ``` xml <protocols> <plain_http> <type>http</type> <host>127.0.0.1</host> <port>8123</port> </plain_http> <https> <type>tls</type> <impl>plain_http</impl> <host>127.0.0.1</host> <port>8443</port> <privateKeyFile>another_server.key</privateKeyFile> <certificateFile>another_server.crt</certificateFile> </https> </protocols> ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "CHANGELOG.0.8.0.md",
"project_name": "Apache Hadoop",
"subcategory": "Database"
} | [
{
"data": "<! --> | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | hadoop dfs copy, move commands should accept multiple source files as arguments | Major | . | dhruba borthakur | | | | MapFile constructor should accept Progressible | Major | io | Doug Cutting | Doug Cutting | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | DFSClient should use debug level to log missing data-node. | Minor | . | Konstantin Shvachko | Konstantin Shvachko | | | fsck should execute faster | Major | . | Yoram Arnon | Milind Bhandarkar | | | Job Name should not ben empty, if its is not given bu user, Hadoop should use original Jar name as the job name. | Trivial | . | Sanjay Dahiya | Sanjay Dahiya | | | add a little servlet to display the server's thread call stacks | Minor | . | Owen O'Malley | Owen O'Malley | | | NNBench example comments are incorrect and code contains cut-and-paste error | Trivial | . | Nigel Daley | Nigel Daley | | | Failed tasks should not be put at the end of the job tracker's queue | Major | . | Owen O'Malley | Sanjay Dahiya | | | Format of junit output should be configurable | Minor | . | Nigel Daley | Nigel Daley | | | Generic types for FSNamesystem | Major | . | Konstantin Shvachko | Konstantin Shvachko | | | Map outputs can't have a different type of compression from the reduce outputs | Major | . | Owen O'Malley | Owen O'Malley | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Streaming should execute Unix commands and scripts in well known languages without user specifying the path | Major | . | arkady borkovsky | dhruba borthakur | | | namenode heartbeat interval should be configurable | Major | . | Wendy Chien | Milind Bhandarkar | | | JobTracker History bug - kill() ed tasks are logged wrongly as finished. | Major | . | Sanjay Dahiya | Sanjay Dahiya | | | DFSShell throws out arrayoutofbounds exceptions if the number of arguments is not right | Minor | . | Mahadev konar | dhruba borthakur | | | one replica of a file should be written locally if possible | Major | . | Yoram Arnon | dhruba borthakur | | | Task Tracker offerService does not adequately protect from exceptions | Major | . | Owen O'Malley | Owen O'Malley | | | hadoop dfs command line doesn't exit with status code on error | Major |"
},
{
"data": "| Marco Nicosia | dhruba borthakur | | | Test files missing copyright headers | Trivial | . | Nigel Daley | Nigel Daley | | | MiniMRCluster missing synchronization | Major | . | Nigel Daley | Nigel Daley | | | DFS client should try to re-new lease if it gets a lease expiration exception when it adds a block to a file | Major | . | Runping Qi | dhruba borthakur | | | Name-node should demand a block report from resurrected data-nodes. | Major | . | Konstantin Shvachko | Konstantin Shvachko | | | Explicit timeout for ipc.Client | Major | . | Konstantin Shvachko | Konstantin Shvachko | | | TaskTracker missing synchronization around tasks variable access | Major | . | Nigel Daley | Nigel Daley | | | fix warning about pathSpec should start with '/' or '\\*' : mapOutput | Minor | . | Owen O'Malley | Owen O'Malley | | | source headers must conform to new Apache guidelines | Major | . | Doug Cutting | Doug Cutting | | | unit test hangs when a cluster is running on the same machine | Minor | . | Wendy Chien | Wendy Chien | | | DFS is succeptible to data loss in case of name node failure | Major | . | Yoram Arnon | Konstantin Shvachko | | | fsck does not handle arguments -blocks and -locations correctly | Major | . | Milind Bhandarkar | Milind Bhandarkar | | | DataNode and NameNode main() should catch and report exceptions. | Major | . | Konstantin Shvachko | Raghu Angadi | | | the javadoc currently generates lot of warnings about bad fields | Trivial | . | Owen O'Malley | Nigel Daley | | | DFS Always Reports 0 Bytes Used | Minor | . | Albert Chern | Raghu Angadi | | | ant test is failing | Major | . | Mahadev konar | Mahadev konar | | | build doesn't fail if libhdfs test(s) fail | Minor | . | Nigel Daley | Nigel Daley | | | if the jobinit thread gets killed the jobtracker keeps running without doing anything. | Minor | . | Mahadev konar | Owen O'Malley | | | Upgrade to trunk causes data loss. | Blocker | . | Milind Bhandarkar | Milind Bhandarkar | | | Some calls to mkdirs do not check return value | Major | . | Wendy Chien | Wendy Chien | | | Distributed cache creates unnecessary symlinks if asked for creating symlinks | Major | . | Mahadev konar | Mahadev konar | | | hadoop -rmr does NOT process multiple arguments and does not complain | Minor | . | dhruba borthakur | dhruba borthakur | | | Chain reaction in a big cluster caused by simultaneous failure of only a few data-nodes. | Major | . | Konstantin Shvachko | Sameer Paranjpye |"
}
] |
{
"category": "App Definition and Development",
"file_name": "discovery.md",
"project_name": "Backstage",
"subcategory": "Application Definition & Image Build"
} | [
{
"data": "id: discovery title: Gerrit Discovery sidebar_label: Discovery description: Automatically discovering catalog entities from Gerrit repositories The Gerrit integration has a special entity provider for discovering catalog entities from Gerrit repositories. The provider uses the \"List Projects\" API in Gerrit to get a list of repositories and will automatically ingest all `catalog-info.yaml` files stored in the root of the matching projects. As this provider is not one of the default providers, you will first need to install the Gerrit provider plugin: ```bash yarn --cwd packages/backend add @backstage/plugin-catalog-backend-module-gerrit ``` Then add the plugin to the plugin catalog `packages/backend/src/plugins/catalog.ts`: ```ts / packages/backend/src/plugins/catalog.ts / import { GerritEntityProvider } from '@backstage/plugin-catalog-backend-module-gerrit'; const builder = await CatalogBuilder.create(env); / ... other processors and/or providers ... */ builder.addEntityProvider( GerritEntityProvider.fromConfig(env.config, { logger: env.logger, scheduler: env.scheduler, }), ); ``` To use the discovery processor, you'll need a Gerrit integration . Then you can add any number of providers. ```yaml catalog: providers: gerrit: yourProviderId: # identifies your dataset / provider independent of config changes host: gerrit-your-company.com branch: master # Optional query: 'state=ACTIVE&prefix=webapps' schedule: frequency: { minutes: 30 } timeout: { minutes: 3 } backend: host: gerrit-your-company.com branch: master # Optional query: 'state=ACTIVE&prefix=backend' ``` The provider configuration is composed of three parts: `host`: the host of the Gerrit integration to use. `branch` (optional): the branch where we will look for catalog entities (defaults to \"master\"). `query`: this string is directly used as the argument to the \"List Project\" API. Typically, you will want to have some filter here to exclude projects that will never contain any catalog files."
}
] |
{
"category": "App Definition and Development",
"file_name": "transactions-overview.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: Transactions overview headerTitle: Transactions overview linkTitle: Overview description: An overview of transactions work in YugabyteDB. menu: v2.18: identifier: architecture-transactions-overview parent: architecture-acid-transactions weight: 10 type: docs Transactions and strong consistency are a fundamental requirement for any RDBMS. DocDB has been designed for strong consistency. It supports fully distributed atomicity, consistency, isolation, durability (ACID) transactions across rows, multiple tablets, and multiple nodes at any scale. Transactions can span across tables in DocDB. A transaction is a sequence of operations performed as a single logical unit of work. The intermediate states of the database as a result of applying the operations inside a transaction are not visible to other concurrent transactions, and if a failure occurs that prevents the transaction from completing, then none of the steps affect the database. Note that all update operations inside DocDB are considered to be transactions, including operations that update only one row, as well as those that update multiple rows that reside on different nodes. If `autocommit` mode is enabled, each statement is executed as one transaction. A transaction in a YugabyteDB cluster may need to update multiple rows that span across nodes in a cluster. In order to be ACID-compliant, the various updates made by this transaction should be visible instantaneously as of a fixed time, irrespective of the node in the cluster that reads the update. To achieve this, the nodes of the cluster must agree on a global notion of time, which requires all nodes to have access to a highly-available and globally-synchronized clock. , used by Google Cloud Spanner, is an example of such a clock with tight error bounds. However, this type of clock is not available in many deployments. Physical time clocks (or wall clocks) cannot be perfectly synchronized across nodes and cannot order events with the purpose to establish a causal relationship across nodes. YugabyteDB uses hybrid logical clocks (HLC) which solve the problem by combining physical time clocks that are coarsely synchronized using NTP with Lamport clocks that track causal relationships. Each node in a YugabyteDB cluster first computes its HLC represented as a tuple (physical time component, logical component). HLCs generated on any node are strictly monotonic, and are compared as a tuple. When comparing two HLCs, the physical time component takes precedence over the logical component. Physical time component: YugabyteDB uses the physical clock (`CLOCK_REALTIME` in Linux) of a node to initialize the physical time component of its HLC. Once initialized, the physical time component can only be updated to a higher value. Logical component: For a given physical time component, the logical component of the HLC is a monotonically increasing number that provides ordering of events happening in that same physical time. This is initially set to 0. If the physical time component is updated at any point, the logical component is reset to 0. On any RPC communication between two nodes, HLC values are"
},
{
"data": "The node with the lower HLC updates its HLC to the higher value. If the physical time on a node exceeds the physical time component of its HLC, the latter is updated to the physical time and the logical component is set to 0. Thus, HLCs on a node are monotonically increasing. The same HLC is used to determine the read point in order to determine which updates should be visible to end clients. If an update has safely been replicated onto a majority of nodes, as per the Raft protocol, that update operation can be acknowledged as successful to the client and it is safe to serve all reads up to that HLC. This forms the foundation for . YugabyteDB maintains data consistency internally using multi-version concurrency control (MVCC) without the need to lock rows. Each transaction works on a version of the data in the database as of some hybrid timestamp. This prevents transactions from reading the intermediate updates made by concurrently-running transactions, some of which may be updating the same rows. Each transaction, however, can see its own updates, thereby providing transaction isolation for each database session. Using MVCC minimizes lock contention during the execution of multiple concurrent transactions. YugabyteDB implements MVCC and internally keeps track of multiple versions of values corresponding to the same key (for example, of a particular column in a particular row), as described in . The last part of each key is a timestamp, which enables quick navigation to a particular version of a key in the RocksDB key-value store. The timestamp used for MVCC comes from the algorithm, a distributed timestamp assignment algorithm that combines the advantages of local real-time (physical) clocks and Lamport clocks. The hybrid time algorithm ensures that events connected by a causal chain of the form \"A happens before B on the same server\" or \"A happens on one server, which then sends an RPC to another server, where B happens\", always get assigned hybrid timestamps in an increasing order. This is achieved by propagating a hybrid timestamp with most RPC requests, and always updating the hybrid time on the receiving server to the highest value observed, including the current physical time on the server. Multiple aspects of YugabyteDB's transaction model rely on these properties of hybrid time. Consider the following examples: Hybrid timestamps assigned to committed Raft log entries in the same tablet always keep increasing, even if there are leader changes. This is because the new leader always has all committed entries from previous leaders, and it makes sure to update its hybrid clock with the timestamp of the last committed entry before appending new entries. This property simplifies the logic of selecting a safe hybrid time to select for single-tablet read"
},
{
"data": "A request trying to read data from a tablet at a particular hybrid time needs to ensure that no changes happen in the tablet with timestamp values lower than the read timestamp, which could lead to an inconsistent result set. The need to read from a tablet at a particular timestamp arises during transactional reads across multiple tablets. This condition becomes easier to satisfy due to the fact that the read timestamp is chosen as the current hybrid time on the YB-TServer processing the read request, so hybrid time on the leader of the tablet being read from immediately becomes updated to a value that is at least as high as the read timestamp. Then the read request only has to wait for any relevant entries in the Raft queue with timestamp values lower than the read timestamp to be replicated and applied to RocksDB, and it can proceed with processing the read request after that. YugabyteDB supports the following transaction isolation levels: Read Committed, which maps to the SQL isolation level of the same name. Serializable, which maps to the SQL isolation level of the same name. Snapshot, which maps to the SQL isolation level `REPEATABLE READ`. For more information, see . As with PostgreSQL, YugabyteDB provides various row-level lock modes to control concurrent access to data in tables. These modes can be used for application-controlled locking in cases where MVCC does not provide the desired behavior. For more information, see . End-user statements seamlessly map to one of the types of transactions inside YugabyteDB. The transaction manager of YugabyteDB automatically detects transactions that update a single row (as opposed to transactions that update rows across tablets or nodes). In order to achieve high performance, the updates to a single row directly update the row without having to interact with the transaction status tablet using a single row transaction path (also known as fast path). For more information, see . Because single-row transactions do not have to update the transaction status table, their performance is much higher than . `INSERT`, `UPDATE`, and `DELETE` single-row SQL statements map to single row transactions. All single-row `INSERT` statements: ```sql INSERT INTO table (columns) VALUES (values); ``` Single-row `UPDATE` statements that specify all primary keys: ```sql UPDATE table SET column = <newvalue> WHERE <allprimarykeyvaluesarespecified>; ``` Single-row upsert statements using `UPDATE` .. `ON CONFLICT`: ```sql INSERT INTO table (columns) VALUES (values) ON CONFLICT DO UPDATE SET <values>; ``` If updates are performed on an existing row, they should match the set of values specified in the `INSERT` clause. Single-row `DELETE` statements that specify all primary keys: ```sql DELETE FROM table WHERE <allprimarykeyvaluesare_specified>; ``` A transaction that impacts a set of rows distributed across multiple tablets (which would be hosted on different nodes in the most general case) use a distributed transactions path to execute transactions. Implementing distributed transactions in YugabyteDB requires the use of a transaction manager that can coordinate various operations included in the transaction and finally commit or abort the transaction as needed. For more information, see ."
}
] |
{
"category": "App Definition and Development",
"file_name": "JobEnvConfig.md",
"project_name": "SeaTunnel",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "This document describes env configuration information, the common parameters can be used in all engines. In order to better distinguish between engine parameters, the additional parameters of other engine need to carry a prefix. In flink engine, we use `flink.` as the prefix. In the spark engine, we do not use any prefixes to modify parameters, because the official spark parameters themselves start with `spark.` The following configuration parameters are common to all engines This parameter configures the task name. Third-party packages can be loaded via `jars`, like `jars=\"file://local/jar1.jar;file://local/jar2.jar\"` You can configure whether the task is in batch mode or stream mode through `job.mode`, like `job.mode = \"BATCH\"` or `job.mode = \"STREAMING\"` Gets the interval in which checkpoints are periodically scheduled. In `STREAMING` mode, checkpoints is required, if you do not set it, it will be obtained from the application configuration file `seatunnel.yaml`. In `BATCH` mode, you can disable checkpoints by not setting this parameter. This parameter configures the parallelism of source and sink. Used to control the default retry times when a job fails. The default value is 3, and it only works in the Zeta engine. Specify the method of encryption, if you didn't have the requirement for encrypting or decrypting config files, this option can be ignored. For more details, you can refer to the documentation Here are some SeaTunnel parameter names corresponding to the names in Flink, not all of them, please refer to the official for more. | Flink Configuration Name | SeaTunnel Configuration Name | ||| | pipeline.max-parallelism | flink.pipeline.max-parallelism | | execution.checkpointing.mode | flink.execution.checkpointing.mode | | execution.checkpointing.timeout | flink.execution.checkpointing.timeout | | ... | ... | Because spark configuration items have not been modified, they are not listed here, please refer to the official ."
}
] |
{
"category": "App Definition and Development",
"file_name": "values.md",
"project_name": "Beam",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: \"Values\" <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> {{< localstorage language language-py >}} {{< button-pydoc path=\"apache_beam.transforms.util\" class=\"Values\" >}} Takes a collection of key-value pairs, and returns the value of each element. In the following example, we create a pipeline with a `PCollection` of key-value pairs. Then, we apply `Values` to extract the values and discard the keys. {{< playground height=\"700px\" >}} {{< playgroundsnippet language=\"py\" path=\"SDKPYTHON_Values\" show=\"values\" >}} {{< /playground >}} for extracting the key of each component. swaps the key and value of each element. {{< button-pydoc path=\"apache_beam.transforms.util\" class=\"Values\" >}}"
}
] |
{
"category": "App Definition and Development",
"file_name": "dcl_drop_group.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: DROP GROUP statement [YSQL] linkTitle: DROP GROUP description: Use the DROP GROUP statement to drop a role. DROP GROUP is an alias for DROP ROLE and is used to drop a role. menu: v2.18: identifier: dcldropgroup parent: statements type: docs Use the `DROP GROUP` statement to drop a role. `DROP GROUP` is an alias for and is used to drop a role. {{%ebnf%}} drop_group {{%/ebnf%}} See for more details. Drop a group. ```plpgsql yugabyte=# DROP GROUP SysAdmin; ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "azure-functions.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: How to Develop Azure Functions with YugabyteDB headerTitle: Develop Azure Functions linkTitle: Azure Functions description: How to Develop Azure Functions with YugabyteDB image: /images/tutorials/azure/icons/Function-App-Icon.svg headcontent: Use YugabyteDB as the backend for Azure Functions menu: preview_tutorials: identifier: tutorials-azure-functions parent: tutorials-azure weight: 20 type: docs In this tutorial, we'll guide you through the steps required to develop and deploy a serverless function using Azure Functions and YugabyteDB. Serverless functions serve many use cases, including API endpoints, scheduled jobs, and file processing. Azure Functions work with a number of , which allow developers to define precisely when a function will be invoked and how it will interact with other services. In the following sections, you will: Cover the prerequisites for developing an Azure Function backed by our fully managed DBaaS, . Deploy a database cluster to Azure on YugabyteDB Managed. Develop an Azure Function using an HTTP trigger. Deploy this serverless function to Azure. Let's begin by installing the dependencies required to begin effectively developing Azure Functions. First, visit GitHub for the we will be deploying to Azure. We'll develop and deploy an HTTP trigger function, which connects to YugabyteDB and returns the current inventory of our shoe store, YB Shoes. A YugabyteDB Managed account. Sign up for a . A subscription, with a resource group and storage account Access to the resource v18+ For steps on creating a cluster in YugabyteDB Managed, see the . For a configuration that provides fault tolerance across availability zones, deploy a on Azure in the westus3 region. However, you can start with an always-free single-node . Add your computer's IP address to the cluster so that you can run your serverless functions locally in development. Now that we have a working cluster in YugabyteDB Managed, let's add some data. Now that our cluster is running in the cloud, we can seed it with data using the provided `schema.sql` and `data.sql` files. Use the to connect to your cluster. Execute the commands in the `schema.sql` script against your cluster. Execute the commands in the `data.sql` script against your cluster. With your cluster seeded with data, it's time to build the serverless function to connect to it. The Azure Functions Core Tools provide a command-line interface for developing functions on your local machine and deploying them to Azure. Initialize a new Azure Functions project. ```sh func init YBAzureFunctions --worker-runtime javascript --model V4 ``` Create a new HTTP trigger"
},
{
"data": "```sh cd YBAzureFunctions func new --template \"Http Trigger\" --name GetShoeInventory ``` Install the YugabyteDB node-postgres Smart Driver. ```sh npm install @yugabytedb/pg ``` Update the boilerplate code in GetShoeInventory.js. ```javascript const { app } = require(\"@azure/functions\"); const { Client } = require(\"pg\"); app.http(\"GetShoeInventory\", { methods: [\"GET\"], authLevel: \"anonymous\", handler: async () => { // Read the PostgreSQL connection settings from local.settings.json console.log(\"process.env.DBHOST:\", process.env.DBHOST); const client = new Client({ user: process.env.DB_USERNAME, host: process.env.DB_HOST, database: \"yugabyte\", password: process.env.DB_PASSWORD, port: 5433, max: 10, idleTimeoutMillis: 0, ssl: { rejectUnauthorized: true, ca: atob(process.env.DB_CERTIFICATE), servername: process.env.DB_HOST, }, }); try { // Connect to the PostgreSQL database await client.connect(); // Query YugabyteDB for shoe inventory const query = \"SELECT i.quantity, s.model, s.brand from inventory i INNER JOIN shoes s on i.shoe_id = s.id;\"; const result = await client.query(query); // Process the query result const data = result.rows; // Close the database connection await client.end(); return { status: 200, body: JSON.stringify(data), }; } catch (error) { console.error(\"Error connecting to the database:\", error); return { status: 500, body: \"Internal Server Error\", }; } }, }); ``` Update local.settings.json with the configuration settings required to run the GetShoeInventory function locally. ```conf local.settings.json ... \"DB_USERNAME\": \"admin\", \"DBPASSWORD\": [YUGABYTEDB_PASSWORD], \"DBHOST\": [YUGABYTEDB_HOST], \"DB_NAME\": \"yugabyte\", \"DBCERTIFICATE\": [BASE64ENCODEDYUGABYTEDBCERTIFICATE] ``` Run the function locally. ```sh func start ``` Test your function in the browser at <http://localhost:7071/api/GetShoeInventory>. Now we'll deploy our function to Azure. We can deploy our application to Azure using the . Create a Function App. ```sh az functionapp create --resource-group RESOURCEGROUPNAME --consumption-plan-location eastus2 --runtime node --runtime-version 18 --functions-version 4 --name YBAzureFunctions --storage-account STORAGEACCOUNTNAME ``` Get the and add them to your YugabyteDB cluster's . This ensures that a connection can be made between Azure Functions and YugabyteDB. ```sh az functionapp show --resource-group RESOURCEGROUPNAME --name YBAzureFunctions --query possibleOutboundIpAddresses --output tsv ``` You can also obtain these addresses in the Networking tab of the Azure portal. Configure the application settings. ```sh az functionapp config appsettings set -g RESOURCEGROUPNAME -n YBAzureFunctions --setting DBHOST=[YUGABYTEDBHOST] DBUSERNAME=admin DBPASSWORD=[YUGABYTEDBPASSWORD] DBCERTIFICATE=[BASE64ENCODEDYUGABYTEDB_CERTIFICATE] ``` Publish Function App to Azure. ```sh func azure functionapp publish YBAzureFunctions ``` Verify that the function was published successfully. ```sh curl https://ybazurefunctions.azurewebsites.net/api/GetShoeInventory ``` ```output.json [{\"quantity\":24,\"model\":\"speedgoat 5\",\"brand\":\"hoka one one\"},{\"quantity\":74,\"model\":\"adizero adios pro 3\",\"brand\":\"adidas\"},{\"quantity\":13,\"model\":\"torrent 2\",\"brand\":\"hoka one one\"},{\"quantity\":99,\"model\":\"vaporfly 3\",\"brand\":\"nike\"}] ``` As you can see, it's easy to begin developing and publishing database-backed Azure Functions with YugabyteDB. 
If you're also interested in building a Node.js web application for YB Shoes using Azure App Service and YugabyteDB, check out the blog ."
}
] |
{
"category": "App Definition and Development",
"file_name": "yb-perf-v1.0.7.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "3 node, 16 vCPUs. Each write is replicated 3 ways internally. Each key-value is around 64 bytes combined. See for more info. CassandraKeyValue (see ) 97K writes/sec at 2.6ms (256 writers) 220K reads/sec at 1.2ms (256 readers) CassandraSecondaryIndex (see ) 5.9K writes/sec at 10.7ms (64 writers) 200K reads/sec at 1.3ms (256 readers) CassandraBatchKeyValue (see ) 220K writes/sec at 14ms (32 writes) 258K writes/sec at 24ms (64 writers) RedisKeyValue (see ) 89K writes/sec at 2.9ms (256 writers) 170K reads/sec at 1.5ms (256 readers) RedisPipelinedKeyValue (see ) 536K writes/sec at 21ms (24 writers) 538K reads/sec at 14ms (16 readers) See . These tests were done on GCP (Google Cloud). Replication factor (num nodes): 3 Cluster node type: n1-standard-16 CPU type: Intel(R) Xeon(R) CPU @ 2.30GHz specs: {\"memory\": \"60GB\", \"vCPUs\": \"16\", \"numDisks\": \"2\", \"diskSize\": \"375GB\", \"diskType\": \"SSD\"} Each key-value is around 64 bytes combined. Software version: 1.0.7 Compiled on: Aug 31, 2018 ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Read latency, ms/op | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Write throughput, ops/sec | 97104.20 | 94671.87 | 98545.61 | 1153.82 | 97408.51 | 94671.87 | 94681.18 | 94871.44 | 98410.18 | 98539.33 | 98545.61 | | Write latency, ms/op | 2.64 | 2.60 | 2.70 | 0.03 | 2.63 | 2.60 | 2.60 | 2.60 | 2.70 | 2.70 | 2.70 | | Load tester CPU, user, % | 23.74 | 22.30 | 33.80 | 2.51 | 22.90 | 22.30 | 22.31 | 22.40 | 26.18 | 33.08 | 33.80 | | Load tester CPU, system, % | 9.02 | 8.10 | 14.50 | 1.66 | 8.50 | 8.10 | 8.11 | 8.22 | 12.50 | 14.39 | 14.50 | | Cluster node CPU, user, % | 63.28 | 42.30 | 66.30 | 4.61 | 64.40 | 42.30 | 50.26 | 62.07 | 65.43 | 65.96 | 66.30 | | Cluster node CPU, system, % | 17.16 | 13.10 | 18.10 | 0.81 | 17.30 | 13.10 | 15.50 | 16.70 | 17.80 | 17.86 | 18.10 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 19798.69 | 18668.88 | 20765.01 | 790.45 | 19825.69 | 18668.88 | 18677.57 | 18843.48 | 20729.09 | 20763.28 | 20765.01 | | Read latency, ms/op | 0.81 | 0.77 | 0.86 | 0.03 | 0.81 | 0.77 | 0.77 | 0.77 | 0.85 | 0.86 | 0.86 | | Write throughput, ops/sec | 76801.56 | 74526.34 | 78264.63 | 944.52 | 76780.56 | 74526.34 | 74579.45 | 75625.62 | 78181.10 | 78260.99 | 78264.63 | | Write latency, ms/op | 2.50 | 2.45 | 2.58 | 0.03 | 2.50 | 2.45 | 2.45 | 2.45 | 2.54 | 2.58 | 2.58 | | Load tester CPU, user, % | 22.06 | 20.60 | 27.10 | 1.52 | 21.80 | 20.60 | 20.63 | 20.92 | 24.98 | 26.95 | 27.10 | | Load tester CPU, system, % |"
},
{
"data": "| 5 | 11.40 | 1.18 | 8.70 | 5 | 5.35 | 8.52 | 10.90 | 11.39 | 11.40 | | Cluster node CPU, user, % | 62.43 | 19.40 | 66 | 9.13 | 64.40 | 19.40 | 36.95 | 62.74 | 65.50 | 65.67 | 66 | | Cluster node CPU, system, % | 16.67 | 6.90 | 18.50 | 2.05 | 17 | 6.90 | 11.26 | 16.47 | 17.73 | 17.90 | 18.50 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 70602.47 | 53972.20 | 76536.04 | 7895.53 | 75154.52 | 53972.20 | 54157.66 | 57902.37 | 76372.29 | 76528.52 | 76536.04 | | Read latency, ms/op | 0.92 | 0.84 | 1.18 | 0.11 | 0.85 | 0.84 | 0.84 | 0.84 | 1.11 | 1.18 | 1.18 | | Write throughput, ops/sec | 48591.69 | 46328.19 | 49775.89 | 1049.38 | 49014.84 | 46328.19 | 46338.92 | 46581.32 | 49667.41 | 49771.57 | 49775.89 | | Write latency, ms/op | 2.64 | 2.57 | 2.76 | 0.06 | 2.62 | 2.57 | 2.57 | 2.58 | 2.75 | 2.76 | 2.76 | | Load tester CPU, user, % | 27.50 | 21.60 | 76.90 | 11.38 | 25.50 | 21.60 | 21.77 | 23.34 | 26.36 | 71.85 | 76.90 | | Load tester CPU, system, % | 9.66 | 1.70 | 14.30 | 2.22 | 9.70 | 1.70 | 2.43 | 9.02 | 12.56 | 14.17 | 14.30 | | Cluster node CPU, user, % | 62.73 | 4.70 | 70.40 | 13.57 | 67.05 | 4.70 | 20.92 | 55.57 | 69.13 | 69.79 | 70.40 | | Cluster node CPU, system, % | 14.56 | 2 | 17.10 | 2.80 | 14.95 | 2 | 6.27 | 13.76 | 16.50 | 16.77 | 17.10 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 123549.13 | 3884.75 | 134862.33 | 29204.13 | 132434.15 | 3884.75 | 8754.79 | 102866.69 | 134624.05 | 134852.52 | 134862.33 | | Read latency, ms/op | 1.16 | 0.95 | 4.39 | 0.76 | 0.97 | 0.95 | 0.95 | 0.95 | 1.24 | 4.23 | 4.39 | | Write throughput, ops/sec | 22654.85 | 1417.48 | 24294.21 | 5056.41 | 23961.38 | 1417.48 | 2391.46 | 21087.64 | 24267.98 | 24293.09 | 24294.21 | | Write latency, ms/op | 8.56 | 2.63 | 119.99 | 26.23 | 2.67 | 2.63 | 2.63 | 2.64 | 3.03 | 114.14 | 119.99 | | Load tester CPU, user, % | 40.35 | 28.60 | 99.40 | 23.61 | 30.45 | 28.60 | 28.80 | 29.90 | 97.66 | 99.27 | 99.40 | | Load tester CPU, system, % | 9.57 | 0 | 13.30 | 3.86 | 10.70 | 0 | 0 | 0.18 | 11.57 | 13.04 | 13.30 | | Cluster node CPU, user, % | 58.96 | 0.30 | 70.80 | 23.34 | 68.40 | 0.30 | 0.30 | 1.66 | 70.10 | 70.27 | 70.80 | | Cluster node CPU, system, % | 11.63 | 0.20 | 14.80 | 4.46 |"
},
{
"data": "| 0.20 | 0.30 | 0.79 | 13.92 | 14.30 | 14.80 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 220996.12 | 207691.29 | 225682.32 | 5156.42 | 222720.25 | 207691.29 | 207691.29 | 208433.69 | 225606.75 | 225682.32 | 225682.32 | | Read latency, ms/op | 1.16 | 1.13 | 1.23 | 0.03 | 1.15 | 1.13 | 1.13 | 1.13 | 1.23 | 1.23 | 1.23 | | Write throughput, ops/sec | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Write latency, ms/op | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Load tester CPU, user, % | 36.71 | 17.90 | 39.70 | 5.11 | 38.30 | 17.90 | 18.80 | 28.36 | 39.50 | 39.69 | 39.70 | | Load tester CPU, system, % | 11.19 | 3.70 | 14.20 | 1.94 | 11.40 | 3.70 | 4.26 | 9.68 | 13.12 | 14.12 | 14.20 | | Cluster node CPU, user, % | 55.44 | 16.90 | 61.80 | 9.68 | 58.20 | 16.90 | 22.30 | 45.12 | 60.16 | 60.50 | 61.80 | | Cluster node CPU, system, % | 5.60 | 4.70 | 10.80 | 1.51 | 5.10 | 4.70 | 4.90 | 4.90 | 8.02 | 10.08 | 10.80 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Read latency, ms/op | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Write throughput, ops/sec | 89430.02 | 85789.67 | 90412.81 | 1226.58 | 90144.44 | 85789.67 | 86145.25 | 87670.99 | 90376.10 | 90408.08 | 90412.81 | | Write latency, ms/op | 2.86 | 2.83 | 2.98 | 0.04 | 2.84 | 2.83 | 2.83 | 2.83 | 2.92 | 2.97 | 2.98 | | Load tester CPU, user, % | 4.39 | 3.60 | 5.80 | 0.47 | 4.30 | 3.60 | 3.64 | 3.80 | 5.06 | 5.70 | 5.80 | | Load tester CPU, system, % | 7.10 | 5.70 | 9.50 | 0.79 | 7.10 | 5.70 | 5.76 | 6.04 | 8.24 | 9.32 | 9.50 | | Cluster node CPU, user, % | 62.27 | 59.50 | 64.90 | 1.32 | 62.10 | 59.50 | 60.33 | 60.53 | 64.17 | 64.37 | 64.90 | | Cluster node CPU, system, % | 22.67 | 21.80 | 23.30 | 0.33 | 22.70 | 21.80 | 21.96 | 22.20 | 23.10 | 23.20 | 23.30 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 15490.67 | 15178.42 | 15879.06 | 150.11 | 15509.20 | 15178.42 | 15186.42 | 15241.98 | 15683.27 | 15853.84 |"
},
{
"data": "| | Read latency, ms/op | 1.03 | 1.01 | 1.05 | 0.01 | 1.03 | 1.01 | 1.01 | 1.02 | 1.05 | 1.05 | 1.05 | | Write throughput, ops/sec | 73459.64 | 72085.32 | 74302.45 | 700.60 | 73623.94 | 72085.32 | 72090.94 | 72114.34 | 74160.68 | 74286.31 | 74302.45 | | Write latency, ms/op | 2.61 | 2.58 | 2.66 | 0.02 | 2.61 | 2.58 | 2.58 | 2.59 | 2.66 | 2.66 | 2.66 | | Load tester CPU, user, % | 4.01 | 3.20 | 5.50 | 0.56 | 4 | 3.20 | 3.24 | 3.44 | 4.98 | 5.50 | 5.50 | | Load tester CPU, system, % | 6.62 | 5.30 | 9.40 | 0.90 | 6.60 | 5.30 | 5.32 | 5.44 | 7.84 | 9.20 | 9.40 | | Cluster node CPU, user, % | 61.77 | 58.70 | 63.70 | 0.95 | 61.95 | 58.70 | 60 | 60.36 | 62.87 | 63.23 | 63.70 | | Cluster node CPU, system, % | 22.57 | 21.90 | 23.70 | 0.36 | 22.60 | 21.90 | 21.90 | 22.10 | 23 | 23.10 | 23.70 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 56349.41 | 53466.73 | 58844.44 | 1524.23 | 56614.10 | 53466.73 | 53500.03 | 53806.68 | 58017.99 | 58690.58 | 58844.44 | | Read latency, ms/op | 1.14 | 1.09 | 1.20 | 0.03 | 1.13 | 1.09 | 1.09 | 1.10 | 1.19 | 1.20 | 1.20 | | Write throughput, ops/sec | 46738.69 | 43832.59 | 47962.94 | 1359.21 | 47365.48 | 43832.59 | 43834.24 | 43912.10 | 47854.37 | 47950.40 | 47962.94 | | Write latency, ms/op | 2.74 | 2.67 | 2.92 | 0.08 | 2.70 | 2.67 | 2.67 | 2.67 | 2.92 | 2.92 | 2.92 | | Load tester CPU, user, % | 5.35 | 4.70 | 7.80 | 0.63 | 5.20 | 4.70 | 4.72 | 4.84 | 5.98 | 7.46 | 7.80 | | Load tester CPU, system, % | 8.33 | 7.30 | 12.20 | 1.04 | 8.10 | 7.30 | 7.32 | 7.40 | 9.66 | 11.70 | 12.20 | | Cluster node CPU, user, % | 62.14 | 57.80 | 65.10 | 1.48 | 62.30 | 57.80 | 58.80 | 60.46 | 63.70 | 64.70 | 65.10 | | Cluster node CPU, system, % | 21.80 | 19.20 | 22.70 | 0.68 | 22 | 19.20 | 19.96 | 21.10 | 22.40 | 22.70 | 22.70 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 104896.16 | 98113.18 | 109967.65 | 2864.26 | 105133.04 | 98113.18 | 98334.15 | 100153.84 | 109173.23 | 109899.92 | 109967.65 | | Read latency, ms/op | 1.22 | 1.16 | 1.30 | 0.03 | 1.22 | 1.16 | 1.16 | 1.17 | 1.28 | 1.30 | 1.30 | | Write throughput, ops/sec | 22306.59 | 20811.96 | 23139.20 | 567.97 | 22449.34 | 20811.96 | 20891.52 | 21219.47 | 22947.24 | 23102.43 | 23139.20 | | Write latency, ms/op | 2.87 | 2.77 | 3.07 |"
},
{
"data": "| 2.85 | 2.77 | 2.77 | 2.79 | 3.02 | 3.06 | 3.07 | | Load tester CPU, user, % | 6.93 | 5.90 | 8 | 0.48 | 7 | 5.90 | 5.96 | 6.28 | 7.56 | 7.92 | 8 | | Load tester CPU, system, % | 10.85 | 9.40 | 12.20 | 0.79 | 10.80 | 9.40 | 9.44 | 9.68 | 12.10 | 12.18 | 12.20 | | Cluster node CPU, user, % | 62.65 | 58.80 | 64.60 | 1.26 | 62.65 | 58.80 | 59.90 | 60.93 | 64.17 | 64.33 | 64.60 | | Cluster node CPU, system, % | 21.51 | 18.80 | 22.50 | 0.68 | 21.70 | 18.80 | 19.83 | 20.83 | 22.17 | 22.40 | 22.50 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 170546.55 | 167257.28 | 172149.37 | 1038.35 | 170441.94 | 167257.28 | 167483.01 | 169558.69 | 171972.34 | 172139.37 | 172149.37 | | Read latency, ms/op | 1.50 | 1.49 | 1.53 | 0.01 | 1.50 | 1.49 | 1.49 | 1.49 | 1.51 | 1.53 | 1.53 | | Write throughput, ops/sec | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Write latency, ms/op | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Load tester CPU, user, % | 9.33 | 7.60 | 12.20 | 1.11 | 9.20 | 7.60 | 7.67 | 8.22 | 11.80 | 12.18 | 12.20 | | Load tester CPU, system, % | 15.01 | 12.30 | 19.60 | 1.66 | 14.80 | 12.30 | 12.42 | 13.25 | 18.49 | 19.55 | 19.60 | | Cluster node CPU, user, % | 57.14 | 54.30 | 59.50 | 1.09 | 56.95 | 54.30 | 55.80 | 55.97 | 58.53 | 59.06 | 59.50 | | Cluster node CPU, system, % | 20.77 | 19.60 | 21.80 | 0.50 | 20.80 | 19.60 | 19.90 | 20.10 | 21.50 | 21.66 | 21.80 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Read latency, ms/op | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Write throughput, ops/sec | 220214.77 | 197638.44 | 243149.54 | 16964.00 | 213016.68 | 197638.44 | 197638.44 | 201416.52 | 241034.01 | 243149.54 | 243149.54 | | Write latency, ms/op | 14.61 | 13.17 | 16.19 | 1.11 | 15.02 | 13.17 | 13.17 | 13.28 | 15.88 | 16.19 | 16.19 | | Load tester CPU, user, % | 5.85 | 5 | 12.20 | 1.55 | 5.50 | 5 | 5.00 | 5.10 | 6.63 | 11.92 | 12.20 | | Load tester CPU, system, % | 1.00 | 0.70 | 2.30 | 0.33 | 0.90 | 0.70 | 0.70 |"
},
{
"data": "| 1.28 | 2.25 | 2.30 | | Cluster node CPU, user, % | 63.32 | 14.60 | 70.20 | 11.04 | 65.50 | 14.60 | 24.98 | 61.64 | 68.64 | 70.08 | 70.20 | | Cluster node CPU, system, % | 11.20 | 3.80 | 12.60 | 1.68 | 11.60 | 3.80 | 5.56 | 11 | 12 | 12.18 | 12.60 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Read latency, ms/op | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Write throughput, ops/sec | 258833.91 | 243113.38 | 281044.22 | 12207.27 | 257488.80 | 243113.38 | 243113.38 | 243188.86 | 277027.70 | 281044.22 | 281044.22 | | Write latency, ms/op | 24.78 | 22.76 | 26.33 | 1.16 | 24.87 | 22.76 | 22.76 | 23.12 | 26.32 | 26.33 | 26.33 | | Load tester CPU, user, % | 6.70 | 2.10 | 21.60 | 3.67 | 5.95 | 2.10 | 2.28 | 5.60 | 8.14 | 20.93 | 21.60 | | Load tester CPU, system, % | 1.18 | 0.20 | 3 | 0.59 | 1 | 0.20 | 0.24 | 0.90 | 2.10 | 2.95 | 3 | | Cluster node CPU, user, % | 69.53 | 4 | 76.50 | 15.07 | 73.40 | 4 | 15.24 | 66.22 | 75.92 | 76.16 | 76.50 | | Cluster node CPU, system, % | 9.49 | 1.80 | 10.60 | 1.73 | 9.80 | 1.80 | 3.54 | 9.40 | 10.36 | 10.40 | 10.60 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Read latency, ms/op | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Write throughput, ops/sec | 344754.85 | 299067.10 | 370454.26 | 17581.51 | 347958.79 | 299067.10 | 299692.18 | 312068.43 | 361219.99 | 370009.54 | 370454.26 | | Write latency, ms/op | 139.49 | 130.32 | 159.92 | 7.65 | 136.68 | 130.32 | 130.46 | 133.17 | 154.40 | 159.66 | 159.92 | | Load tester CPU, user, % | 4.25 | 3.10 | 8.70 | 1.27 | 3.80 | 3.10 | 3.11 | 3.24 | 5.90 | 8.43 | 8.70 | | Load tester CPU, system, % | 0.86 | 0.40 | 2.20 | 0.51 | 0.60 | 0.40 | 0.40 | 0.42 | 1.88 | 2.18 | 2.20 | | Cluster node CPU, user, % | 46.94 | 26.60 | 78.90 | 12.86 | 42.35 | 26.60 | 31.04 | 32.75 | 63.29 | 74.66 | 78.90 | | Cluster node CPU, system, % | 2.67 | 1.80 | 3.50 | 0.48 | 2.60 | 1.80 | 1.94 | 2.07 | 3.40 | 3.46 |"
},
{
"data": "| +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 14561.06 | 13726.19 | 14910.57 | 435.82 | 14806.58 | 13726.19 | 13726.19 | 13816.28 | 14900.82 | 14910.57 | 14910.57 | | Read latency, ms/op | 6.60 | 6.44 | 7 | 0.20 | 6.49 | 6.44 | 6.44 | 6.44 | 6.96 | 7 | 7 | | Write throughput, ops/sec | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Write latency, ms/op | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Load tester CPU, user, % | 7.99 | 6.40 | 15.10 | 1.74 | 7.70 | 6.40 | 6.42 | 6.84 | 8.19 | 14.75 | 15.10 | | Load tester CPU, system, % | 3.35 | 2.30 | 4.10 | 0.40 | 3.40 | 2.30 | 2.31 | 2.56 | 3.79 | 4.08 | 4.10 | | Cluster node CPU, user, % | 57.02 | 9.80 | 95.30 | 30.74 | 62.80 | 9.80 | 18.91 | 19.31 | 95 | 95.10 | 95.30 | | Cluster node CPU, system, % | 1.02 | 0.70 | 1.60 | 0.22 | 0.90 | 0.70 | 0.80 | 0.81 | 1.40 | 1.50 | 1.60 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 4910.79 | 4487.06 | 5724.97 | 307.62 | 4855.65 | 4487.06 | 4488.36 | 4514.09 | 5286.71 | 5703.26 | 5724.97 | | Read latency, ms/op | 13.08 | 11.17 | 14.29 | 0.79 | 13.20 | 11.17 | 11.22 | 12.10 | 14.24 | 14.29 | 14.29 | | Write throughput, ops/sec | 268379.38 | 249472.94 | 283363.10 | 8313.74 | 268873 | 249472.94 | 249787.86 | 256071.50 | 278128.69 | 283103.38 | 283363.10 | | Write latency, ms/op | 59.67 | 56.13 | 64.04 | 1.92 | 59.38 | 56.13 | 56.20 | 57.66 | 62.62 | 63.97 | 64.04 | | Load tester CPU, user, % | 5.42 | 4.20 | 19.90 | 3.35 | 4.60 | 4.20 | 4.21 | 4.30 | 5.92 | 18.51 | 19.90 | | Load tester CPU, system, % | 1.78 | 1.40 | 2.80 | 0.41 | 1.60 | 1.40 | 1.40 | 1.40 | 2.68 | 2.80 | 2.80 | | Cluster node CPU, user, % | 40.14 | 9.60 | 76.50 | 24.94 | 34.70 | 9.60 | 10.51 | 11.08 | 74.43 | 75.50 | 76.50 | | Cluster node CPU, system, % | 2.63 | 1.20 | 4.30 | 1.17 | 2.70 | 1.20 | 1.20 | 1.30 | 4.10 | 4.17 | 4.30 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 2273.71 | 1585.59 | 2900.08 | 401.17 | 2265.44 | 1585.59 | 1585.99 | 1613.90 | 2811.20 | 2896.12 | 2900.08 | | Read latency, ms/op | 14.57 | 11.12 | 20.39 | 2.83 |"
},
{
"data": "| 11.12 | 11.15 | 11.64 | 19.96 | 20.37 | 20.39 | | Write throughput, ops/sec | 200045.56 | 178478.12 | 223743.09 | 12882.33 | 197878.04 | 178478.12 | 178528.14 | 180278.61 | 222550.86 | 223689.46 | 223743.09 | | Write latency, ms/op | 160.28 | 141.43 | 176.43 | 9.91 | 161.10 | 141.43 | 141.58 | 144.53 | 174.22 | 176.33 | 176.43 | | Load tester CPU, user, % | 3.87 | 3 | 16.20 | 2.84 | 3.20 | 3 | 3 | 3 | 3.90 | 14.97 | 16.20 | | Load tester CPU, system, % | 0.91 | 0.60 | 1.80 | 0.27 | 0.80 | 0.60 | 0.61 | 0.70 | 1.30 | 1.75 | 1.80 | | Cluster node CPU, user, % | 31.47 | 6.60 | 67.70 | 20.24 | 25.20 | 6.60 | 6.94 | 7.84 | 59.60 | 62.14 | 67.70 | | Cluster node CPU, system, % | 1.97 | 0.90 | 3.60 | 0.82 | 1.90 | 0.90 | 0.90 | 1 | 3.10 | 3.17 | 3.60 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Read latency, ms/op | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Write throughput, ops/sec | 1216.71 | 1121.88 | 1285.67 | 42.35 | 1214.46 | 1121.88 | 1121.88 | 1154.52 | 1282.95 | 1285.67 | 1285.67 | | Write latency, ms/op | 9.87 | 9.33 | 10.69 | 0.35 | 9.87 | 9.33 | 9.33 | 9.35 | 10.39 | 10.69 | 10.69 | | Load tester CPU, user, % | 3.83 | 3 | 9 | 1.47 | 3.30 | 3 | 3 | 3 | 6.03 | 9 | 9 | | Load tester CPU, system, % | 0.97 | 0.60 | 1.20 | 0.18 | 0.90 | 0.60 | 0.60 | 0.78 | 1.20 | 1.20 | 1.20 | | Cluster node CPU, user, % | 34.25 | 12.30 | 41.70 | 5.85 | 35.90 | 12.30 | 12.50 | 31.10 | 38.30 | 40.21 | 41.70 | | Cluster node CPU, system, % | 3.66 | 1.60 | 4.70 | 0.62 | 3.80 | 1.60 | 1.60 | 3.10 | 4.30 | 4.41 | 4.70 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 1451.56 | 1414.62 | 1504.25 | 22.64 | 1451.25 | 1414.62 | 1414.62 | 1417.68 | 1489.99 | 1504.25 | 1504.25 | | Read latency, ms/op | 2.75 | 2.66 | 2.83 | 0.04 | 2.75 | 2.66 | 2.66 | 2.68 | 2.82 | 2.83 | 2.83 | | Write throughput, ops/sec | 345.29 | 329.36 | 375.76 | 11.44 | 344.16 | 329.36 | 329.36 | 332.08 | 369.04 | 375.76 | 375.76 | | Write latency, ms/op | 11.59 | 10.64 | 12.13 | 0.37 | 11.61 | 10.64 | 10.64 | 10.84 | 12.05 | 12.13 |"
},
{
"data": "| | Load tester CPU, user, % | 3.46 | 2.80 | 6.80 | 1.06 | 3 | 2.80 | 2.80 | 2.80 | 5.90 | 6.80 | 6.80 | | Load tester CPU, system, % | 1.04 | 0.50 | 1.20 | 0.19 | 1.10 | 0.50 | 0.50 | 0.60 | 1.20 | 1.20 | 1.20 | | Cluster node CPU, user, % | 16.92 | 2.10 | 19.40 | 4.13 | 18.35 | 2.10 | 2.96 | 9.02 | 19.10 | 19.30 | 19.40 | | Cluster node CPU, system, % | 1.67 | 0.40 | 2 | 0.35 | 1.80 | 0.40 | 0.52 | 1.06 | 1.90 | 1.99 | 2 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Read latency, ms/op | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Write throughput, ops/sec | 536777.80 | 513708.71 | 568294.88 | 19680.67 | 527968.16 | 513708.71 | 513722.82 | 514793.82 | 566214.62 | 568131.58 | 568294.88 | | Write latency, ms/op | 21.54 | 20.32 | 22.54 | 0.80 | 21.88 | 20.32 | 20.33 | 20.38 | 22.43 | 22.53 | 22.54 | | Load tester CPU, user, % | 8.45 | 7.90 | 9.10 | 0.33 | 8.40 | 7.90 | 7.90 | 7.93 | 8.87 | 9.07 | 9.10 | | Load tester CPU, system, % | 0.64 | 0.50 | 0.70 | 0.07 | 0.60 | 0.50 | 0.50 | 0.53 | 0.70 | 0.70 | 0.70 | | Cluster node CPU, user, % | 69.64 | 65 | 73.90 | 2.17 | 70 | 65 | 66 | 66.40 | 72.40 | 73.10 | 73.90 | | Cluster node CPU, system, % | 8.87 | 8.30 | 9.80 | 0.35 | 8.90 | 8.30 | 8.40 | 8.50 | 9.30 | 9.65 | 9.80 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 88680.28 | 76284.49 | 100983.10 | 7774.40 | 88686.90 | 76284.49 | 76507.27 | 77895.22 | 99164.43 | 100773.29 | 100983.10 | | Read latency, ms/op | 16.30 | 14.13 | 18.97 | 1.51 | 16.16 | 14.13 | 14.16 | 14.39 | 18.50 | 18.90 | 18.97 | | Write throughput, ops/sec | 337351.42 | 277549.39 | 400233.68 | 37768.04 | 340999.45 | 277549.39 | 279305.75 | 290584.38 | 393525.19 | 399303.75 | 400233.68 | | Write latency, ms/op | 20.13 | 16.63 | 24.36 | 2.32 | 19.63 | 16.63 | 16.67 | 16.92 | 23.18 | 24.20 | 24.36 | | Load tester CPU, user, % | 6.39 | 5.30 | 7.50 | 0.68 | 6.30 | 5.30 | 5.33 | 5.53 | 7.40 | 7.48 | 7.50 | | Load tester CPU, system, % | 0.45 | 0.30 | 0.60 | 0.07 | 0.45 | 0.30 | 0.32 | 0.40 | 0.50 | 0.58 | 0.60 | | Cluster node CPU, user, % |"
},
{
"data": "| 57.70 | 72.40 | 4.16 | 67 | 57.70 | 58.25 | 58.60 | 70.30 | 71.35 | 72.40 | | Cluster node CPU, system, % | 10.55 | 8.40 | 12.10 | 0.91 | 10.60 | 8.40 | 9 | 9.20 | 11.90 | 12 | 12.10 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 139069.86 | 122478.02 | 157470.97 | 12273.46 | 137774.10 | 122478.02 | 122598.17 | 123398.09 | 155200.29 | 157215.71 | 157470.97 | | Read latency, ms/op | 20.90 | 18.33 | 23.69 | 1.90 | 20.91 | 18.33 | 18.35 | 18.51 | 23.47 | 23.66 | 23.69 | | Write throughput, ops/sec | 223440.40 | 171168.66 | 290643.30 | 41781.37 | 210962.78 | 171168.66 | 171783.83 | 176739.34 | 289585.58 | 290583.74 | 290643.30 | | Write latency, ms/op | 22.23 | 16.35 | 28.25 | 4.15 | 22.77 | 16.35 | 16.36 | 16.45 | 27.45 | 28.16 | 28.25 | | Load tester CPU, user, % | 5.53 | 4.60 | 6.60 | 0.68 | 5.55 | 4.60 | 4.60 | 4.60 | 6.47 | 6.58 | 6.60 | | Load tester CPU, system, % | 0.43 | 0.30 | 0.60 | 0.07 | 0.40 | 0.30 | 0.30 | 0.33 | 0.50 | 0.58 | 0.60 | | Cluster node CPU, user, % | 64.86 | 55.40 | 76.70 | 6.50 | 62.50 | 55.40 | 57.15 | 58.10 | 74.60 | 75.10 | 76.70 | | Cluster node CPU, system, % | 10.77 | 8.20 | 13.80 | 1.81 | 10.60 | 8.20 | 8.45 | 8.50 | 13.50 | 13.65 | 13.80 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 293671.22 | 277047.28 | 317859.09 | 8978.75 | 292650.92 | 277047.28 | 277812.89 | 282720.35 | 307018.98 | 316418.04 | 317859.09 | | Read latency, ms/op | 19.70 | 18.10 | 20.92 | 0.62 | 19.77 | 18.10 | 18.19 | 18.77 | 20.48 | 20.86 | 20.92 | | Write throughput, ops/sec | 135653.43 | 112878.81 | 146271.57 | 9612.08 | 139723.92 | 112878.81 | 113599.10 | 118131.42 | 143233.46 | 145851.78 | 146271.57 | | Write latency, ms/op | 13.96 | 12.88 | 16.84 | 1.15 | 13.43 | 12.88 | 12.91 | 13.11 | 16.08 | 16.73 | 16.84 | | Load tester CPU, user, % | 6.34 | 5.30 | 11.20 | 1.26 | 5.90 | 5.30 | 5.33 | 5.53 | 7.74 | 10.69 | 11.20 | | Load tester CPU, system, % | 0.86 | 0.50 | 3.20 | 0.78 | 0.50 | 0.50 | 0.50 | 0.50 | 2.54 | 3.11 | 3.20 | | Cluster node CPU, user, % | 59.95 | 50.40 | 71.30 | 6.47 | 59.10 | 50.40 | 51.65 | 52.30 | 68.40 | 69.60 | 71.30 | | Cluster node CPU, system, % | 13.87 | 11.30 | 15.70 | 1.07 | 13.90 | 11.30 | 11.50 | 12.70 | 15.20 | 15.35 | 15.70 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max |"
},
{
"data": "| median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 538778.59 | 527217.65 | 548617.03 | 6743.17 | 539364.97 | 527217.65 | 527247.59 | 527915.86 | 548247.66 | 548602.09 | 548617.03 | | Read latency, ms/op | 14.09 | 13.79 | 14.43 | 0.19 | 14.04 | 13.79 | 13.79 | 13.86 | 14.41 | 14.43 | 14.43 | | Write throughput, ops/sec | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Write latency, ms/op | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Load tester CPU, user, % | 6.75 | 6.10 | 7.30 | 0.34 | 6.80 | 6.10 | 6.10 | 6.16 | 7.20 | 7.29 | 7.30 | | Load tester CPU, system, % | 0.53 | 0.40 | 1 | 0.12 | 0.50 | 0.40 | 0.40 | 0.42 | 0.68 | 0.97 | 1 | | Cluster node CPU, user, % | 57.45 | 53 | 62.40 | 2.95 | 57.20 | 53 | 53.32 | 53.64 | 61.76 | 62.24 | 62.40 | | Cluster node CPU, system, % | 12.71 | 11.30 | 13.50 | 0.45 | 12.80 | 11.30 | 11.84 | 12.20 | 13.20 | 13.28 | 13.50 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Read latency, ms/op | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Write throughput, ops/sec | 17383.24 | 16800.52 | 17777.96 | 320.18 | 17496.59 | 16800.52 | 16810.44 | 16871.82 | 17766.83 | 17777.71 | 17777.96 | | Write latency, ms/op | 1.38 | 1.35 | 1.43 | 0.03 | 1.37 | 1.35 | 1.35 | 1.35 | 1.42 | 1.43 | 1.43 | | Load tester CPU, user, % | 4.74 | 3.40 | 5.30 | 0.36 | 4.80 | 3.40 | 3.55 | 4.46 | 5.07 | 5.27 | 5.30 | | Load tester CPU, system, % | 2.76 | 1.70 | 3.10 | 0.33 | 2.85 | 1.70 | 1.73 | 2.14 | 2.97 | 3.08 | 3.10 | | Cluster node CPU, user, % | 37.85 | 33.40 | 44.90 | 2.91 | 37.70 | 33.40 | 33.85 | 34.30 | 41.70 | 43.40 | 44.90 | | Cluster node CPU, system, % | 14.39 | 13.20 | 16 | 0.73 | 14.20 | 13.20 | 13.40 | 13.60 | 15.50 | 15.65 | 16 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 3394.39 | 3304.24 | 3458.36 | 35.95 | 3400.31 | 3304.24 | 3309.93 | 3343.07 | 3445.23 | 3457.64 | 3458.36 | | Read latency, ms/op | 1.77 | 1.73 | 1.81 | 0.02 | 1.76 | 1.73 | 1.73 | 1.74 | 1.79 | 1.81 |"
},
{
"data": "| | Write throughput, ops/sec | 8240.29 | 7400.23 | 8485.32 | 285.45 | 8359.17 | 7400.23 | 7439.52 | 7710.05 | 8449.09 | 8481.05 | 8485.32 | | Write latency, ms/op | 1.21 | 1.18 | 1.35 | 0.04 | 1.19 | 1.18 | 1.18 | 1.18 | 1.29 | 1.34 | 1.35 | | Load tester CPU, user, % | 27.50 | 26.30 | 27.80 | 0.43 | 27.70 | 26.30 | 26.33 | 26.62 | 27.80 | 27.80 | 27.80 | | Load tester CPU, system, % | 1.91 | 1.30 | 2 | 0.16 | 2 | 1.30 | 1.38 | 1.80 | 2 | 2 | 2 | | Cluster node CPU, user, % | 21.95 | 18.50 | 26.40 | 1.61 | 21.90 | 18.50 | 19.40 | 19.80 | 24.30 | 25.30 | 26.40 | | Cluster node CPU, system, % | 9.06 | 7.90 | 10.10 | 0.49 | 9.10 | 7.90 | 8.15 | 8.40 | 9.70 | 9.80 | 10.10 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 6290.91 | 5992.60 | 6401.93 | 90.03 | 6303.39 | 5992.60 | 6011.30 | 6141.39 | 6391.70 | 6400.45 | 6401.93 | | Read latency, ms/op | 1.91 | 1.87 | 2 | 0.03 | 1.90 | 1.87 | 1.87 | 1.88 | 1.95 | 1.99 | 2 | | Write throughput, ops/sec | 2500.82 | 2314.97 | 2612.42 | 71.67 | 2506.17 | 2314.97 | 2328.53 | 2409.11 | 2597.52 | 2610.95 | 2612.42 | | Write latency, ms/op | 1.60 | 1.53 | 1.73 | 0.05 | 1.60 | 1.53 | 1.53 | 1.54 | 1.66 | 1.72 | 1.73 | | Load tester CPU, user, % | 48.03 | 47 | 51.90 | 0.93 | 47.95 | 47 | 47.05 | 47.30 | 48.20 | 51.34 | 51.90 | | Load tester CPU, system, % | 1.03 | 0.90 | 1.30 | 0.09 | 1 | 0.90 | 0.90 | 0.93 | 1.17 | 1.28 | 1.30 | | Cluster node CPU, user, % | 13.66 | 11.70 | 19.50 | 2.06 | 12.90 | 11.70 | 12 | 12.10 | 17.90 | 18.95 | 19.50 | | Cluster node CPU, system, % | 6.55 | 4.80 | 7.10 | 0.40 | 6.60 | 4.80 | 5.55 | 6.20 | 6.90 | 7 | 7.10 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 8212.33 | 7884.06 | 8461.67 | 131.35 | 8234.57 | 7884.06 | 7888.47 | 7981.12 | 8377.70 | 8457.61 | 8461.67 | | Read latency, ms/op | 1.95 | 1.89 | 2.03 | 0.03 | 1.94 | 1.89 | 1.89 | 1.91 | 2.01 | 2.03 | 2.03 | | Write throughput, ops/sec | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Write latency, ms/op | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Load tester CPU, user, % | 62.87 | 59.70 | 68.50 |"
},
{
"data": "| 62.60 | 59.70 | 59.78 | 60.44 | 66.89 | 68.38 | 68.50 | | Load tester CPU, system, % | 0.96 | 0.80 | 1.90 | 0.22 | 0.90 | 0.80 | 0.80 | 0.83 | 1 | 1.76 | 1.90 | | Cluster node CPU, user, % | 6.33 | 4.80 | 9.40 | 0.78 | 6.40 | 4.80 | 5.44 | 5.50 | 6.83 | 7.94 | 9.40 | | Cluster node CPU, system, % | 2.57 | 1.50 | 3 | 0.29 | 2.65 | 1.50 | 1.97 | 2.27 | 2.80 | 2.90 | 3 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Read latency, ms/op | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Write throughput, ops/sec | 5982.43 | 5852.70 | 6110.22 | 63.95 | 5999.94 | 5852.70 | 5852.70 | 5866.33 | 6070.63 | 6110.22 | 6110.22 | | Write latency, ms/op | 10.70 | 10.47 | 10.94 | 0.12 | 10.66 | 10.47 | 10.47 | 10.54 | 10.91 | 10.94 | 10.94 | | Load tester CPU, user, % | 4.74 | 2.60 | 12.60 | 2.00 | 4.30 | 2.60 | 2.60 | 4 | 5.80 | 12.60 | 12.60 | | Load tester CPU, system, % | 2.70 | 0.40 | 3 | 0.66 | 2.90 | 0.40 | 0.40 | 1.40 | 3 | 3 | 3 | | Cluster node CPU, user, % | 50.09 | 4.80 | 56.10 | 12.44 | 54 | 4.80 | 5.92 | 26.49 | 55.09 | 55.29 | 56.10 | | Cluster node CPU, system, % | 22.47 | 3.50 | 24.60 | 5.01 | 24.10 | 3.50 | 4.59 | 13.86 | 24.40 | 24.50 | 24.60 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 53908.61 | 52852.11 | 55146.34 | 648.27 | 53892.85 | 52852.11 | 52852.11 | 52975.66 | 54827.95 | 55146.34 | 55146.34 | | Read latency, ms/op | 1.19 | 1.16 | 1.21 | 0.01 | 1.19 | 1.16 | 1.16 | 1.17 | 1.21 | 1.21 | 1.21 | | Write throughput, ops/sec | 3462.76 | 3362.93 | 3520.80 | 40.78 | 3472.81 | 3362.93 | 3362.93 | 3388.25 | 3520.78 | 3520.80 | 3520.80 | | Write latency, ms/op | 9.24 | 9.09 | 9.52 | 0.11 | 9.21 | 9.09 | 9.09 | 9.09 | 9.45 | 9.52 | 9.52 | | Load tester CPU, user, % | 21.51 | 15.20 | 89.20 | 16.48 | 17.70 | 15.20 | 15.20 | 16.40 | 24.10 | 89.20 | 89.20 | | Load tester CPU, system, % | 8.37 | 0.20 | 10.10 | 2.78 | 9.50 | 0.20 | 0.20 | 1.30 | 9.70 | 10.10 | 10.10 | | Cluster node CPU, user, % | 56.54 | 1.20 | 63.80 | 17.15 | 62.20 | 1.20 | 1.73 | 15.48 |"
},
{
"data": "| 63.59 | 63.80 | | Cluster node CPU, system, % | 17.67 | 1 | 20.10 | 4.94 | 19.20 | 1 | 1.39 | 6.65 | 19.60 | 19.70 | 20.10 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 107411.60 | 43870.55 | 114597.73 | 16502.85 | 110726.21 | 43870.55 | 43870.55 | 95455.43 | 114023.56 | 114597.73 | 114597.73 | | Read latency, ms/op | 1.19 | 1.12 | 1.80 | 0.16 | 1.16 | 1.12 | 1.12 | 1.12 | 1.30 | 1.80 | 1.80 | | Write throughput, ops/sec | 1760.16 | 1020.77 | 1833.20 | 191.71 | 1815.01 | 1020.77 | 1020.77 | 1606.50 | 1825.36 | 1833.20 | 1833.20 | | Write latency, ms/op | 11.10 | 8.73 | 46.99 | 9.25 | 8.81 | 8.73 | 8.73 | 8.76 | 16.71 | 46.99 | 46.99 | | Load tester CPU, user, % | 32.91 | 18 | 99.80 | 24.77 | 23.40 | 18 | 18 | 22.60 | 97.70 | 99.80 | 99.80 | | Load tester CPU, system, % | 7.94 | 0 | 10.70 | 3.57 | 9.40 | 0 | 0 | 0 | 9.80 | 10.70 | 10.70 | | Cluster node CPU, user, % | 55.86 | 0.50 | 69.30 | 23.90 | 66.50 | 0.50 | 0.60 | 0.64 | 68.49 | 68.60 | 69.30 | | Cluster node CPU, system, % | 12.35 | 0.40 | 15.50 | 4.99 | 14.40 | 0.40 | 0.50 | 0.64 | 15.09 | 15.20 | 15.50 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 200839.11 | 173423.77 | 209657.32 | 12170.03 | 207288.33 | 173423.77 | 173423.77 | 175202.13 | 209236.59 | 209657.32 | 209657.32 | | Read latency, ms/op | 1.28 | 1.22 | 1.48 | 0.09 | 1.23 | 1.22 | 1.22 | 1.22 | 1.46 | 1.48 | 1.48 | | Write throughput, ops/sec | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Write latency, ms/op | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Load tester CPU, user, % | 24.77 | 1 | 37.10 | 15.51 | 34.70 | 1 | 1.45 | 2.62 | 36.98 | 37.10 | 37.10 | | Load tester CPU, system, % | 8.05 | 0.50 | 13.20 | 4.76 | 11.10 | 0.50 | 0.53 | 0.72 | 11.74 | 12.78 | 13.20 | | Cluster node CPU, user, % | 53.07 | 5.90 | 71.20 | 19.62 | 65.10 | 5.90 | 10.30 | 14.60 | 68.44 | 69.52 | 71.20 | | Cluster node CPU, system, % | 9.04 | 3.90 | 22.10 | 6.26 | 5.60 | 3.90 | 4.68 | 4.86 | 21.20 | 21.60 | 22.10 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max |"
},
{
"data": "| median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Read latency, ms/op | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Write throughput, ops/sec | 5468.89 | 5356.85 | 5579.73 | 60.56 | 5467.99 | 5356.85 | 5356.94 | 5362.07 | 5564.07 | 5579.10 | 5579.73 | | Write latency, ms/op | 5.85 | 5.73 | 5.97 | 0.06 | 5.85 | 5.73 | 5.73 | 5.75 | 5.97 | 5.97 | 5.97 | | Load tester CPU, user, % | 4.45 | 3.60 | 14 | 2.23 | 3.90 | 3.60 | 3.61 | 3.70 | 5.36 | 13.15 | 14 | | Load tester CPU, system, % | 2.61 | 1.60 | 2.90 | 0.25 | 2.70 | 1.60 | 1.69 | 2.50 | 2.86 | 2.90 | 2.90 | | Cluster node CPU, user, % | 49.87 | 23.90 | 53.30 | 5.70 | 50.95 | 23.90 | 33.46 | 49.41 | 52.46 | 52.86 | 53.30 | | Cluster node CPU, system, % | 22.79 | 13.70 | 24 | 1.98 | 23.20 | 13.70 | 17.14 | 22.57 | 23.80 | 23.86 | 24 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 37315.38 | 34253.91 | 38845.88 | 1289.69 | 37851.37 | 34253.91 | 34253.91 | 34409.04 | 38384.45 | 38845.88 | 38845.88 | | Read latency, ms/op | 0.86 | 0.82 | 0.93 | 0.03 | 0.84 | 0.82 | 0.82 | 0.83 | 0.93 | 0.93 | 0.93 | | Write throughput, ops/sec | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Write latency, ms/op | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | Load tester CPU, user, % | 12.89 | 1.40 | 16.50 | 5.24 | 15.45 | 1.40 | 1.79 | 2.57 | 16.30 | 16.43 | 16.50 | | Load tester CPU, system, % | 6.48 | 0.30 | 9.40 | 2.91 | 8.05 | 0.30 | 0.76 | 1.74 | 8.79 | 9.26 | 9.40 | | Cluster node CPU, user, % | 36.38 | 3.10 | 42.50 | 7.23 | 38.20 | 3.10 | 26.59 | 32.27 | 41.12 | 41.61 | 42.50 | | Cluster node CPU, system, % | 11.15 | 2.10 | 22.90 | 5.02 | 9.10 | 2.10 | 7.35 | 8.19 | 20.31 | 22.12 | 22.90 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 12828.18 | 12107.85 | 13125.62 | 217.70 | 12841.07 | 12107.85 | 12132.56 | 12612.57 | 13046.27 | 13121.70 | 13125.62 | | Read latency, ms/op | 0.93 | 0.91 | 0.99 | 0.02 | 0.93 | 0.91 | 0.91 | 0.92 | 0.95 | 0.99 | 0.99 | | Write throughput, ops/sec | 1404.07 | 1365.42 | 1447.63 | 19.43 | 1403.84 |"
},
{
"data": "| 1366.03 | 1378.51 | 1429.19 | 1446.76 | 1447.63 | | Write latency, ms/op | 2.85 | 2.76 | 2.93 | 0.04 | 2.84 | 2.76 | 2.76 | 2.80 | 2.90 | 2.93 | 2.93 | | Load tester CPU, user, % | 9.08 | 8.10 | 14.50 | 1.45 | 8.70 | 8.10 | 8.11 | 8.20 | 11.02 | 14.16 | 14.50 | | Load tester CPU, system, % | 4.70 | 1 | 5.30 | 0.87 | 4.90 | 1 | 1.35 | 4.52 | 5.10 | 5.28 | 5.30 | | Cluster node CPU, user, % | 32.91 | 3.40 | 37.20 | 6.46 | 34.80 | 3.40 | 14.05 | 30.16 | 36.13 | 36.27 | 37.20 | | Cluster node CPU, system, % | 12.77 | 2 | 15 | 2.34 | 13.30 | 2 | 6.05 | 12 | 13.93 | 14.27 | 15 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+--+--+--+--+--+--+--+--+--+--+--+ | Metric | mean | min | max | std.dev | median | 1% | 5% | 10% | 90% | 95% | 99% | | Read throughput, ops/sec | 3251.27 | 3171.84 | 3346.79 | 47.44 | 3258.80 | 3171.84 | 3172.11 | 3177.85 | 3297.35 | 3344.34 | 3346.79 | | Read latency, ms/op | 1.23 | 1.19 | 1.26 | 0.02 | 1.23 | 1.19 | 1.19 | 1.21 | 1.26 | 1.26 | 1.26 | | Write throughput, ops/sec | 3217.72 | 3147.98 | 3287.21 | 42.89 | 3214.68 | 3147.98 | 3148.66 | 3162.06 | 3271.76 | 3286.44 | 3287.21 | | Write latency, ms/op | 3.73 | 3.65 | 3.81 | 0.05 | 3.74 | 3.65 | 3.65 | 3.67 | 3.79 | 3.81 | 3.81 | | Load tester CPU, user, % | 4.97 | 4.20 | 10.70 | 1.59 | 4.40 | 4.20 | 4.20 | 4.20 | 7.72 | 10.44 | 10.70 | | Load tester CPU, system, % | 2.82 | 1.20 | 3 | 0.38 | 2.90 | 1.20 | 1.36 | 2.80 | 3 | 3 | 3 | | Cluster node CPU, user, % | 40.58 | 9.70 | 44.50 | 6.83 | 42.05 | 9.70 | 20.58 | 39.77 | 43.53 | 44.10 | 44.50 | | Cluster node CPU, system, % | 19.47 | 5.60 | 21.10 | 3.01 | 20.10 | 5.60 | 10.60 | 19.34 | 20.83 | 21 | 21.10 | +--+--+--+--+--+--+--+--+--+--+--+--+ ``` ``` +--+-+--+--+--+ | Workload | Throughput (ops/sec) | Op Type | Avg Latency (us) | 99th% Latency (us) | | workloada | 107119.99 | READ | 1284.16 | 3407 | | | | UPDATE | 3313.92 | 7351 | | | | | | | | workloadb | 120364.26 | READ | 1956.61 | 4875 | | | | UPDATE | 3730.82 | 9143 | | | | | | | | workloadc | 129284.46 | READ | 1902.59 | 4931 | | | | | | | | workloadd | 124609.33 | READ | 1872.5 | 4459 | | | | INSERT | 3940.2 | 11399 | | | | | | | | workloade | 14007.41 | INSERT | 16756.13 | 45951 | | | | SCAN | 17633.02 | 50303 | | | | | | | | workloadf | 72184.64 | READ | 1531.82 | 3999 | | | | READ-MODIFY-WRITE | 5286.64 | 12503 | | | | UPDATE | 3750.41 | 9623 | | | | | | | +--+-+--+--+--+ ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "03-tolerance.md",
"project_name": "TDengine",
"subcategory": "Database"
} | [
{
"data": "title: Fault Tolerance and Disaster Recovery description: This document describes how TDengine provides fault tolerance and disaster recovery. TDengine uses WAL, i.e. Write Ahead Log, to achieve fault tolerance and high reliability. When a data block is received by TDengine, the original data block is first written into WAL. The log in WAL will be deleted only after the data has been written into data files in the database. Data can be recovered from WAL in case the server is stopped abnormally for any reason and then restarted. There are 2 configuration parameters related to WAL: wal_level: Specifies the WAL level. 1 indicates that WAL is enabled but fsync is disabled. 2 indicates that WAL and fsync are both enabled. The default value is 1. walfsyncperiod: This parameter is only valid when wal_level is set to 2. It specifies the interval, in milliseconds, of invoking fsync. If set to 0, it means fsync is invoked immediately once WAL is written. To achieve absolutely no data loss, set wallevel to 2 and walfsyncperiod to 0. There is a performance penalty to the data ingestion rate. However, if the concurrent data insertion threads on the client side can reach a big enough number, for example 50, the data ingestion performance will be still good enough. Our verification shows that the drop is only 30% when walfsync_period is set to 3000 milliseconds. TDengine provides disaster recovery by using taosX to replicate data between two TDengine clusters which are deployed in two distant data centers. Assume there are two TDengine clusters, A and B, A is the source and B is the target, and A takes the workload of writing and querying. You can deploy `taosX` in the data center where cluster A resides in, `taosX` consumes the data written into cluster A and writes into cluster B. If the data center of cluster A is disrupted because of disaster, you can switch to cluster B to take the workload of data writing and querying, and deploy a `taosX` in the data center of cluster B to replicate data from cluster B to cluster A if cluster A has been recovered, or another cluster C if cluster A has not been recovered. You can use the data replication feature of `taosX` to build more complicated disaster recovery solution. taosX is only provided in TDengine enterprise edition, for more details please contact [email protected]."
}
] |
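The entry above describes `wal_level` and `wal_fsync_period` but does not show them in use. The sketch below is one possible way to apply those settings in TDengine 3.x, where WAL behavior can also be configured per database at creation time; the database names are placeholders and the exact set of supported options depends on the TDengine version, so treat this as an illustrative sketch rather than a definitive reference.

```sql
-- Hypothetical database names; the options follow the description above (this assumes
-- the 3.x CREATE DATABASE options WAL_LEVEL / WAL_FSYNC_PERIOD are available).
-- WAL_LEVEL 2 enables fsync for the WAL; WAL_FSYNC_PERIOD 0 fsyncs on every WAL write,
-- trading ingestion speed for zero data loss.
CREATE DATABASE power WAL_LEVEL 2 WAL_FSYNC_PERIOD 0;

-- The softer trade-off mentioned above: fsync at most once every 3000 milliseconds.
CREATE DATABASE power_relaxed WAL_LEVEL 2 WAL_FSYNC_PERIOD 3000;
```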
{
"category": "App Definition and Development",
"file_name": "3.13.1.md",
"project_name": "RabbitMQ",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "RabbitMQ `3.13.1` is a maintenance release in the `3.13.x` . Starting June 1st, 2024, community support for this series will only be provided to and those who hold a valid . Please refer to the upgrade section from the if upgrading from a version prior to 3.13.0. This release requires Erlang 26 and supports Erlang versions up to `26.2.x`. has more details on Erlang version requirements for RabbitMQ. As of 3.13.0, RabbitMQ requires Erlang 26. Nodes will fail to start on older Erlang releases. Users upgrading from 3.12.x (or older releases) on Erlang 25 to 3.13.x on Erlang 26 (both RabbitMQ and Erlang are upgraded at the same time) must consult the and first. Release notes can be found on GitHub at . Classic queue v2 message store compaction could fail behind under high enough load, significantly increasing node's disk space footprint. GitHub issues: , Improved quorum queue safety in mixed version clusters. GitHub issue: When Khepri was enabled and virtual host recovery failed, subsequent recovery attempts also failed. GitHub issue: Messages published without any headers set on them did not have a header property set on them. This change compared to 3.12.x was not intentional. GitHub issues: , Free disk space monitor on Windows ran into an exception if external call to `win32sysinfo.exe` timed out. GitHub issue: `channelmaxper_node` is a new per-node limit that allows to put a cap on the number of AMQP 0-9-1 channels that can be concurrently open by all clients connected to a node: ``` ini channelmaxper_node = 5000 ``` This is a guardrail mean to protect nodes from . Contributed by @illotum (AWS). GitHub issue: Avoids a Windows-specific stream log corruption that affected some deployments. GitHub issue: When a cannot be created because of a duplicate partition name, a more informative error message is now used. GitHub issue: `rabbitmq-plugins list --formatter=json --silent` will no longer emit any warnings when some of the plugins in the are missing. Contributed by @Ayanda-D. GitHub issue: Configuring a JWKS URL without specifying a CA certificate resulted in an exception with Erlang 26's TLS implementation. GitHub issue: Set default `sort` query parameter value for better compatibility with an external Prometheus scraper. Note that the is the recommended way of RabbitMQ using Prometheus-compatible tools. GitHub issue: When a tab (Connections, Queues and Streams, etc) is switched, a table configuration pane from the previously selected tab is now hidden. Contributed by @ackepenek. GitHub issue: `GET /api/queues/{vhost}/{name}` now supports `enablequeuetotals` as well as `disable_stats`. This combination of query parameters can be used to retrieve message counters while greatly reducing the number of metrics returned by the endpoints. Contributed by @aaron-seo (AWS). GitHub issue: Exchange federation now can be configured to use a custom queue type for their internal buffers. To use a quorum queue, set the `queue-type` federation policy key to `quorum`. GitHub issues: , `rabbitmqfederationrunninglinkcount` is a new metric provided via Prometheus. GitHub issue: `osiris` was updated to `khepri` was upgraded to `cowboy` was updated to To obtain source code of the entire distribution, please download the archive named `rabbitmq-server-3.13.1.tar.xz` instead of the source tarball produced by GitHub."
}
] |
{
"category": "App Definition and Development",
"file_name": "veepee.md",
"project_name": "Beam",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: \"Veepee\" icon: /images/logos/powered-by/veepee.png hasLink: \"https://fr.veepee.be/vexbridge/\" <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->"
}
] |
{
"category": "App Definition and Development",
"file_name": "20170215_system_jobs.md",
"project_name": "CockroachDB",
"subcategory": "Database"
} | [
{
"data": "Feature Name: System Jobs Status: completed Start Date: 2017-02-13 Authors: Nikhil Benesch RFC PR: [#13656] Cockroach Issue: [#12555] Add a system table to track the progress of backups, restores, and schema changes. When performing a schema change, only one bit of progress information is available: whether the change has completed, indicated by whether the query has returned. Similarly, when performing a backup or restore, status is only reported after the backup or restore has completed. Given that a full backup of a 2TB database takes on the order of several hours to complete, the lack of progress information is a serious pain point for users. Additionally, while each node runs a schema change daemon that can restart pending schema changes if the coordinating node dies, the same is not true for backups and restores. If the coordinating node for a backup or restore job dies, the job will abort, even if the individual workers were otherwise successful. This RFC proposes a new system table, `system.jobs`, that tracks the status of these long-running backup, restore, and schema change \"jobs.\" This table will directly expose the desired progress information via `SELECT` queries over the table, enable an API endpoint to expose the same progress information in the admin UI, and enable an internal daemon that periodically adopts and resumes all types of orphaned jobs. The table will also serve as a convenient place for schema changes to store checkpoint state; currently, checkpoints are stored on the affected table descriptor, which must be gossiped on every write. Adding a `system.jobs` table has been proposed and unsuccessfully implemented several times. [@vivekmenezes]'s initial attempt in [#7037] was abandoned since, at the time, there was no way to safely add a new system table to an existing cluster. Several months later, after [@a-robinson] and [@danhhz] built the necessary cluster migration framework, @a-robinson submitted a migration to add a `system.jobs` table in [#11722] as an example of using the new framework. His PR was rejected because the table's schema hadn't been fully considered. This RFC describes a revised `system.jobs` schema for us to thoroughly vet before we proceed with a final implementation. Within this RFC, the term \"job\" or \"long-running job\" refers only to backups, restores, and schema changes. Other types of long-running queries, like slow `SELECT` statements, are explicitly out of scope. A \"job-creating query,\" then, is any query of one of the following types: `ALTER TABLE`, which creates a schema change job `BACKUP`, which creates a backup job `RESTORE`, which creates a restore job To track the progress of these jobs, the following table will be injected into the `system` database using the [cluster migration framework]: ```sql CREATE TABLE system.jobs ( id INT DEFAULT unique_rowid() PRIMARY KEY, status STRING NOT NULL, created TIMESTAMP NOT NULL DEFAULT now(), payload BYTES, INDEX (status, created) ) ``` Each job is identified by a unique `id`, which is assigned when the job is created. Currently, this ID serves only to identify the job to the user, but future SQL commands to e.g. abort running jobs will need this ID to unambiguously specify the target"
},
{
"data": "The `status` column represents a state machine with `pending`, `running`, `succeeded`, and `failed` states. Jobs are created in the `pending` state when the job-creating query is accepted, move to the `running` state once work on the job has actually begun, then move to a final state of `succeeded` or `failed`. The `pending` state warrants additional explanation: it's used to track jobs that are enqueued but not currently performing work. Schema changes, for example, will sit in the `pending` state until all prior schema change jobs have completed. The `created` field, unsurprisingly, is set to the current timestamp at the time the record is created. The admin UI job status page is expected to display jobs ordered first by their status, then by their creation time. To make this query efficient, the table has a secondary index on `status, created`. We want to avoid future schema changes to `system.jobs` if at all possible. Every schema change requires a cluster migration, and every cluster migration introduces node startup time overhead, plus some risk and complexity. To that end, any field not required by an index is stashed in the `payload` column, which stores a protobuf that can be evolved per the standard protobuf forwards-compatibility support. The proposed message definition for the `payload` follows. ```protobuf message BackupJobPayload { // Intentionally unspecified. } message RestoreJobPayload { // Intentionally unspecified. } message SchemaChangeJobPayload { uint32 mutation_id = 1; repeated roachpb.Span resume_spans = 2; } message JobLease { uint32 node_id = 1; int64 expires = 2; } message JobPayload { string description = 1; string creator = 2; int64 started = 4; int64 finished = 5; int64 modified = 6; repeated uint32 descriptor_ids = 7; float fraction_completed = 8; string error = 9; oneof details { BackupJobPayload backup_details = 10; RestoreJobPayload restore_details = 11; SchemaChangeJobPayload schemachangedetails = 12; } JobLease lease = 13; } ``` The `description` field stores the text of the job-creating query for display in the UI. Schema changes will store the query verbatim; backups and restores, which may have sensitive cloud storage credentials specified in the query, will store a sanitized version of the query. The `creator` field records the user who launched the job. Next up are four fields to track the timing of the job. The `created` field tracks when the job record is created, the `started` field tracks when the job switches from `pending` to `running`, and the `finished` field tracks when the job switches from `running` to its final state of `succeeded` or `failed`. The `modified` field is updated whenever the job is updated and can be used to detect when a job has stalled. The repeated `descriptor_id` field stores the IDs of the databases or tables affected by the job. For backups and restores, the IDs of any tables targeted will have an entry. For schema migrations, the ID of the one database (`ALTER DATABASE...`) or table (`ALTER TABLE...`) under modification will be stored. Future long-running jobs which don't operate on databases or tables can simply leave this field empty. The `fraction_completed` field is periodically updated from 0.0 to 1.0 while the job is"
},
{
"data": "Jobs in the `succeeded` state will always have a `fraction_completed` of 1.0, while jobs in the `failed` state may have any `fraction_completed` value. This value is stored as a float instead of an integer to avoid needing to choose a fixed denominator for the fraction (e.g. 100 or 1000). The `error` field stores the reason for failure, if any. This is the same error message that is reported to the user through the normal query failure path, but is recorded in the table for posterity. The type of job can be determined by reflection on the `details` oneof, which stores additional details relevant to a specific job type. The `SchemaJobPayload`, for example, stores the ID of the underlying mutation and checkpoint status to resume an in-progress backfill if the original coordinator dies. `BackupJobPayload` and `RestoreJobPayload` are currently empty and exist only to allow reflection on the `details` oneof. Finally, the `lease` field tracks whether the job has a live coordinator. The field stores the node ID of the current coordinator in `lease.node_id` and when their lease expires in `lease.expires`. Each node will run a daemon to scan for running jobs whose leases have expired and attempt to become the new coordinator. (See the next section for a proposed lease acquisition scheme.) Schema changes have an existing daemon that does exactly this, but the daemon currently stores the lease information on the table descriptor. The daemon will be adjusted to store lease information here instead and extended to support backup and restore jobs. Several alternative divisions of fields between the schema and the protobuf were considered; see for more discussion. To help evaluate the schema design, a selection of SQL queries expected to be run against the `system.jobs` table follows. Most of these queries will be executed by the database internals, though some are expected to be run manually by users monitoring job progress. To create a new job: ```sql -- {} is imaginary syntax for a protobuf literal. INSERT INTO system.jobs (status, payload) VALUES ('pending', { description = 'BACKUP foodb TO barstorage', creator = 'root', modified = now(), descriptor_ids = [50], backup_details = {} }) RETURNING id ``` To mark a job as running: ```sql -- {...old, col = 'new-value' } is imaginary syntax for a protobuf literal that -- has the same values as old, except col is updated to 'new-value'. UPDATE system.jobs SET status = 'running', payload = { ...payload, started = now(), modified = now(), fraction_completed = 0.0, lease = { node_id = 1, expires = now() + JobLeaseDuration} } WHERE id = ? ``` To update the status of a running job: ```sql UPDATE system.jobs SET payload = {...payload, modified = now(), fraction_completed = 0.2442} WHERE id = ? ``` To take over an expired lease: ```go func maybeAcquireAbandonedJob() (int, JobPayload) { jobs = db.Query(\"SELECT id, payload FROM system.jobs WHERE status = 'running'\") for _, job := range jobs { payload := decode(job.payload) if payload.lease.expires.Add(MaxClockOffset).Before(time.Now()) { payload.lease = &JobLease{NodeID: NODE-ID, Expires: time.Now().Add(JobLeaseDuration)} if db.Exec( \"UPDATE payload = ? WHERE id = ? AND payload = ?\", encode(payload), job.ID, job.payload, ).RowsAffected() == 1 { // Acquired the lease on this job. return"
},
{
"data": "payload } // Another node got the lease. Try the next job. } // This job still has an active lease. Try the next job. } return nil, nil } ``` To mark a job as successful: ```sql UPDATE system.jobs SET status = 'succeeded' payload = {...payload, modified = now()} WHERE id = ? ``` To mark a job as failed: ```sql UPDATE system.jobs SET status = 'failed', payload = {...payload, modified = now(), error = 's3.aws.amazon.com: host unreachable'} WHERE id = ? ``` To find queued or running jobs (e.g., for the default \"System jobs\" admin view): ```sql SELECT * FROM system.jobs WHERE status IN ('pending', 'running') ORDER BY created; ``` To get the status of a specific job (e.g., a user in the SQL CLI): ```sql SELECT status FROM system.jobs WHERE id = ?; ``` Requiring the job leader to periodically issue `UPDATE system.jobs SET payload = {...payload, fraction_completed = ?}` queries to update the progress of running jobs is somewhat unsatisfying. One wishes to be able to conjure the `fraction_completed` column only when the record is read, but this design would introduce significant implementation complexity. Users cannot retrieve fields stored in the protobuf from SQL directly, but several fields that might be useful to users, like `fraction_completed` and `creator`, are stored within the protobuf. We can solve this by introducing a special syntax, like `SHOW JOBS`, if the need arises. Additionally, support for reaching into protobuf columns from a SQL query is planned. Note that at least one current customer has requested the ability to query job status from SQL directly. Even without a `SHOW JOBS` command, basic status information (i.e., `pending`, `running`, `succeeded`, or `failed`) is available directly through SQL under this proposal. To further minimize the chances that we'll need to modify the `system.jobs` schema, we could instead stuff all the data into the `payload` protobuf: ```sql CREATE TABLE system.jobs ( id INT DEFAULT unique_rowid() PRIMARY KEY, payload BYTES, ) ``` This allows for complete flexibility in adjusting the schema, but prevents essentially all useful SQL queries and indices over the table until protobuf columns are natively supported. We could also allow all data to be filtered by widening the `system.jobs` table to include some (or all) of the fields proposed to be stored in the `payload` protobuf. Following is an example where all but the job-specific fields are pulled out of `payload`. ```sql CREATE TABLE system.jobs ( id INT DEFAULT unique_rowid() PRIMARY KEY, status STRING NOT NULL, description STRING NOT NULL, creator STRING NOT NULL, nodeID INT, created TIMESTAMP NOT NULL DEFAULT now(), started TIMESTAMP, finished TIMESTAMP, modified TIMESTAMP NOT NULL DEFAULT now(), descriptors INT[], fractionCompleted FLOAT, INDEX (status, created) ) ``` The `payload` type then simplifies to the below definition, where the job-specific message types are defined as above. ```protobuf message JobPayload { oneof details { BackupJobPayload backup_details = 1; RestoreJobPayload restore_details = 2; SchemaChangeJobPayload schemachangedetails = 3; } } ``` This alternative poses a significant risk if we need to adjust the schema, but unlocks or simplifies some useful SQL"
},
{
"data": "Additionally, none of the `UPDATE` queries in the would need to modify the protobuf in this alternative. Also considered was a schema capable of recording every change made to a job. Each job, then, would consist of a collection of records in the `system.jobs` table, one row per update. The table schema would include a timestamp on every row, and the primary key would expand to `id, timestamp`: ```sql CREATE TABLE system.jobs ( id INT DEFAULT unique_rowid(), timestamp TIMESTAMP NOT NULL DEFAULT now() status STRING NOT NULL, payload BYTES, PRIMARY KEY (id, timestamp) ) ``` The `created`, `started`, and `finished` fields could then be derived from the `timestamp` of the first record in the new state, so the protobuf would simplify to: ```protobuf message JobPayload { string description = 1; string creator = 2; float fraction_completed = ?; uint32 node_id = 2; repeated uint32 descriptor_id = 6; string error = 7; oneof details { BackupJobPayload backup_details = 8; RestoreJobPayload restore_details = 9; SchemaChangeJobPayload schemachangedetails = 10; } } ``` The first entry into the table for a given job would include the immutable facts about the job, like the `description` and the `creator`. Future updates to a job would only include the updated fields in the protobuf. A running job would update `fraction_completed` in the usual case, for example, and would update `node_id` if the coordinating node changed. Protobufs elide omitted fields, so the space requirement of such a scheme is a modest several dozen kilobytes per job, assuming each job is updated several thousand times. Unfortunately, this design drastically complicates the queries necessary to retrieve information from the table. For example, the admin UI would need something like the following to display the list of in-progress and running jobs: ```sql SELECT latest.id AS id, latest.status AS status, latest.timestamp AS updated, latest.payload AS latestPayload, initial.payload AS initialPayload, initial.timestamp AS created FROM ( SELECT jobs.id, jobs.timestamp, jobs.status, jobs.payload FROM (SELECT id, max(timestamp) as timestamp FROM jobs GROUP BY id) AS latest JOIN jobs ON jobs.id = latest.id AND jobs.timestamp = latest.timestamp ) AS latest JOIN jobs AS initial ON initial.id = latest.id AND initial.status = 'pending' WHERE latest.status IN ('pending', 'running') ORDER BY initial.timestamp ``` The above query could be simplified if we instead reproduced the entire record with every update, but that would significantly increase the space requirement. In short, this alternative either complicates implementation or incurs significant space overhead for no clear win, since we don't currently have a specific compelling use case for a full update history. How does a user get a job ID via SQL? Job-creating queries currently block until the job completes; this behavior is consistent with e.g. `ALTER...` queries in other databases. This means, however, that a job-creating query cannot return a job ID immediately. Users will need to search the `system.jobs` table manually for the record that matches the query they ran. The answer to this question is unlikely to influence the design of the schema itself, since how we communicate the job ID to the user is orthogonal to how we keep track of job progress. All system log tables, including `system.jobs`, will eventually need garbage collection to prune old entries from the table, likely with a user-configurable timeframe."
}
] |
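The RFC above already lists the queries the database itself would run; as a complement, the following hedged sketch shows how an operator might use the proposed `system.jobs` schema from SQL, both to find a job (the RFC notes users must search the table manually for their job ID) and to prune old entries (the RFC only says garbage collection will eventually be needed, with a user-configurable timeframe). Only the columns defined in the proposed schema (`id`, `status`, `created`) are used; the 30-day retention window and the manual `DELETE` are illustrative assumptions, not part of the proposal.

```sql
-- Find recently created jobs that are still queued or running,
-- e.g. to locate the ID of a job you just started.
SELECT id, status, created
  FROM system.jobs
 WHERE status IN ('pending', 'running')
 ORDER BY created DESC
 LIMIT 10;

-- Hypothetical retention policy: remove finished jobs older than 30 days.
-- The interval is an assumption; the RFC leaves the timeframe user-configurable.
DELETE FROM system.jobs
 WHERE status IN ('succeeded', 'failed')
   AND created < now() - INTERVAL '30 days';
```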
{
"category": "App Definition and Development",
"file_name": "userPrivilegeCase.md",
"project_name": "StarRocks",
"subcategory": "Database"
} | [
{
"data": "We recommend you customize roles to manage privileges and users. The following examples classify a few combinations of privileges for some common scenarios. ```SQL -- Create a role. CREATE ROLE read_only; -- Grant the USAGE privilege on all catalogs to the role. GRANT USAGE ON ALL CATALOGS TO ROLE read_only; -- Grant the privilege to query all tables to the role. GRANT SELECT ON ALL TABLES IN ALL DATABASES TO ROLE read_only; -- Grant the privilege to query all views to the role. GRANT SELECT ON ALL VIEWS IN ALL DATABASES TO ROLE read_only; -- Grant the privilege to query all materialized views and the privilege to accelerate queries with them to the role. GRANT SELECT ON ALL MATERIALIZED VIEWS IN ALL DATABASES TO ROLE read_only; ``` And you can further grant the privilege to use UDFs in queries: ```SQL -- Grant the USAGE privilege on all database-level UDF to the role. GRANT USAGE ON ALL FUNCTIONS IN ALL DATABASES TO ROLE read_only; -- Grant the USAGE privilege on global UDF to the role. GRANT USAGE ON ALL GLOBAL FUNCTIONS TO ROLE read_only; ``` ```SQL -- Create a role. CREATE ROLE write_only; -- Grant the USAGE privilege on all catalogs to the role. GRANT USAGE ON ALL CATALOGS TO ROLE write_only; -- Grant the INSERT and UPDATE privileges on all tables to the role. GRANT INSERT, UPDATE ON ALL TABLES IN ALL DATABASES TO ROLE write_only; -- Grant the REFRESH privilege on all materialized views to the role. GRANT REFRESH ON ALL MATERIALIZED VIEWS IN ALL DATABASES TO ROLE write_only; ``` ```SQL -- Create a role. CREATE ROLE readcatalogonly; -- Grant the USAGE privilege on the destination catalog to the role. GRANT USAGE ON CATALOG hivecatalog TO ROLE readcatalog_only; -- Switch to the corresponding catalog. SET CATALOG hive_catalog; -- Grant the privileges to query all tables and all views in the external catalog. GRANT SELECT ON ALL TABLES IN ALL DATABASES TO ROLE readcatalogonly; ``` :::tip For views in external catalogs, you can query only Hive table views (since v3.1). ::: You can only write data into Iceberg tables (since v3.1) and Hive tables (since v3.2). ```SQL -- Create a role. CREATE ROLE writecatalogonly; -- Grant the USAGE privilege on the destination catalog to the role. GRANT USAGE ON CATALOG icebergcatalog TO ROLE readcatalog_only; -- Switch to the corresponding catalog. SET CATALOG iceberg_catalog; -- Grant the privilege to write data into Iceberg tables. GRANT INSERT ON ALL TABLES IN ALL DATABASES TO ROLE writecatalogonly; ``` Grant privileges to perform global backup and restore operations: The privileges to perform global backup and restore operations allow the role to back up and restore any database, table, or partition. It requires the REPOSITORY privilege on the SYSTEM level, the privileges to create databases in the default catalog, to create tables in any database, and to load and export data on any table. ```SQL -- Create a"
},
{
"data": "CREATE ROLE recover; -- Grant the REPOSITORY privilege on the SYSTEM level. GRANT REPOSITORY ON SYSTEM TO ROLE recover; -- Grant the privilege to create databases in the default catalog. GRANT CREATE DATABASE ON CATALOG default_catalog TO ROLE recover; -- Grant the privilege to create tables in any database. GRANT CREATE TABLE ON ALL DATABASES TO ROLE recover; -- Grant the privilege to load and export data on any table. GRANT INSERT, EXPORT ON ALL TABLES IN ALL DATABASES TO ROLE recover; ``` Grant the privileges to perform database-level backup and restore operations: The privileges to perform database-level backup and restore operations require the REPOSITORY privilege on the SYSTEM level, the privilege to create databases in the default catalog, the privilege to create tables in any database, the privilege to load data into any table, and the privilege export data from any table in the database to be backed up. ```SQL -- Create a role. CREATE ROLE recover_db; -- Grant the REPOSITORY privilege on the SYSTEM level. GRANT REPOSITORY ON SYSTEM TO ROLE recover_db; -- Grant the privilege to create databases. GRANT CREATE DATABASE ON CATALOG defaultcatalog TO ROLE recoverdb; -- Grant the privilege to create tables. GRANT CREATE TABLE ON ALL DATABASES TO ROLE recover_db; -- Grant the privilege to load data into any table. GRANT INSERT ON ALL TABLES IN ALL DATABASES TO ROLE recover_db; -- Grant the privilege to export data from any table in the database to be backed up. GRANT EXPORT ON ALL TABLES IN DATABASE <dbname> TO ROLE recoverdb; ``` Grant the privileges to perform table-level backup and restore operations: The privileges to perform table-level backup and restore operations require the REPOSITORY privilege on the SYSTEM level, the privilege to create tables in corresponding databases, the privilege to load data into any table in the database, and the privilege to export data from the table to be backed up. ```SQL -- Create a role. CREATE ROLE recover_tbl; -- Grant the REPOSITORY privilege on the SYSTEM level. GRANT REPOSITORY ON SYSTEM TO ROLE recover_tbl; -- Grant the privilege to create tables in corresponding databases. GRANT CREATE TABLE ON DATABASE <dbname> TO ROLE recovertbl; -- Grant the privilege to load data into any table in a database. GRANT INSERT ON ALL TABLES IN DATABASE <dbname> TO ROLE recoverdb; -- Grant the privilege to export data from the table you want to back up. GRANT EXPORT ON TABLE <tablename> TO ROLE recovertbl; ``` Grant the privileges to perform partition-level backup and restore operations: The privileges to perform partition-level backup and restore operations require the REPOSITORY privilege on the SYSTEM level, and the privilege to load and export data on the corresponding table. ```SQL -- Create a role. CREATE ROLE recover_par; -- Grant the REPOSITORY privilege on the SYSTEM level. GRANT REPOSITORY ON SYSTEM TO ROLE recover_par; -- Grant the privilege to load and export data on the corresponding table. GRANT INSERT, EXPORT ON TABLE <tablename> TO ROLE recoverpar; ```"
}
] |
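The examples above define roles but stop short of assigning them. The sketch below shows one way a newly created role could be given to a user; the user name, host, and password are placeholders, and the exact `CREATE USER` / `GRANT` / `SET ROLE` syntax should be checked against the StarRocks version in use, so treat this as a usage sketch rather than part of the original guide.

```sql
-- Hypothetical user; name, host, and password are placeholders.
CREATE USER 'report_user'@'%' IDENTIFIED BY 'change_me';

-- Assign the read-only role defined above to that user.
GRANT read_only TO USER 'report_user'@'%';

-- In the user's own session, activate the role before querying.
SET ROLE read_only;
```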
{
"category": "App Definition and Development",
"file_name": "kafka-to-pubsub-example.md",
"project_name": "Beam",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "layout: post title: \"Example to ingest data from Apache Kafka to Google Cloud Pub/Sub\" date: 2021-01-15 00:00:01 -0800 categories: blog java authors: arturkhanin ilyakozyrev alexkosolapov <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> In this blog post we present an example that creates a pipeline to read data from a single topic or multiple topics from and write data into a topic in . The example provides code samples to implement simple yet powerful pipelines and also provides an out-of-the-box solution that you can just _\" plug'n'play\"_. This end-to-end example is included in and can be downloaded . We hope you will find this example useful for setting up data pipelines between Kafka and Pub/Sub. Supported data formats: Serializable plain text formats, such as JSON Supported input source configurations: Single or multiple Apache Kafka bootstrap servers Apache Kafka SASL/SCRAM authentication over plaintext or SSL connection Secrets vault service Supported destination configuration: Single Google Pub/Sub topic In a simple scenario, the example will create an Apache Beam pipeline that will read messages from a source Kafka server with a source topic, and stream the text messages into specified Pub/Sub destination topic. Other scenarios may need Kafka SASL/SCRAM authentication, that can be performed over plaintext or SSL encrypted connection. The example supports using a single Kafka user account to authenticate in the provided source Kafka servers and topics. To support SASL authentication over SSL the example will need an SSL certificate location and access to a secrets vault service with Kafka username and password, currently supporting HashiCorp Vault. There are two ways to execute the pipeline. Locally. This way has many options - run directly from your IntelliJ, or create `.jar` file and run it in the terminal, or use your favourite method of running Beam pipelines. In using Google Cloud : With `gcloud` command-line tool you can create a out of this Beam example and execute it in Google Cloud Platform. _This requires corresponding modifications of the example to turn it into a template._ This example exists as a within repository and can be run with no additional code modifications. Give this Beam end-to-end example a try. If you are new to Beam, we hope this example will give you more understanding on how pipelines work and look like. If you are already using Beam, we hope some code samples in it will be useful for your use cases. Please if you encounter any issues."
}
] |
{
"category": "App Definition and Development",
"file_name": "v22.8.21.38-lts.md",
"project_name": "ClickHouse",
"subcategory": "Database"
} | [
{
"data": "sidebar_position: 1 sidebar_label: 2023 Backported in : Packing inline cache into docker images sometimes causes strange special effects. Since we don't use it at all, it's good to go. (). Backported in : Preserve environment parameters in `clickhouse start` command. Fixes . (). Fix Block structure mismatch in Pipe::unitePipes for FINAL (). Fix ORDER BY tuple of WINDOW functions (). Fix `countSubstrings()` hang with empty needle and a column haystack (). The implementation of AnyHash was non-conformant. (). init and destroy ares channel on demand.. (). clickhouse-keeper: fix implementation of server with poll() (). Not-ready Set (). Fix incorrect normal projection AST format (). Fix: interpolate expression takes source column instead of same name aliased from select expression. (). Correctly handle totals and extremes with `DelayedSource` (). Fix: sorted distinct with sparse columns (). Fix crash in comparison functions due to incorrect query analysis (). Fix deadlocks in StorageTableFunctionProxy (). Disable testreversedns_query/test.py (). Disable testhostregexpmultipleptr_records/test.py (). Fix broken `02862sorteddistinctsparsefix` (). Get rid of describe_parameters for the best robot token ()."
}
] |
{
"category": "App Definition and Development",
"file_name": "beam-2.13.0.md",
"project_name": "Beam",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: \"Apache Beam 2.13.0\" date: 2019-06-07 00:00:01 -0800 categories: blog release aliases: /blog/2019/05/22/beam-2.13.0.html authors: angoenka <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> We are happy to present the new 2.13.0 release of Beam. This release includes both improvements and new functionality. See the for this release.<!--more--> For more information on changes in 2.13.0, check out the . Support reading query results with the BigQuery storage API. Support KafkaIO to be configured externally for use with other SDKs. BigQuery IO now supports BYTES datatype on Python 3. Avro IO support enabled on Python 3. For Python 3 pipelines, the default Avro library used by Beam AvroIO and Dataflow workers was switched from avro-python3 to fastavro. Flink 1.8 support added. Support to run word count on Portable Spark runner. ElementCount metrics in FnApi Dataflow Runner. Support to create BinaryCombineFn from lambdas. When writing BYTES Datatype into Bigquery with Beam Bigquery IO on Python DirectRunner, users need to base64-encode bytes values before passing them to Bigquery IO. Accordingly, when reading bytes data from BigQuery, the IO will also return base64-encoded bytes. This change only affects Bigquery IO on Python DirectRunner. New DirectRunner behavior is consistent with treatment of Bytes by Beam Java Bigquery IO, and Python Dataflow Runner. Various bug fixes and performance improvements. According to git shortlog, the following people contributed to the 2.13.0 release. Thank you to all contributors! Aaron Li, Ahmet Altay, Aizhamal Nurmamat kyzy, Alex Amato, Alexey Romanenko, Andrew Pilloud, Ankur Goenka, Anton Kedin, apstndb, Boyuan Zhang, Brian Hulette, Brian Quinlan, Chamikara Jayalath, Cyrus Maden, Daniel Chen, Daniel Oliveira, David Cavazos, David Moravek, David Yan, EdgarLGB, Etienne Chauchot, frederik2, Gleb Kanterov, Harshit Dwivedi, Harsh Vardhan, Heejong Lee, Hennadiy Leontyev, Henri-Mayeul de Benque, Ismal Meja, Jae-woo Kim, Jamie Kirkpatrick, Jan Lukavsk, Jason Kuster, Jean-Baptiste Onofr, JohnZZGithub, Jozef Vilcek, Juta, Kenneth Jung, Kenneth Knowles, Kyle Weaver, ukasz Gajowy, Luke Cwik, Mark Liu, Mathieu Blanchard, Maximilian Michels, Melissa Pashniak, Michael Luckey, Michal Walenia, Mike Kaplinskiy, Mike Pedersen, Mikhail Gryzykhin, Mikhail-Ivanov, Niklas Hansson, pabloem, Pablo Estrada, Pranay Nanda, Reuven Lax, Richard Moorhead, Robbe Sneyders, Robert Bradshaw, Robert Burke, Roman van der Krogt, rosetn, Rui Wang, Ryan Yuan, Sam Whittle, sudhan499, Sylwester Kardziejonek, Ted, Thomas Weise, Tim Robertson, ttanay, tvalentyn, Udi Meiri, Valentyn Tymofieiev, Xinyu Liu, Yifan Zou, yoshiki.obata, Yueyang Qiu"
}
] |
{
"category": "App Definition and Development",
"file_name": "code-change-guide.md",
"project_name": "Beam",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "<!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> Last Updated: Apr 18, 2024 This guide is for Beam users and developers who want to change or test Beam code. Specifically, this guide provides information about: Testing code changes locally Building Beam artifacts with modified Beam code and using the modified code for pipelines The guide contains the following sections: : A description of the Apache Beam GitHub repository, including steps for setting up your and for verifying the configuration of your . : Guidance for setting up a Java environment, running and writing integration tests, and running a pipeline with modified Beam code. : Guidance for configuring your console for Python development, running unit and integration tests, and running a pipeline with modified Beam code. The Apache Beam GitHub repository (Beam repo) is, for the most part, a \"mono repo\". It contains everything in the Beam project, including the SDK, test infrastructure, dashboards, the , and the . The following example code paths in the Beam repo are relevant for SDK development. Java code paths are mainly found in two directories: `sdks/java` and `runners`. The following list provides notes about the contents of these directories and some of the subdirectories. `sdks/java` - Java SDK `sdks/java/core` - Java core `sdks/java/harness` - SDK harness (entrypoint of SDK container) `runners` - Java runner supports, including the following items: `runners/direct-java` - Java direct runner `runners/flink-java` - Java Flink runner `runners/google-cloud-dataflow-java` - Dataflow runner (job submission, translation, and so on) `runners/google-cloud-dataflow-java/worker` - Worker for Dataflow jobs that don't use Runner v2 For SDKs in other languages, the `sdks/LANG` directory contains the relevant files. The following list provides notes about the contents of some of the subdirectories. `sdks/python` - Setup file and scripts to trigger test-suites `sdks/python/apache_beam` - The Beam package `sdks/python/apache_beam/runners/worker` - SDK worker harness entrypoint and state sampler `sdks/python/apache_beam/io` - I/O connectors `sdks/python/apache_beam/transforms` - Most core components `sdks/python/apache_beam/ml` - Beam ML code `sdks/python/apache_beam/runners` - Runner implementations and wrappers ... `sdks/go` - Go SDK `.github/workflow` - GitHub action workflows, such as the tests that run during a pull request. Most workflows run a single Gradle command. To learn which command to run locally during development, during tests, check which command is running. The Beam repo is a single Gradle project that contains all components, including Java, Python, Go, and the website. Before you begin development, familiarize yourself with the Gradle project structure by reviewing in the Gradle documentation. 
 Gradle uses the following key concepts: project: a folder that contains the `build.gradle` file task: an action defined in the `build.gradle` file plugin: predefined tasks and hierarchies; runs in the project's `build.gradle` file Common tasks for a Java project or subproject include: `compileJava` - compiles the Java source files `compileTestJava` - compiles the Java test source files `test` - runs unit tests `integrationTest` - runs integration tests To run a Gradle task, use the command `./gradlew -p <PROJECT_PATH> <TASK>` or the command"
},
{
"data": ":<PROJECT>:<PATH>:<TASK>`. For example: ``` ./gradlew -p sdks/java/core compileJava ./gradlew :sdks:java:harness:test ``` For Apache Beam, one plugin manages everything: `buildSrc/src/main/groovy/org/apache/beam/gradle/BeamModulePlugin`. The `BeamModulePlugin` is used for the following tasks: Manage Java dependencies Configure projects such as Java, Python, Go, Proto, Docker, Grpc, and Avro For Java, use `applyJavaNature`; for Python, use `applyPythonNature` Define common custom tasks for each type of project `test`: run Java unit tests `spotlessApply`: format Java code In every Java project or subproject, the `build.gradle` file starts with the following code: ```groovy apply plugin: 'org.apache.beam.module' applyJavaNature( ... ) ``` To set up a local development environment, first review the . If you plan to use Dataflow, you need to set up `gcloud` credentials. To set up `gcloud` credentials, see in the Google Cloud documentation. Depending on the languages involved, your `PATH` file needs to have the following elements configured. A Java environment that uses a supported Java version, preferably Java 8. This environment is needed for all development, because Beam is a Gradle project that uses JVM. Recommended: To manage Java versions, use . A Python environment that uses any supported Python version. This environment is needed for Python SDK development. Recommended: To manage Python versions, use and a . A Go environment that uses latest Go version. This environment is needed for Go SDK development. This environment is also needed for SDK container changes for all SDKs, because the container entrypoint scripts are written in Go. A Docker environment. This environment is needed for the following tasks: SDK container changes. Some cross-language functionality (if you run an SDK container image; not required in Beam 2.53.0 and later verions). Portable runners, such as using job server. The following list provides examples of when you need specific environemnts. When you test the code change in `sdks/java/io/google-cloud-platform`, you need a Java environment. When you test the code change in `sdks/java/harness`, you need a Java environment, a Go environment, and Docker environment. You need the Docker environment to compile and build the Java SDK harness container image. When you test the code change in `sdks/python/apache_beam`, you need a Python environment. This section provides guidance for setting up your environment to modify or test Java code. To set up IntelliJ, follow these steps. The IDE isn't required for changing the code and testing. You can run tests can by using a Gradle command line, as described in the Console setup section. From IntelliJ, open `/beam` (Important: Open the repository root directory, not `sdks/java`). Wait for indexing. Indexing might take a few minutes. Because Gradle is a self-contained build tool, if the prerequisites are met, the environment setup is complete. To verify whether the load is successful, follow these steps: Find the file `examples/java/build.gradle`. Next to the wordCount task, a Run button is present. Click Run. The wordCount example compiles and runs. <img width=\"631\" alt=\"image\" src=\"https://github.com/apache/beam/assets/8010435/f5203e8e-0f9c-4eaa-895b-e16f68a808a2\"> To run tests by using the Gradle command line (shell), in the command-line environment, run the following command. This command compiles the Apache Beam SDK, the WordCount pipeline, and a Hello-world program for data processing. 
It then runs the pipeline on the Direct Runner. ```shell $ cd beam $ ./gradlew :examples:java:wordCount ``` When the command completes successfully, the following text appears in the Gradle build log: ``` ... BUILD SUCCESSFUL in 2m 32s 96 actionable tasks: 9 executed, 87 up-to-date 3:41:06 PM: Execution finished"
},
{
"data": "``` In addition, the following text appears in the output file: ```shell $ head /tmp/output.txt* ==> /tmp/output.txt-00000-of-00003 <== should: 38 bites: 1 depraved: 1 gauntlet: 1 battle: 6 sith: 2 cools: 1 natures: 1 hedge: 1 words: 9 ==> /tmp/output.txt-00001-of-00003 <== elements: 1 Advise: 2 fearful: 2 towards: 4 ready: 8 pared: 1 left: 8 safe: 4 canst: 7 warrant: 2 ==> /tmp/output.txt-00002-of-00003 <== chanced: 1 ... ``` This section explains how to run unit tests locally after you make a code change in the Java SDK, for example, in `sdks/java/io/jdbc`. Tests are stored in the `src/test/java` folder of each project. Unit tests have the filename `.../Test.java`. Integration tests have the filename `.../IT.java`. To run all unit tests under a project, use the following command: ``` ./gradlew :sdks:java:harness:test ``` Find the JUnit report in an HTML file in the file path `<invoked_project>/build/reports/tests/test/index.html`. To run a specific test, use the following commands: ``` ./gradlew :sdks:java:harness:test --tests org.apache.beam.fn.harness.CachesTest ./gradlew :sdks:java:harness:test --tests *CachesTest ./gradlew :sdks:java:harness:test --tests *CachesTest.testClearableCache ``` To run tests using IntelliJ, click the ticks to run either a whole test class or a specific test. To debug the test, set breakpoints. <img width=\"452\" alt=\"image\" src=\"https://github.com/apache/beam/assets/8010435/7ae2a65c-a104-48a2-8bad-ff8c52dd1943\"> These steps don't apply to `sdks:java:core` tests. To invoke those unit tests, use the command `:runners:direct-java:needsRunnerTest`. Java core doesn't depend on a runner. Therefore, unit tests that run a pipeline require the Direct Runner. To run integration tests, use the Direct Runner. Integration tests have the filename `.../IT.java`. They use . Set options by using `TestPipelineOptions`. Integration tests differ from standard pipelines in the following ways: By default, they block on run (on `TestDataflowRunner`). They have a default timeout of 15 minutes. The pipeline options are set in the system property `beamTestPipelineOptions`. To configure the test, you need to set the property `-DbeamTestPipelineOptions=[...]`. This property sets the runner that the test uses. The following example demonstrates how to run an integration test by using the command line. This example includes the options required to run the pipeline on the Dataflow runner. ``` -DbeamTestPipelineOptions='[\"--runner=TestDataflowRunner\",\"--project=mygcpproject\",\"--region=us-central1\",\"--stagingLocation=gs://mygcsbucket/path\"]' ``` To set up a `TestPipeline` object in an integration test, use the following code: ```java @Rule public TestPipeline pipelineWrite = TestPipeline.create(); @Test public void testSomething() { pipeline.apply(...); pipeline.run().waitUntilFinish(); } ``` The task that runs the test needs to specify the runner. The following examples demonstrate how to specify the runner: To run a Google Cloud I/O integration test on the Direct Runner, use the command `:sdks:java:io:google-cloud-platform:integrationTest`. To run integration tests on the standard Dataflow runner, use the command `:runners:google-cloud-dataflow-java:googleCloudPlatformLegacyWorkerIntegrationTest`. To run integration test on Dataflow runner v2, use the command `:runners:google-cloud-dataflow-java:googleCloudPlatformRunnerV2IntegrationTest`. To see how to run your workflow locally, refer to the Gradle command that the GitHub Action workflow runs. 
The following commands demonstrate an example invocation: ``` ./gradlew :runners:google-cloud-dataflow-java:examplesJavaRunnerV2IntegrationTest \\ -PdisableSpotlessCheck=true -PdisableCheckStyle=true -PskipCheckerFramework \\ -PgcpProject=<yourgcpproject> -PgcpRegion=us-central1 \\ -PgcsTempRoot=gs://<yourgcsbucket>/tmp ``` To apply code changes to your pipeline, we recommend that you start with a separate branch. If you're making a pull request or want to test a change with the dev branch, start from Beam HEAD (). If you're making a patch on released Beam (2.xx.0), start from a tag, such as . Then, in the Beam repo, use the following command to compile the project that includes the code change. This example modifies `sdks/java/io/kafka`. ```"
},
{
"data": "-Ppublishing -p sdks/java/io/kafka publishToMavenLocal ``` By default, this command publishes the artifact with modified code to the Maven Local repository (`~/.m2/repository`). The change is picked up when the user pipeline runs. If your code change is made in a development branch, such as on Beam master or a PR, the artifact is produced under version `2.xx.0-SNAPSHOT` instead of on a release tag. To pick up this dependency, you need to make additional configurations in your pipeline project. The following examples provide guidance for making configurations in Maven and Gradle. Follow these steps for Maven projects. Recommended: Use the WordCount `maven-archetype` as a template to set up your project (https://beam.apache.org/get-started/quickstart-java/). To add a snapshot repository, include the following elements: ```xml <repository> <id>Maven-Snapshot</id> <name>maven snapshot repository</name> <url>https://repository.apache.org/content/groups/snapshots/</url> </repository> ``` In the `pom.xml` file, modify the value of `beam.version`: ```xml <properties> <beam.version>2.XX.0-SNAPSHOT</beam.version> ``` Follow these steps for Gradle projects. In the `build.gradle` file, add the following code: ```groovy repositories { maven { url \"https://repository.apache.org/content/groups/snapshots\" } } ``` Set the Beam dependency versions to the following value: `2.XX.0-SNAPSHOT`. This configuration directs the build system to download Beam nightly builds from the Maven Snapshot Repository. The local build that you edited isn't downloaded. You usually don't need to build all Beam artifacts locally. If you do need to build all Beam artifacts locally, use the following command for all projects `./gradlew -Ppublishing publishToMavenLocal`. The following situations require additional consideration. If you're using the standard Dataflow runner (not Runner v2), and the worker harness has changed, do the following: Use the following command to compile `dataflowWorkerJar`: ``` ./gradlew :runners:google-cloud-dataflow-java:worker:shadowJar ``` The jar is located in the build output. Use the following command to pass `pipelineOption`: ``` --dataflowWorkerJar=/.../beam-runners-google-cloud-dataflow-java-legacy-worker-2.XX.0-SNAPSHOT.jar ``` If you're using Dataflow Runner v2 and `sdks/java/harness` or its dependencies (like `sdks/java/core`) have changed, do the following: Use the following command to build the SDK harness container: ```shell ./gradlew :sdks:java:container:java8:docker # java8, java11, java17, etc docker tag apache/beamjava8sdk:2.49.0.dev \\ \"us.gcr.io/apache-beam-testing/beamjava11sdk:2.49.0-custom\" # change to your container registry docker push \"us.gcr.io/apache-beam-testing/beamjava11sdk:2.49.0-custom\" ``` Run the pipeline with the following options: ``` --experiments=userunnerv2 \\ --sdkContainerImage=\"us.gcr.io/apache-beam-testing/beamjava11sdk:2.49.0-custom\" ``` The Beam Python SDK is distributed as a single wheel, which is more straightforward than the Java SDK. These instructions explain how to configure your console (shell) for Python development. In this example, the working directory is set to `sdks/python`. Recommended: Install the Python interpreter by using `pyenv`. 
Use the following commands: `install prerequisites` `curl https://pyenv.run | bash` `pyenv install 3.X` (a supported Python version; see `python_version`). Use the following commands to set up and activate the virtual environment: `pyenv virtualenv 3.X ENV_NAME` `pyenv activate ENV_NAME` Install the `apache_beam` package in editable mode: `pip install -e .[gcp, test]` For development that uses an SDK container image, do the following: Install Docker Desktop. Install Go. If you're going to submit PRs, use the following commands to set up the pre-commit hook for Python code changes (nobody likes lint failures!!): ```shell (env) $ pip install pre-commit (env) $ pre-commit install (env) $ pre-commit uninstall ``` (The `pre-commit uninstall` command removes the hooks again if you no longer want them.) Although the tests can be triggered with a Gradle command, that method sets up a new `virtualenv` and installs dependencies before each run, which takes minutes. Therefore, it's useful to have a persistent `virtualenv`. Unit tests have the filename `_test.py`. To run all tests in a file, use the following command: ```shell pytest -v apache_beam/io/textio_test.py ``` To run all tests in a class, use the following command: ```shell pytest -v apache_beam/io/textio_test.py::TextSourceTest"
},
{
"data": "``` To run a specific test, use the following command: ```shell pytest -v apachebeam/io/textiotest.py::TextSourceTest::test_progress ``` Integration tests have the filename `ittest.py`. To run an integration test on the Direct Runner, use the following command: ```shell python -m pytest -o logcli=True -o loglevel=Info \\ apachebeam/ml/inference/pytorchinferenceittest.py::PyTorchInference \\ --test-pipeline-options='--runner=TestDirectRunner ``` If you're preparing a PR, for test-suites to run in PostCommit Python, add tests paths under in the `common.gradle` file. To run an integration test on the Dataflow Runner, follow these steps: To build the SDK tarball, use the following command: ``` cd sdks/python pip install build && python -m build --sdist ``` The tarball file is generated in the `sdks/python/sdist/` directory. To specify the tarball file, use the `--test-pipeline-options` parameter. Use the location `--sdk_location=dist/apache-beam-2.53.0.dev0.tar.gz`. The following example shows the complete command: ```shell python -m pytest -o logcli=True -o loglevel=Info \\ apachebeam/ml/inference/pytorchinferenceittest.py::PyTorchInference \\ --test-pipeline-options='--runner=TestDataflowRunner --project=<project> --temp_location=gs://<bucket>/tmp --sdk_location=dist/apache-beam-2.35.0.dev0.tar.gz --region=us-central1 ``` If you're preparing a PR, to include integration tests in the Python PostCommit test suite's Dataflow task, use the marker `@pytest.mark.it_postcommit`. To build containers for modified SDK code, follow these steps. Run the following command: ```shell ./gradlew :sdks:python:container:py39:docker \\ -Pdocker-repository-root=<gcr.io/location> -Pdocker-tag=<tag> ``` Push the containers. Specify the container location by using the `--sdkcontainerimage` option. The following example shows a complete command: ```shell python -m pytest -o logcli=True -o loglevel=Info \\ apachebeam/ml/inference/pytorchinferenceittest.py::PyTorchInference \\ --test-pipeline-options='--runner=TestDataflowRunner --project=<project> --temp_location=gs://<bucket>/tmp --sdkcontainerimage=us.gcr.io/apache-beam-testing/beam-sdk/beam:dev --region=us-central1 ``` This section provides two options for specifying additional test dependencies. Use the `--requirementsfile` options. The following example demonstrates how to use the `--requirementsfile` options: ```shell python -m pytest -o logcli=True -o loglevel=Info \\ apachebeam/ml/inference/pytorchinferenceittest.py::PyTorchInference \\ --test-pipeline-options='--runner=TestDataflowRunner --project=<project> --temp_location=gs://<bucket>/tmp --sdk_location=us.gcr.io/apache-beam-testing/beam-sdk/beam:dev --region=us-central1 requirements_file=requirements.txt ``` If you're using the Dataflow runner, use . You can use the as a base and then apply your changes. To run your pipeline with modified beam code, follow these steps: Build the Beam SDK tarball. Under `sdks/python`, run `python -m build --sdist`. For more details, see on this page. Install the Apache Beam Python SDK in your Python virtual environment with the necessary extensions. Use a command similar to the following example: `pip install /path/to/apache-beam.tar.gz[gcp]`. Initiate your Python script. To run your pipeline, use a command similar to the following example: ```shell python mypipeline.py --runner=DataflowRunner --sdklocation=/path/to/apache-beam.tar.gz --project=myproject --region=us-central1 --templocation=gs://my-bucket/temp ... 
``` Tips for using the Dataflow runner: The Python worker installs the Apache Beam SDK before processing work items. Therefore, you don't usually need to provide a custom worker container. If your Google Cloud VM doesn't have internet access and transitive dependencies are changed from the officially released container images, you do need to provide a custom worker container. In this case, see on this page. Installing the Beam Python SDK from source can be slow (3.5 minutes for an `n1-standard-1` machine). As an alternative, if the host machine uses amd64 architecture, you can build a wheel instead of a tarball by using a command similar to `./gradlew :sdks:python:bdistPy311linux` (for Python 3.11). To pass the built wheel, use the `--sdk_location` option. That installation completes in seconds. `NameError` when running `DoFn` on a remote runner: global imports, functions, and variables in the main pipeline module are not serialized by default. Use the `--save_main_session` pipeline option to enable this (a minimal sketch follows at the end of this entry). <!-- TODO # Go Guide --> <!-- # Cross-language Guide --> https://repository.apache.org/content/groups/snapshots/org/apache/beam/ Java SDK build (nightly) https://gcr.io/apache-beam-testing/beam-sdk Beam SDK container build (Java, Python, Go, every 4 hrs) https://gcr.io/apache-beam-testing/beam_portability Portable runner (Flink, Spark) job server container (nightly) gs://beam-python-nightly-snapshots Python SDK build (nightly)"
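The following is a minimal, hedged sketch (not taken from the Beam docs) of how a pipeline might enable `save_main_session` programmatically so that module-level imports and globals are pickled and shipped to remote workers. `PipelineOptions`, `SetupOptions`, and the option name are standard Beam APIs; the pipeline contents are placeholders for illustration only.

```python
# Sketch: enable save_main_session so module-level state is serialized
# to remote workers, avoiding NameError inside DoFns on remote runners.
import math  # a module-level import that the remote DoFn relies on

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, SetupOptions


def run():
    options = PipelineOptions()  # could also parse --save_main_session from argv
    options.view_as(SetupOptions).save_main_session = True

    with beam.Pipeline(options=options) as p:
        (
            p
            | beam.Create([1.0, 4.0, 9.0])
            | beam.Map(lambda x: math.sqrt(x))  # uses the module-level import
            | beam.Map(print)
        )


if __name__ == "__main__":
    run()
```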
}
] |
{
"category": "App Definition and Development",
"file_name": "remove-operators.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: \"- and #- (remove operators) [JSON]\" headerTitle: \"- and #- (remove operators)\" linkTitle: \"- and #- (remove)\" description: Remove key-value pairs from an object or remove a single value from an array. menu: v2.18: identifier: remove-operators parent: json-functions-operators weight: 13 type: docs Purpose: Remove key-value pairs from an object or a single value from an array. The plain `-` variant takes the specified object itself. The `#-` variant takes the path from the specified object. Notes: Describing the behavior by using the term \"remove\" is a convenient shorthand. The actual effect of these operators is to create a new `jsonb` value from the specified `jsonb` value according to the rule that the operator implements, parameterized by the SQL value on the right of the operator. Purpose: Remove key-value pairs from an object or a single value from an array. Signature: ``` input values: jsonb - [int | text] return value: jsonb ``` Notes: There is no `json` overload. To remove a single key-value pair: ```plpgsql do $body$ declare j_left constant jsonb := '{\"a\": \"x\", \"b\": \"y\"}'; key constant text := 'a'; j_expected constant jsonb := '{\"b\": \"y\"}'; begin assert jleft - key = jexpected, 'unexpected'; end; $body$; ``` To remove several key-value pairs: ```plpgsql do $body$ declare j_left constant jsonb := '{\"a\": \"p\", \"b\": \"q\", \"c\": \"r\"}'; key_list constant text[] := array['a', 'c']; j_expected constant jsonb := '{\"b\": \"q\"}'; begin assert jleft - keylist = j_expected, 'unexpected'; end; $body$; ``` To remove a single value from an array: ```plpgsql do $body$ declare j_left constant jsonb := '[1, 2, 3, 4]'; idx constant int := 0; j_expected constant jsonb := '[2, 3, 4]'; begin assert jleft - idx = jexpected, 'unexpected'; end; $body$; ``` There is no direct way to remove several values from an array at a list of indexes, analogous to the ability to remove several key-value pairs from an object with a list of pair keys. The obvious attempt fails with this error: ``` operator does not exist: jsonb - integer[] ``` You can achieve the result thus: ```plpgsql do $body$ declare j_left constant jsonb := '[1, 2, 3, 4, 5, 7]'; idx constant int := 0; j_expected constant jsonb := '[4, 5, 7]'; begin assert ((jleft - idx) - idx) - idx = jexpected, 'unexpected'; end; $body$; ``` Purpose: Remove a single key-value pair from an object or a single value from an array at the specified path. Signature: ``` input values: jsonb - text[] return value: jsonb ``` Notes: There is no `json` overload. ```plpgsql do $body$ declare j_left constant jsonb := '[\"a\", {\"b\":17, \"c\": [\"dog\", \"cat\"]}]'; path constant text[] := array['1', 'c', '0']; j_expected constant jsonb := '[\"a\", {\"b\": 17, \"c\": [\"cat\"]}]'; begin assert jleft #- path = jexpected, 'assert failed'; end; $body$; ``` Just as with the , array index values are presented as convertible `text` values. Notice that the address of each JSON array element along the path is specified JSON-style, where the index starts at zero."
}
] |
{
"category": "App Definition and Development",
"file_name": "topk.md",
"project_name": "ClickHouse",
"subcategory": "Database"
} | [
{
"data": "slug: /en/sql-reference/aggregate-functions/reference/topk sidebar_position: 108 Returns an array of the approximately most frequent values in the specified column. The resulting array is sorted in descending order of approximate frequency of values (not by the values themselves). Implements the algorithm for analyzing TopK, based on the reduce-and-combine algorithm from . ``` sql topK(N)(column) topK(N, load_factor)(column) topK(N, load_factor, 'counts')(column) ``` This function does not provide a guaranteed result. In certain situations, errors might occur and it might return frequent values that arent the most frequent values. We recommend using the `N < 10` value; performance is reduced with large `N` values. Maximum value of `N = 65536`. Parameters `N` The number of elements to return. Optional. Default value: 10. `loadfactor` Defines, how many cells reserved for values. If uniq(column) > N * loadfactor, result of topK function will be approximate. Optional. Default value: 3. `counts` Defines, should result contain approximate count and error value. Arguments `column` The value to calculate frequency. Example Take the data set and select the three most frequently occurring values in the `AirlineID` column. ``` sql SELECT topK(3)(AirlineID) AS res FROM ontime ``` ``` text res [19393,19790,19805] ``` See Also"
}
] |
{
"category": "App Definition and Development",
"file_name": "yba_storage-config_gcs.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "Manage a YugabyteDB Anywhere GCS storage configuration Manage a GCS storage configuration in YugabyteDB Anywhere ``` yba storage-config gcs [flags] ``` ``` -n, --name string [Optional] The name of the storage configuration for the operation. Required for create, delete, describe, update. -h, --help help for gcs ``` ``` -a, --apiToken string YugabyteDB Anywhere api token. --config string Config file, defaults to $HOME/.yba-cli.yaml --debug Use debug mode, same as --logLevel debug. --disable-color Disable colors in output. (default false) -H, --host string YugabyteDB Anywhere Host (default \"http://localhost:9000\") -l, --logLevel string Select the desired log level format. Allowed values: debug, info, warn, error, fatal. (default \"info\") -o, --output string Select the desired output format. Allowed values: table, json, pretty. (default \"table\") --timeout duration Wait command timeout, example: 5m, 1h. (default 168h0m0s) --wait Wait until the task is completed, otherwise it will exit immediately. (default true) ``` - Manage YugabyteDB Anywhere storage configurations - Create a GCS YugabyteDB Anywhere storage configuration - Delete a GCS YugabyteDB Anywhere storage configuration - Describe a GCS YugabyteDB Anywhere storage configuration - List GCS YugabyteDB Anywhere storage-configurations - Update an GCS YugabyteDB Anywhere storage configuration"
}
] |
{
"category": "App Definition and Development",
"file_name": "perf-12336.en.md",
"project_name": "EMQ Technologies",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "Isolate channels cleanup from other async tasks (like routes cleanup) by using a dedicated pool, as this task can be quite slow under high network latency conditions."
}
] |
{
"category": "App Definition and Development",
"file_name": "backup-dataFormat.md",
"project_name": "FoundationDB",
"subcategory": "Database"
} | [
{
"data": "This document describes the data format of the files generated by FoundationDB (FDB) backup procedure. The target readers who may benefit from reading this document are: who make changes on the current backup or restore procedure; who writes tools to digest the backup data for analytical purpose; who wants to understand the internals of how backup and restore works. The description of the backup data format is based on FDB 5.2 to FDB 6.1. The backup data format may (although unlikely) change after FDB 6.1. The backup procedure generates two types of files: range files and log files. A range file describes key-value pairs in a range at the version when the backup process takes a snapshot of the range. Different range files have data for different ranges at different versions. A log file describes the mutations taken from a version v<sub>1</sub> to v<sub>2</sub> during the backup procedure. With the key-value pairs in range file and the mutations in log file, the restore procedure can restore the database into a consistent state at a user-provided version v<sub>k</sub> if the backup data is claimed by the restore as restorable at v<sub>k</sub>. (The details of determining if a set of backup data is restorable at a version is out of scope of this document and can be found at . The backup files will be saved in a directory (i.e., url) specified by users. Under the directory, the range files are in the `snapshots` folder. The log files are in the `logs` folder. The convention of the range filename is ` snapshots/snapshot,beginVersion,beginVersion,blockSize`, where `beginVersion` is the version when the key-values in the range file are recorded, and blockSize is the size of data blocks in the range file. The convention of the log filename is `logs/,versionPrefix/log,beginVersion,endVersion,randomUID, blockSize`, where the versionPrefix is a 2-level path (`x/y`) where beginVersion should go such that `x/y/*` contains (10^smallestBucket) possible versions; the randomUID is a random UID, the `beginVersion` and `endVersion` are the version range (left inclusive, right exclusive) when the mutations are recorded; and the `blockSize` is the data block size in the log file. We will use an example to explain what each field in the range and log filename means. Suppose under the backup directory, we have a range file `snapshots/snapshot,78994177,78994177,97` and a log file `logs/0000/0000/log,78655645,98655645,149a0bdfedecafa2f648219d5eba816e,1048576`. The range files filename tells us that all key-value pairs decoded from the file are the KV value in DB at the version `78994177`. The data block size is `97` bytes. The log files filename tells us that the mutations in the log file were the mutations in the DB during the version range `[78655645,98655645)`, and the data block size is `1048576` bytes. A range file can have one to many data blocks. Each data block has a set of key-value pairs. A data block is encoded as follows: `Header startKey k1v1 k2v2 Padding`. Example: The client code writes keys in this sequence: a c d e f g h i j z The backup procedure records the key-value pairs in the database into range file. H = header P = padding"
},
{
"data": "= keys v = value | = block boundary Encoded file: H a cv dv P | H e ev fv gv hv P | H h hv iv jv z Decoded in blocks yields: Block 1: range [a, e) with kv pairs cv, dv Block 2: range [e, h) with kv pairs ev, fv, gv Block 3: range [h, z) with kv pairs hv, iv, jv NOTE: All blocks except for the final block will have one last value which will not be used. This isn't actually a waste since if the next KV pair wouldn't fit within the block after the value then the space after the final key to the next 1MB boundary would just be padding anyway. The code related to how a range file is written is in the `struct RangeFileWriter` in `namespace fileBackup`. The code that decodes a range block is in `ACTOR Future<Standalone<VectorRef<KeyValueRef>>> decodeRangeFileBlock(Reference<IAsyncFile> file, int64_t offset, int len, Database cx)`. A log file can have one to many data blocks. Each block is encoded as `Header, [Param1, Param2]... padding`. The first 32bits in `Param1` and `Param2` specifies the length of the `Param1` and `Param2`. `Param1` specifies the version when the mutations happened; `Param2` encodes the group of mutations happened at the version. Note that if the group of mutations is bigger than the block size, the mutation group will be split across multiple data blocks. For example, we may get `[Param1, Param2part0]`, `[Param1, Param2part1]`. By concatenating the `Param2part0` and `Param2part1`, we can get the group of all mutations happened in the version specified in `Param1`. The encoding format for `Param1` is as follows: `hashValue|commitVersion|part`, where `hashValue` is the hash of the commitVersion, `commitVersion` is the version when the mutations in `Param2`(s) are taken, and `part` is the part number in case we need to concatenate the `Param2` to get the group of all mutations. `hashValue` takes 8bits, `commitVersion` takes 64bits, and `part` takes 32bits. Note that in case of concatenating the partial group of mutations in `Param2` to get the full group of all mutations, the part number should be continuous. The encoding format for the group of mutations, which is Param2 or the concatenated Param2 in case of partial group of mutations in a block, is as follows: `lengthofthemutationgroup | encodedmutation1 | | encodedmutationk`. The `encodedmutationi` is encoded as follows `type|kLen|vLen|Key|Value` where type is the mutation type, such as Set or Clear, `kLen` and `vLen` respectively are the length of the key and value in the mutation. `Key` and `Value` are the serialized value of the Key and Value in the mutation. The code related to how a log file is written is in the `struct LogFileWriter` in `namespace fileBackup`. The code that decodes a mutation block is in `ACTOR Future<Standalone<VectorRef<KeyValueRef>>> decodeLogFileBlock(Reference<IAsyncFile> file, int64_t offset, int len)`. When the restore decodes a serialized integer from the backup file, it needs to convert the serialized value from big endian to little endian. The reason is as follows: When the backup procedure transfers the data to remote blob store, the backup data is encoded in big endian. However, FoundationDB currently only run on little endian machines. The endianness affects the interpretation of an integer, so we must perform the endianness conversion."
}
] |
{
"category": "App Definition and Development",
"file_name": "fix-12802.en.md",
"project_name": "EMQ Technologies",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "Improve cluster discovery behaviour when a node is manually removed from a cluster using 'emqx ctl cluster leave' command. Previously, if the configured cluster 'discovery_strategy' was not 'manual', the left node might re-discover and re-join the same cluster shortly after it left (unless it was stopped). After this change, 'cluster leave' command disables automatic cluster_discovery, so that the left node won't re-join the same cluster again. Cluster discovery can be re-enabled by running 'emqx ctl discovery enable` or by restarting the left node."
}
] |
{
"category": "App Definition and Development",
"file_name": "oriel.md",
"project_name": "Beam",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: \"From Apache Beam to Leukemia early detection\" icon: /images/logos/powered-by/oriel.png hasNav: true cardDescription: \"Oriel Research Therapeutics (ORT) is a startup company in the greater Boston area that provides early detection services for multiple medical conditions, utilizing cutting edge Artificial Intelligence technologies and Next Generation Sequencing (NGS). ORT utilizes Apache Beam pipelines to process over 1 million samples of genomics and clinical information. The processed data is used by ORT in detecting Leukemia, Sepsis, and other medical conditions.\" <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <div> <header class=\"case-study-header\"> <h2 itemprop=\"name headline\">From Apache Beam to Leukemia early detection</h2> </header> Oriel Research Therapeutics (ORT) is a startup company in the greater Boston area that provides early detection services for multiple medical conditions, utilizing cutting edge Artificial Intelligence technologies and Next Generation Sequencing (NGS). ORT utilizes Apache Beam pipelines to process over 1 million samples of genomics and clinical information. The processed data is used by ORT in detecting Leukemia, Sepsis and other medical conditions. </div>"
}
] |
{
"category": "App Definition and Development",
"file_name": "CookBook.md",
"project_name": "VoltDB",
"subcategory": "Database"
} | [
{
"data": "You can find recipes for using Google Mock here. If you haven't yet, please read the document first to make sure you understand the basics. Note: Google Mock lives in the `testing` name space. For readability, it is recommended to write `using ::testing::Foo;` once in your file before using the name `Foo` defined by Google Mock. We omit such `using` statements in this page for brevity, but you should do it in your own code. You must always put a mock method definition (`MOCK_METHOD*`) in a `public:` section of the mock class, regardless of the method being mocked being `public`, `protected`, or `private` in the base class. This allows `ONCALL` and `EXPECTCALL` to reference the mock function from outside of the mock class. (Yes, C++ allows a subclass to specify a different access level than the base class on a virtual function.) Example: ```cpp class Foo { public: ... virtual bool Transform(Gadget* g) = 0; protected: virtual void Resume(); private: virtual int GetTimeOut(); }; class MockFoo : public Foo { public: ... MOCK_METHOD1(Transform, bool(Gadget* g)); // The following must be in the public section, even though the // methods are protected or private in the base class. MOCK_METHOD0(Resume, void()); MOCK_METHOD0(GetTimeOut, int()); }; ``` You can mock overloaded functions as usual. No special attention is required: ```cpp class Foo { ... // Must be virtual as we'll inherit from Foo. virtual ~Foo(); // Overloaded on the types and/or numbers of arguments. virtual int Add(Element x); virtual int Add(int times, Element x); // Overloaded on the const-ness of this object. virtual Bar& GetBar(); virtual const Bar& GetBar() const; }; class MockFoo : public Foo { ... MOCK_METHOD1(Add, int(Element x)); MOCK_METHOD2(Add, int(int times, Element x); MOCK_METHOD0(GetBar, Bar&()); MOCKCONSTMETHOD0(GetBar, const Bar&()); }; ``` Note: if you don't mock all versions of the overloaded method, the compiler will give you a warning about some methods in the base class being hidden. To fix that, use `using` to bring them in scope: ```cpp class MockFoo : public Foo { ... using Foo::Add; MOCK_METHOD1(Add, int(Element x)); // We don't want to mock int Add(int times, Element x); ... }; ``` To mock a class template, append `T` to the `MOCK*` macros: ```cpp template <typename Elem> class StackInterface { ... // Must be virtual as we'll inherit from StackInterface. virtual ~StackInterface(); virtual int GetSize() const = 0; virtual void Push(const Elem& x) = 0; }; template <typename Elem> class MockStack : public StackInterface<Elem> { ... MOCKCONSTMETHOD0_T(GetSize, int()); MOCKMETHOD1T(Push, void(const Elem& x)); }; ``` Google Mock can mock non-virtual functions to be used in what we call _hi-perf dependency injection_. In this case, instead of sharing a common base class with the real class, your mock class will be unrelated to the real class, but contain methods with the same signatures. The syntax for mocking non-virtual methods is the same as mocking virtual methods: ```cpp // A simple packet stream class. None of its members is virtual. class ConcretePacketStream { public: void AppendPacket(Packet* new_packet); const Packet* GetPacket(sizet packetnumber) const; size_t NumberOfPackets() const; ... }; // A mock packet stream class. It inherits from no other, but defines // GetPacket() and NumberOfPackets(). class MockPacketStream { public: MOCKCONSTMETHOD1(GetPacket, const Packet*(sizet packetnumber)); MOCKCONSTMETHOD0(NumberOfPackets, size_t()); ... 
}; ``` Note that the mock class doesn't define `AppendPacket()`, unlike the real class. That's fine as long as the test doesn't need to call it. Next, you need a way to say that you want to use `ConcretePacketStream` in production code and to use `MockPacketStream` in tests."
},
{
"data": "Since the functions are not virtual and the two classes are unrelated, you must specify your choice at compile time (as opposed to run time). One way to do it is to templatize your code that needs to use a packet stream. More specifically, you will give your code a template type argument for the type of the packet stream. In production, you will instantiate your template with `ConcretePacketStream` as the type argument. In tests, you will instantiate the same template with `MockPacketStream`. For example, you may write: ```cpp template <class PacketStream> void CreateConnection(PacketStream* stream) { ... } template <class PacketStream> class PacketReader { public: void ReadPackets(PacketStream* stream, sizet packetnum); }; ``` Then you can use `CreateConnection<ConcretePacketStream>()` and `PacketReader<ConcretePacketStream>` in production code, and use `CreateConnection<MockPacketStream>()` and `PacketReader<MockPacketStream>` in tests. ```cpp MockPacketStream mock_stream; EXPECTCALL(mockstream, ...)...; .. set more expectations on mock_stream ... PacketReader<MockPacketStream> reader(&mock_stream); ... exercise reader ... ``` It's possible to use Google Mock to mock a free function (i.e. a C-style function or a static method). You just need to rewrite your code to use an interface (abstract class). Instead of calling a free function (say, `OpenFile`) directly, introduce an interface for it and have a concrete subclass that calls the free function: ```cpp class FileInterface { public: ... virtual bool Open(const char path, const char mode) = 0; }; class File : public FileInterface { public: ... virtual bool Open(const char path, const char mode) { return OpenFile(path, mode); } }; ``` Your code should talk to `FileInterface` to open a file. Now it's easy to mock out the function. This may seem much hassle, but in practice you often have multiple related functions that you can put in the same interface, so the per-function syntactic overhead will be much lower. If you are concerned about the performance overhead incurred by virtual functions, and profiling confirms your concern, you can combine this with the recipe for . If a mock method has no `EXPECT_CALL` spec but is called, Google Mock will print a warning about the \"uninteresting call\". The rationale is: New methods may be added to an interface after a test is written. We shouldn't fail a test just because a method it doesn't know about is called. However, this may also mean there's a bug in the test, so Google Mock shouldn't be silent either. If the user believes these calls are harmless, they can add an `EXPECT_CALL()` to suppress the warning. However, sometimes you may want to suppress all \"uninteresting call\" warnings, while sometimes you may want the opposite, i.e. to treat all of them as errors. Google Mock lets you make the decision on a per-mock-object basis. Suppose your test uses a mock class `MockFoo`: ```cpp TEST(...) { MockFoo mock_foo; EXPECTCALL(mockfoo, DoThis()); ... code that uses mock_foo ... } ``` If a method of `mock_foo` other than `DoThis()` is called, it will be reported by Google Mock as a warning. However, if you rewrite your test to use `NiceMock<MockFoo>` instead, the warning will be gone, resulting in a cleaner test output: ```cpp using ::testing::NiceMock; TEST(...) { NiceMock<MockFoo> mock_foo; EXPECTCALL(mockfoo, DoThis()); ... code that uses mock_foo ... } ``` `NiceMock<MockFoo>` is a subclass of `MockFoo`, so it can be used wherever `MockFoo` is accepted. 
It also works if `MockFoo`'s constructor takes some arguments, as `NiceMock<MockFoo>` \"inherits\" `MockFoo`'s constructors: ```cpp using ::testing::NiceMock; TEST(...) { NiceMock<MockFoo> mock_foo(5, \"hi\"); // Calls MockFoo(5, \"hi\"). EXPECTCALL(mockfoo, DoThis()); ... code that uses mock_foo ... } ``` The usage of `StrictMock` is similar, except that it makes all uninteresting calls failures: ```cpp using ::testing::StrictMock;"
},
{
"data": "{ StrictMock<MockFoo> mock_foo; EXPECTCALL(mockfoo, DoThis()); ... code that uses mock_foo ... // The test will fail if a method of mock_foo other than DoThis() // is called. } ``` There are some caveats though (I don't like them just as much as the next guy, but sadly they are side effects of C++'s limitations): `NiceMock<MockFoo>` and `StrictMock<MockFoo>` only work for mock methods defined using the `MOCK_METHOD` family of macros directly in the `MockFoo` class. If a mock method is defined in a base class of `MockFoo`, the \"nice\" or \"strict\" modifier may not affect it, depending on the compiler. In particular, nesting `NiceMock` and `StrictMock` (e.g. `NiceMock<StrictMock<MockFoo> >`) is not* supported. The constructors of the base mock (`MockFoo`) cannot have arguments passed by non-const reference, which happens to be banned by the . During the constructor or destructor of `MockFoo`, the mock object is not nice or strict. This may cause surprises if the constructor or destructor calls a mock method on `this` object. (This behavior, however, is consistent with C++'s general rule: if a constructor or destructor calls a virtual method of `this` object, that method is treated as non-virtual. In other words, to the base class's constructor or destructor, `this` object behaves like an instance of the base class, not the derived class. This rule is required for safety. Otherwise a base constructor may use members of a derived class before they are initialized, or a base destructor may use members of a derived class after they have been destroyed.) Finally, you should be very cautious about when to use naggy or strict mocks, as they tend to make tests more brittle and harder to maintain. When you refactor your code without changing its externally visible behavior, ideally you should't need to update any tests. If your code interacts with a naggy mock, however, you may start to get spammed with warnings as the result of your change. Worse, if your code interacts with a strict mock, your tests may start to fail and you'll be forced to fix them. Our general recommendation is to use nice mocks (not yet the default) most of the time, use naggy mocks (the current default) when developing or debugging tests, and use strict mocks only as the last resort. Sometimes a method has a long list of arguments that is mostly uninteresting. For example, ```cpp class LogSink { public: ... virtual void send(LogSeverity severity, const char* full_filename, const char* base_filename, int line, const struct tm* tm_time, const char* message, sizet messagelen) = 0; }; ``` This method's argument list is lengthy and hard to work with (let's say that the `message` argument is not even 0-terminated). If we mock it as is, using the mock will be awkward. If, however, we try to simplify this interface, we'll need to fix all clients depending on it, which is often infeasible. The trick is to re-dispatch the method in the mock class: ```cpp class ScopedMockLog : public LogSink { public: ... virtual void send(LogSeverity severity, const char* full_filename, const char base_filename, int line, const tm tm_time, const char* message, sizet messagelen) { // We are only interested in the log severity, full file name, and // log message. 
Log(severity, fullfilename, std::string(message, messagelen)); } // Implements the mock method: // // void Log(LogSeverity severity, // const string& file_path, // const string& message); MOCKMETHOD3(Log, void(LogSeverity severity, const string& filepath, const string& message)); }; ``` By defining a new mock method with a trimmed argument list, we make the mock class much more"
},
{
"data": "Often you may find yourself using classes that don't implement interfaces. In order to test your code that uses such a class (let's call it `Concrete`), you may be tempted to make the methods of `Concrete` virtual and then mock it. Try not to do that. Making a non-virtual function virtual is a big decision. It creates an extension point where subclasses can tweak your class' behavior. This weakens your control on the class because now it's harder to maintain the class' invariants. You should make a function virtual only when there is a valid reason for a subclass to override it. Mocking concrete classes directly is problematic as it creates a tight coupling between the class and the tests - any small change in the class may invalidate your tests and make test maintenance a pain. To avoid such problems, many programmers have been practicing \"coding to interfaces\": instead of talking to the `Concrete` class, your code would define an interface and talk to it. Then you implement that interface as an adaptor on top of `Concrete`. In tests, you can easily mock that interface to observe how your code is doing. This technique incurs some overhead: You pay the cost of virtual function calls (usually not a problem). There is more abstraction for the programmers to learn. However, it can also bring significant benefits in addition to better testability: `Concrete`'s API may not fit your problem domain very well, as you may not be the only client it tries to serve. By designing your own interface, you have a chance to tailor it to your need - you may add higher-level functionalities, rename stuff, etc instead of just trimming the class. This allows you to write your code (user of the interface) in a more natural way, which means it will be more readable, more maintainable, and you'll be more productive. If `Concrete`'s implementation ever has to change, you don't have to rewrite everywhere it is used. Instead, you can absorb the change in your implementation of the interface, and your other code and tests will be insulated from this change. Some people worry that if everyone is practicing this technique, they will end up writing lots of redundant code. This concern is totally understandable. However, there are two reasons why it may not be the case: Different projects may need to use `Concrete` in different ways, so the best interfaces for them will be different. Therefore, each of them will have its own domain-specific interface on top of `Concrete`, and they will not be the same code. If enough projects want to use the same interface, they can always share it, just like they have been sharing `Concrete`. You can check in the interface and the adaptor somewhere near `Concrete` (perhaps in a `contrib` sub-directory) and let many projects use it. You need to weigh the pros and cons carefully for your particular problem, but I'd like to assure you that the Java community has been practicing this for a long time and it's a proven effective technique applicable in a wide variety of situations. :-) Some times you have a non-trivial fake implementation of an"
},
{
"data": "For example: ```cpp class Foo { public: virtual ~Foo() {} virtual char DoThis(int n) = 0; virtual void DoThat(const char s, int p) = 0; }; class FakeFoo : public Foo { public: virtual char DoThis(int n) { return (n > 0) ? '+' : (n < 0) ? '-' : '0'; } virtual void DoThat(const char s, int p) { *p = strlen(s); } }; ``` Now you want to mock this interface such that you can set expectations on it. However, you also want to use `FakeFoo` for the default behavior, as duplicating it in the mock object is, well, a lot of work. When you define the mock class using Google Mock, you can have it delegate its default action to a fake class you already have, using this pattern: ```cpp using ::testing::_; using ::testing::Invoke; class MockFoo : public Foo { public: // Normal mock method definitions using Google Mock. MOCK_METHOD1(DoThis, char(int n)); MOCK_METHOD2(DoThat, void(const char s, int p)); // Delegates the default actions of the methods to a FakeFoo object. // This must be called before the custom ON_CALL() statements. void DelegateToFake() { ONCALL(*this, DoThis()) .WillByDefault(Invoke(&fake_, &FakeFoo::DoThis)); ONCALL(*this, DoThat(, _)) .WillByDefault(Invoke(&fake_, &FakeFoo::DoThat)); } private: FakeFoo fake_; // Keeps an instance of the fake in the mock. }; ``` With that, you can use `MockFoo` in your tests as usual. Just remember that if you don't explicitly set an action in an `ON_CALL()` or `EXPECT_CALL()`, the fake will be called upon to do it: ```cpp using ::testing::_; TEST(AbcTest, Xyz) { MockFoo foo; foo.DelegateToFake(); // Enables the fake for delegation. // Put your ON_CALL(foo, ...)s here, if any. // No action specified, meaning to use the default action. EXPECT_CALL(foo, DoThis(5)); EXPECTCALL(foo, DoThat(, _)); int n = 0; EXPECT_EQ('+', foo.DoThis(5)); // FakeFoo::DoThis() is invoked. foo.DoThat(\"Hi\", &n); // FakeFoo::DoThat() is invoked. EXPECT_EQ(2, n); } ``` Some tips: If you want, you can still override the default action by providing your own `ONCALL()` or using `.WillOnce()` / `.WillRepeatedly()` in `EXPECTCALL()`. In `DelegateToFake()`, you only need to delegate the methods whose fake implementation you intend to use. The general technique discussed here works for overloaded methods, but you'll need to tell the compiler which version you mean. To disambiguate a mock function (the one you specify inside the parentheses of `ON_CALL()`), see the \"Selecting Between Overloaded Functions\" section on this page; to disambiguate a fake function (the one you place inside `Invoke()`), use a `static_cast` to specify the function's type. For instance, if class `Foo` has methods `char DoThis(int n)` and `bool DoThis(double x) const`, and you want to invoke the latter, you need to write `Invoke(&fake_, static_cast<bool (FakeFoo::)(double) const>(&FakeFoo::DoThis))` instead of `Invoke(&fake, &FakeFoo::DoThis)` (The strange-looking thing inside the angled brackets of `staticcast` is the type of a function pointer to the second `DoThis()` method.). Having to mix a mock and a fake is often a sign of something gone wrong. Perhaps you haven't got used to the interaction-based way of testing yet. Or perhaps your interface is taking on too many roles and should be split up. Therefore, don't abuse this*. We would only recommend to do it as an intermediate step when you are refactoring your code. Regarding the tip on mixing a mock and a fake, here's an example on why it may be a bad sign: Suppose you have a class `System` for low-level system operations. 
In particular, it does file and I/O operations. And suppose you want to test how your code uses `System` to do I/O, and you just want the file operations to work normally. If you mock out the entire `System` class, you'll have to provide a fake implementation for the file operation part, which suggests that `System` is taking on too many roles. Instead, you can define a `FileOps` interface and an `IOOps` interface and split `System`'s functionalities into the"
},
{
"data": "Then you can mock `IOOps` without mocking `FileOps`. When using testing doubles (mocks, fakes, stubs, and etc), sometimes their behaviors will differ from those of the real objects. This difference could be either intentional (as in simulating an error such that you can test the error handling code) or unintentional. If your mocks have different behaviors than the real objects by mistake, you could end up with code that passes the tests but fails in production. You can use the delegating-to-real technique to ensure that your mock has the same behavior as the real object while retaining the ability to validate calls. This technique is very similar to the delegating-to-fake technique, the difference being that we use a real object instead of a fake. Here's an example: ```cpp using ::testing::_; using ::testing::AtLeast; using ::testing::Invoke; class MockFoo : public Foo { public: MockFoo() { // By default, all calls are delegated to the real object. ON_CALL(*this, DoThis()) .WillByDefault(Invoke(&real_, &Foo::DoThis)); ONCALL(*this, DoThat()) .WillByDefault(Invoke(&real_, &Foo::DoThat)); ... } MOCK_METHOD0(DoThis, ...); MOCK_METHOD1(DoThat, ...); ... private: Foo real_; }; ... MockFoo mock; EXPECT_CALL(mock, DoThis()) .Times(3); EXPECT_CALL(mock, DoThat(\"Hi\")) .Times(AtLeast(1)); ... use mock in test ... ``` With this, Google Mock will verify that your code made the right calls (with the right arguments, in the right order, called the right number of times, etc), and a real object will answer the calls (so the behavior will be the same as in production). This gives you the best of both worlds. Ideally, you should code to interfaces, whose methods are all pure virtual. In reality, sometimes you do need to mock a virtual method that is not pure (i.e, it already has an implementation). For example: ```cpp class Foo { public: virtual ~Foo(); virtual void Pure(int n) = 0; virtual int Concrete(const char* str) { ... } }; class MockFoo : public Foo { public: // Mocking a pure method. MOCK_METHOD1(Pure, void(int n)); // Mocking a concrete method. Foo::Concrete() is shadowed. MOCK_METHOD1(Concrete, int(const char* str)); }; ``` Sometimes you may want to call `Foo::Concrete()` instead of `MockFoo::Concrete()`. Perhaps you want to do it as part of a stub action, or perhaps your test doesn't need to mock `Concrete()` at all (but it would be oh-so painful to have to define a new mock class whenever you don't need to mock one of its methods). The trick is to leave a back door in your mock class for accessing the real methods in the base class: ```cpp class MockFoo : public Foo { public: // Mocking a pure method. MOCK_METHOD1(Pure, void(int n)); // Mocking a concrete method. Foo::Concrete() is shadowed. MOCK_METHOD1(Concrete, int(const char* str)); // Use this to call Concrete() defined in Foo. int FooConcrete(const char* str) { return Foo::Concrete(str); } }; ``` Now, you can call `Foo::Concrete()` inside an action by: ```cpp using ::testing::_; using ::testing::Invoke; ... EXPECTCALL(foo, Concrete()) .WillOnce(Invoke(&foo, &MockFoo::FooConcrete)); ``` or tell the mock object that you don't want to mock `Concrete()`: ```cpp using ::testing::Invoke; ... ONCALL(foo, Concrete()) .WillByDefault(Invoke(&foo, &MockFoo::FooConcrete)); ``` (Why don't we just write `Invoke(&foo, &Foo::Concrete)`? If you do that, `MockFoo::Concrete()` will be called (and cause an infinite recursion) since `Foo::Concrete()` is virtual. That's just how C++ works.) 
You can specify exactly which arguments a mock method is expecting: ```cpp using ::testing::Return; ... EXPECT_CALL(foo, DoThis(5)) .WillOnce(Return('a')); EXPECT_CALL(foo, DoThat(\"Hello\", bar)); ``` You can use matchers to match arguments that have a certain property: ```cpp using ::testing::Ge; using ::testing::NotNull; using ::testing::Return; ... EXPECT_CALL(foo, DoThis(Ge(5))) // The argument must be >= 5. .WillOnce(Return('a')); EXPECT_CALL(foo, DoThat(\"Hello\", NotNull())); // The second argument must not be"
},
{
"data": "``` A frequently used matcher is `_`, which matches anything: ```cpp using ::testing::_; using ::testing::NotNull; ... EXPECTCALL(foo, DoThat(, NotNull())); ``` You can build complex matchers from existing ones using `AllOf()`, `AnyOf()`, and `Not()`: ```cpp using ::testing::AllOf; using ::testing::Gt; using ::testing::HasSubstr; using ::testing::Ne; using ::testing::Not; ... // The argument must be > 5 and != 10. EXPECT_CALL(foo, DoThis(AllOf(Gt(5), Ne(10)))); // The first argument must not contain sub-string \"blah\". EXPECT_CALL(foo, DoThat(Not(HasSubstr(\"blah\")), NULL)); ``` Google Mock matchers are statically typed, meaning that the compiler can catch your mistake if you use a matcher of the wrong type (for example, if you use `Eq(5)` to match a `string` argument). Good for you! Sometimes, however, you know what you're doing and want the compiler to give you some slack. One example is that you have a matcher for `long` and the argument you want to match is `int`. While the two types aren't exactly the same, there is nothing really wrong with using a `Matcher<long>` to match an `int` - after all, we can first convert the `int` argument to a `long` before giving it to the matcher. To support this need, Google Mock gives you the `SafeMatcherCast<T>(m)` function. It casts a matcher `m` to type `Matcher<T>`. To ensure safety, Google Mock checks that (let `U` be the type `m` accepts): Type `T` can be implicitly cast to type `U`; When both `T` and `U` are built-in arithmetic types (`bool`, integers, and floating-point numbers), the conversion from `T` to `U` is not lossy (in other words, any value representable by `T` can also be represented by `U`); and When `U` is a reference, `T` must also be a reference (as the underlying matcher may be interested in the address of the `U` value). The code won't compile if any of these conditions aren't met. Here's one example: ```cpp using ::testing::SafeMatcherCast; // A base class and a child class. class Base { ... }; class Derived : public Base { ... }; class MockFoo : public Foo { public: MOCK_METHOD1(DoThis, void(Derived* derived)); }; ... MockFoo foo; // m is a Matcher<Base*> we got from somewhere. EXPECT_CALL(foo, DoThis(SafeMatcherCast<Derived*>(m))); ``` If you find `SafeMatcherCast<T>(m)` too limiting, you can use a similar function `MatcherCast<T>(m)`. The difference is that `MatcherCast` works as long as you can `static_cast` type `T` to type `U`. `MatcherCast` essentially lets you bypass C++'s type system (`static_cast` isn't always safe as it could throw away information, for example), so be careful not to misuse/abuse it. If you expect an overloaded function to be called, the compiler may need some help on which overloaded version it is. To disambiguate functions overloaded on the const-ness of this object, use the `Const()` argument wrapper. ```cpp using ::testing::ReturnRef; class MockFoo : public Foo { ... MOCK_METHOD0(GetBar, Bar&()); MOCKCONSTMETHOD0(GetBar, const Bar&()); }; ... MockFoo foo; Bar bar1, bar2; EXPECT_CALL(foo, GetBar()) // The non-const GetBar(). .WillOnce(ReturnRef(bar1)); EXPECT_CALL(Const(foo), GetBar()) // The const GetBar(). .WillOnce(ReturnRef(bar2)); ``` (`Const()` is defined by Google Mock and returns a `const` reference to its argument.) 
To disambiguate overloaded functions with the same number of arguments but different argument types, you may need to specify the exact type of a matcher, either by wrapping your matcher in `Matcher<type>()`, or using a matcher whose type is fixed (`TypedEq<type>`, `An<type>()`, etc): ```cpp using ::testing::An; using ::testing::Lt; using ::testing::Matcher; using ::testing::TypedEq; class MockPrinter : public Printer { public: MOCK_METHOD1(Print, void(int n)); MOCK_METHOD1(Print, void(char c)); }; TEST(PrinterTest, Print) { MockPrinter printer; EXPECT_CALL(printer, Print(An<int>())); // void Print(int); EXPECT_CALL(printer, Print(Matcher<int>(Lt(5)))); // void Print(int); EXPECT_CALL(printer, Print(TypedEq<char>('a'))); // void Print(char); printer.Print(3); printer.Print(6);"
},
{
"data": "} ``` When a mock method is called, the last matching expectation that's still active will be selected (think \"newer overrides older\"). So, you can make a method do different things depending on its argument values like this: ```cpp using ::testing::_; using ::testing::Lt; using ::testing::Return; ... // The default case. EXPECTCALL(foo, DoThis()) .WillRepeatedly(Return('b')); // The more specific case. EXPECT_CALL(foo, DoThis(Lt(5))) .WillRepeatedly(Return('a')); ``` Now, if `foo.DoThis()` is called with a value less than 5, `'a'` will be returned; otherwise `'b'` will be returned. Sometimes it's not enough to match the arguments individually. For example, we may want to say that the first argument must be less than the second argument. The `With()` clause allows us to match all arguments of a mock function as a whole. For example, ```cpp using ::testing::_; using ::testing::Lt; using ::testing::Ne; ... EXPECTCALL(foo, InRange(Ne(0), )) .With(Lt()); ``` says that the first argument of `InRange()` must not be 0, and must be less than the second argument. The expression inside `With()` must be a matcher of type `Matcher< ::testing::tuple<A1, ..., An> >`, where `A1`, ..., `An` are the types of the function arguments. You can also write `AllArgs(m)` instead of `m` inside `.With()`. The two forms are equivalent, but `.With(AllArgs(Lt()))` is more readable than `.With(Lt())`. You can use `Args<k1, ..., kn>(m)` to match the `n` selected arguments (as a tuple) against `m`. For example, ```cpp using ::testing::_; using ::testing::AllOf; using ::testing::Args; using ::testing::Lt; ... EXPECTCALL(foo, Blah(, , )) .With(AllOf(Args<0, 1>(Lt()), Args<1, 2>(Lt()))); ``` says that `Blah()` will be called with arguments `x`, `y`, and `z` where `x < y < z`. As a convenience and example, Google Mock provides some matchers for 2-tuples, including the `Lt()` matcher above. See the for the complete list. Note that if you want to pass the arguments to a predicate of your own (e.g. `.With(Args<0, 1>(Truly(&MyPredicate)))`), that predicate MUST be written to take a `::testing::tuple` as its argument; Google Mock will pass the `n` selected arguments as one single tuple to the predicate. Have you noticed that a matcher is just a fancy predicate that also knows how to describe itself? Many existing algorithms take predicates as arguments (e.g. those defined in STL's `<algorithm>` header), and it would be a shame if Google Mock matchers are not allowed to participate. Luckily, you can use a matcher where a unary predicate functor is expected by wrapping it inside the `Matches()` function. For example, ```cpp std::vector<int> v; ... // How many elements in v are >= 10? const int count = count_if(v.begin(), v.end(), Matches(Ge(10))); ``` Since you can build complex matchers from simpler ones easily using Google Mock, this gives you a way to conveniently construct composite predicates (doing the same using STL's `<functional>` header is just painful). For example, here's a predicate that's satisfied by any number that is >= 0, <= 100, and != 50: ```cpp Matches(AllOf(Ge(0), Le(100), Ne(50))) ``` Since matchers are basically predicates that also know how to describe themselves, there is a way to take advantage of them in assertions. It's called `ASSERTTHAT` and `EXPECTTHAT`: ```cpp ASSERT_THAT(value, matcher); // Asserts that value matches matcher. EXPECT_THAT(value, matcher); // The non-fatal version. 
``` For example, in a Google Test test you can write: ```cpp using ::testing::AllOf; using ::testing::Ge; using ::testing::Le; using ::testing::MatchesRegex; using ::testing::StartsWith; ... EXPECT_THAT(Foo(), StartsWith(\"Hello\")); EXPECT_THAT(Bar(), MatchesRegex(\"Line \\\\d+\")); ASSERT_THAT(Baz(), AllOf(Ge(5), Le(10))); ``` which (as you can probably guess) executes `Foo()`, `Bar()`, and `Baz()`, and verifies that: `Foo()` returns a string that starts with `\"Hello\"`. `Bar()` returns a string that matches regular expression `\"Line \\\\d+\"`. `Baz()` returns a number in the range [5, 10].
},
{
"data": "The nice thing about these macros is that _they read like English_. They generate informative messages too. For example, if the first `EXPECT_THAT()` above fails, the message will be something like: ``` Value of: Foo() Actual: \"Hi, world!\" Expected: starts with \"Hello\" ``` Credit: The idea of `(ASSERT|EXPECT)_THAT` was stolen from the project, which adds `assertThat()` to JUnit. Google Mock provides a built-in set of matchers. In case you find them lacking, you can use an arbitray unary predicate function or functor as a matcher - as long as the predicate accepts a value of the type you want. You do this by wrapping the predicate inside the `Truly()` function, for example: ```cpp using ::testing::Truly; int IsEven(int n) { return (n % 2) == 0 ? 1 : 0; } ... // Bar() must be called with an even number. EXPECT_CALL(foo, Bar(Truly(IsEven))); ``` Note that the predicate function / functor doesn't have to return `bool`. It works as long as the return value can be used as the condition in statement `if (condition) ...`. When you do an `EXPECTCALL(mockobj, Foo(bar))`, Google Mock saves away a copy of `bar`. When `Foo()` is called later, Google Mock compares the argument to `Foo()` with the saved copy of `bar`. This way, you don't need to worry about `bar` being modified or destroyed after the `EXPECT_CALL()` is executed. The same is true when you use matchers like `Eq(bar)`, `Le(bar)`, and so on. But what if `bar` cannot be copied (i.e. has no copy constructor)? You could define your own matcher function and use it with `Truly()`, as the previous couple of recipes have shown. Or, you may be able to get away from it if you can guarantee that `bar` won't be changed after the `EXPECT_CALL()` is executed. Just tell Google Mock that it should save a reference to `bar`, instead of a copy of it. Here's how: ```cpp using ::testing::Eq; using ::testing::ByRef; using ::testing::Lt; ... // Expects that Foo()'s argument == bar. EXPECTCALL(mockobj, Foo(Eq(ByRef(bar)))); // Expects that Foo()'s argument < bar. EXPECTCALL(mockobj, Foo(Lt(ByRef(bar)))); ``` Remember: if you do this, don't change `bar` after the `EXPECT_CALL()`, or the result is undefined. Often a mock function takes a reference to object as an argument. When matching the argument, you may not want to compare the entire object against a fixed object, as that may be over-specification. Instead, you may need to validate a certain member variable or the result of a certain getter method of the object. You can do this with `Field()` and `Property()`. More specifically, ```cpp Field(&Foo::bar, m) ``` is a matcher that matches a `Foo` object whose `bar` member variable satisfies matcher `m`. ```cpp Property(&Foo::baz, m) ``` is a matcher that matches a `Foo` object whose `baz()` method returns a value that satisfies matcher `m`. For example: | Expression | Description | |:--|:--| | `Field(&Foo::number, Ge(3))` | Matches `x` where `x.number >= 3`. | | `Property(&Foo::name, StartsWith(\"John \"))` | Matches `x` where `x.name()` starts with `\"John \"`. | Note that in `Property(&Foo::baz, ...)`, method `baz()` must take no argument and be declared as `const`. BTW, `Field()` and `Property()` can also match plain pointers to objects. For instance, ```cpp Field(&Foo::number, Ge(3)) ``` matches a plain pointer `p` where `p->number >= 3`. If `p` is `NULL`, the match will always fail regardless of the inner matcher. What if you want to validate more than one members at the same time? Remember that there is `AllOf()`. 
C++ functions often take pointers as arguments.
},
{
"data": "You can use matchers like `IsNull()`, `NotNull()`, and other comparison matchers to match a pointer, but what if you want to make sure the value pointed to by the pointer, instead of the pointer itself, has a certain property? Well, you can use the `Pointee(m)` matcher. `Pointee(m)` matches a pointer iff `m` matches the value the pointer points to. For example: ```cpp using ::testing::Ge; using ::testing::Pointee; ... EXPECT_CALL(foo, Bar(Pointee(Ge(3)))); ``` expects `foo.Bar()` to be called with a pointer that points to a value greater than or equal to 3. One nice thing about `Pointee()` is that it treats a `NULL` pointer as a match failure, so you can write `Pointee(m)` instead of ```cpp AllOf(NotNull(), Pointee(m)) ``` without worrying that a `NULL` pointer will crash your test. Also, did we tell you that `Pointee()` works with both raw pointers and smart pointers (`linkedptr`, `sharedptr`, `scoped_ptr`, and etc)? What if you have a pointer to pointer? You guessed it - you can use nested `Pointee()` to probe deeper inside the value. For example, `Pointee(Pointee(Lt(3)))` matches a pointer that points to a pointer that points to a number less than 3 (what a mouthful...). Sometimes you want to specify that an object argument has a certain property, but there is no existing matcher that does this. If you want good error messages, you should define a matcher. If you want to do it quick and dirty, you could get away with writing an ordinary function. Let's say you have a mock function that takes an object of type `Foo`, which has an `int bar()` method and an `int baz()` method, and you want to constrain that the argument's `bar()` value plus its `baz()` value is a given number. Here's how you can define a matcher to do it: ```cpp using ::testing::MatcherInterface; using ::testing::MatchResultListener; class BarPlusBazEqMatcher : public MatcherInterface<const Foo&> { public: explicit BarPlusBazEqMatcher(int expected_sum) : expectedsum(expected_sum) {} virtual bool MatchAndExplain(const Foo& foo, MatchResultListener* listener) const { return (foo.bar() + foo.baz()) == expectedsum; } virtual void DescribeTo(::std::ostream* os) const { *os << \"bar() + baz() equals \" << expectedsum; } virtual void DescribeNegationTo(::std::ostream* os) const { *os << \"bar() + baz() does not equal \" << expectedsum; } private: const int expectedsum; }; inline Matcher<const Foo&> BarPlusBazEq(int expected_sum) { return MakeMatcher(new BarPlusBazEqMatcher(expected_sum)); } ... EXPECT_CALL(..., DoThis(BarPlusBazEq(5)))...; ``` Sometimes an STL container (e.g. list, vector, map, ...) is passed to a mock function and you may want to validate it. Since most STL containers support the `==` operator, you can write `Eq(expectedcontainer)` or simply `expectedcontainer` to match a container exactly. Sometimes, though, you may want to be more flexible (for example, the first element must be an exact match, but the second element can be any positive number, and so on). Also, containers used in tests often have a small number of elements, and having to define the expected container out-of-line is a bit of a hassle. You can use the `ElementsAre()` or `UnorderedElementsAre()` matcher in such cases: ```cpp using ::testing::_; using ::testing::ElementsAre; using ::testing::Gt; ... MOCK_METHOD1(Foo, void(const vector<int>& numbers)); ... EXPECTCALL(mock, Foo(ElementsAre(1, Gt(0), , 5))); ``` The above matcher says that the container must have 4 elements, which must be 1, greater than 0, anything, and 5 respectively. 
If you instead write: ```cpp using ::testing::_; using ::testing::Gt; using ::testing::UnorderedElementsAre; ... MOCK_METHOD1(Foo, void(const vector<int>& numbers)); ... EXPECT_CALL(mock, Foo(UnorderedElementsAre(1, Gt(0), _, 5))); ``` It means that the container must have 4 elements, which under some permutation must be 1, greater than 0, anything, and 5 respectively. `ElementsAre()` and `UnorderedElementsAre()` are overloaded to take 0 to 10 arguments.
},
{
"data": "If more are needed, you can place them in a C-style array and use `ElementsAreArray()` or `UnorderedElementsAreArray()` instead: ```cpp using ::testing::ElementsAreArray; ... // ElementsAreArray accepts an array of element values. const int expected_vector1[] = { 1, 5, 2, 4, ... }; EXPECTCALL(mock, Foo(ElementsAreArray(expectedvector1))); // Or, an array of element matchers. Matcher<int> expectedvector2 = { 1, Gt(2), , 3, ... }; EXPECTCALL(mock, Foo(ElementsAreArray(expectedvector2))); ``` In case the array needs to be dynamically created (and therefore the array size cannot be inferred by the compiler), you can give `ElementsAreArray()` an additional argument to specify the array size: ```cpp using ::testing::ElementsAreArray; ... int* const expected_vector3 = new int[count]; ... fill expected_vector3 with values ... EXPECTCALL(mock, Foo(ElementsAreArray(expectedvector3, count))); ``` Tips: `ElementsAre()` can be used to match any container that implements the STL iterator pattern (i.e. it has a `const_iterator` type and supports `begin()/end()`), not just the ones defined in STL. It will even work with container types yet to be written - as long as they follows the above pattern. You can use nested `ElementsAre()` to match nested (multi-dimensional) containers. If the container is passed by pointer instead of by reference, just write `Pointee(ElementsAre(...))`. The order of elements _matters_ for `ElementsAre()`. Therefore don't use it with containers whose element order is undefined (e.g. `hash_map`). Under the hood, a Google Mock matcher object consists of a pointer to a ref-counted implementation object. Copying matchers is allowed and very efficient, as only the pointer is copied. When the last matcher that references the implementation object dies, the implementation object will be deleted. Therefore, if you have some complex matcher that you want to use again and again, there is no need to build it every time. Just assign it to a matcher variable and use that variable repeatedly! For example, ```cpp Matcher<int> in_range = AllOf(Gt(5), Le(10)); ... use inrange as a matcher in multiple EXPECTCALLs ... ``` `ON_CALL` is likely the single most under-utilized construct in Google Mock. There are basically two constructs for defining the behavior of a mock object: `ONCALL` and `EXPECTCALL`. The difference? `ONCALL` defines what happens when a mock method is called, but doesn't imply any expectation on the method being called. `EXPECTCALL` not only defines the behavior, but also sets an expectation that the method will be called with the given arguments, for the given number of times (and in the given order when you specify the order too). Since `EXPECTCALL` does more, isn't it better than `ONCALL`? Not really. Every `EXPECTCALL` adds a constraint on the behavior of the code under test. Having more constraints than necessary is baaad_ - even worse than not having enough constraints. This may be counter-intuitive. How could tests that verify more be worse than tests that verify less? Isn't verification the whole point of tests? The answer, lies in what a test should verify. A good test verifies the contract of the code. If a test over-specifies, it doesn't leave enough freedom to the implementation. As a result, changing the implementation without breaking the contract (e.g. refactoring and optimization), which should be perfectly fine to do, can break such tests. Then you have to spend time fixing them, only to see them broken again the next time the implementation is changed. 
Keep in mind that one doesn't have to verify more than one property in one test. In fact, it's a good style to verify only one thing in one test. If you do that, a bug will likely break only one or two tests instead of dozens (which case would you rather have?).
},
{
"data": "If you are also in the habit of giving tests descriptive names that tell what they verify, you can often easily guess what's wrong just from the test log itself. So use `ONCALL` by default, and only use `EXPECTCALL` when you actually intend to verify that the call is made. For example, you may have a bunch of `ONCALL`s in your test fixture to set the common mock behavior shared by all tests in the same group, and write (scarcely) different `EXPECTCALL`s in different `TESTF`s to verify different aspects of the code's behavior. Compared with the style where each `TEST` has many `EXPECTCALL`s, this leads to tests that are more resilient to implementational changes (and thus less likely to require maintenance) and makes the intent of the tests more obvious (so they are easier to maintain when you do need to maintain them). If you are bothered by the \"Uninteresting mock function call\" message printed when a mock method without an `EXPECTCALL` is called, you may use a `NiceMock` instead to suppress all such messages for the mock object, or suppress the message for specific methods by adding `EXPECTCALL(...).Times(AnyNumber())`. DO NOT suppress it by blindly adding an `EXPECT_CALL(...)`, or you'll have a test that's a pain to maintain. If you are not interested in how a mock method is called, just don't say anything about it. In this case, if the method is ever called, Google Mock will perform its default action to allow the test program to continue. If you are not happy with the default action taken by Google Mock, you can override it using `DefaultValue<T>::Set()` (described later in this document) or `ON_CALL()`. Please note that once you expressed interest in a particular mock method (via `EXPECT_CALL()`), all invocations to it must match some expectation. If this function is called but the arguments don't match any `EXPECT_CALL()` statement, it will be an error. If a mock method shouldn't be called at all, explicitly say so: ```cpp using ::testing::_; ... EXPECTCALL(foo, Bar()) .Times(0); ``` If some calls to the method are allowed, but the rest are not, just list all the expected calls: ```cpp using ::testing::AnyNumber; using ::testing::Gt; ... EXPECT_CALL(foo, Bar(5)); EXPECT_CALL(foo, Bar(Gt(10))) .Times(AnyNumber()); ``` A call to `foo.Bar()` that doesn't match any of the `EXPECT_CALL()` statements will be an error. Uninteresting calls and unexpected calls are different concepts in Google Mock. Very different. A call `x.Y(...)` is uninteresting if there's not even a single `EXPECT_CALL(x, Y(...))` set. In other words, the test isn't interested in the `x.Y()` method at all, as evident in that the test doesn't care to say anything about it. A call `x.Y(...)` is unexpected if there are some `EXPECTCALL(x, Y(...))s` set, but none of them matches the call. Put another way, the test is interested in the `x.Y()` method (therefore it explicitly sets some `EXPECTCALL` to verify how it's called); however, the verification fails as the test doesn't expect this particular call to happen. An unexpected call is always an error, as the code under test doesn't behave the way the test expects it to behave. By default, an uninteresting call is not an error, as it violates no constraint specified by the test. (Google Mock's philosophy is that saying nothing means there is no constraint.) However, it leads to a warning, as it might indicate a problem (e.g. the test author might have forgotten to specify a constraint). 
In Google Mock, `NiceMock` and `StrictMock` can be used to make a mock class \"nice\" or \"strict\".
},
{
"data": "How does this affect uninteresting calls and unexpected calls? A nice mock suppresses uninteresting call warnings. It is less chatty than the default mock, but otherwise is the same. If a test fails with a default mock, it will also fail using a nice mock instead. And vice versa. Don't expect making a mock nice to change the test's result. A strict mock turns uninteresting call warnings into errors. So making a mock strict may change the test's result. Let's look at an example: ```cpp TEST(...) { NiceMock<MockDomainRegistry> mock_registry; EXPECTCALL(mockregistry, GetDomainOwner(\"google.com\")) .WillRepeatedly(Return(\"Larry Page\")); // Use mock_registry in code under test. ... &mock_registry ... } ``` The sole `EXPECT_CALL` here says that all calls to `GetDomainOwner()` must have `\"google.com\"` as the argument. If `GetDomainOwner(\"yahoo.com\")` is called, it will be an unexpected call, and thus an error. Having a nice mock doesn't change the severity of an unexpected call. So how do we tell Google Mock that `GetDomainOwner()` can be called with some other arguments as well? The standard technique is to add a \"catch all\" `EXPECT_CALL`: ```cpp EXPECTCALL(mockregistry, GetDomainOwner(_)) .Times(AnyNumber()); // catches all other calls to this method. EXPECTCALL(mockregistry, GetDomainOwner(\"google.com\")) .WillRepeatedly(Return(\"Larry Page\")); ``` Remember that `` is the wildcard matcher that matches anything. With this, if `GetDomainOwner(\"google.com\")` is called, it will do what the second `EXPECTCALL` says; if it is called with a different argument, it will do what the first `EXPECT_CALL` says. Note that the order of the two `EXPECTCALLs` is important, as a newer `EXPECTCALL` takes precedence over an older one. For more on uninteresting calls, nice mocks, and strict mocks, read . Although an `EXPECT_CALL()` statement defined earlier takes precedence when Google Mock tries to match a function call with an expectation, by default calls don't have to happen in the order `EXPECT_CALL()` statements are written. For example, if the arguments match the matchers in the third `EXPECT_CALL()`, but not those in the first two, then the third expectation will be used. If you would rather have all calls occur in the order of the expectations, put the `EXPECT_CALL()` statements in a block where you define a variable of type `InSequence`: ```cpp using ::testing::_; using ::testing::InSequence; { InSequence s; EXPECT_CALL(foo, DoThis(5)); EXPECTCALL(bar, DoThat()) .Times(2); EXPECT_CALL(foo, DoThis(6)); } ``` In this example, we expect a call to `foo.DoThis(5)`, followed by two calls to `bar.DoThat()` where the argument can be anything, which are in turn followed by a call to `foo.DoThis(6)`. If a call occurred out-of-order, Google Mock will report an error. Sometimes requiring everything to occur in a predetermined order can lead to brittle tests. For example, we may care about `A` occurring before both `B` and `C`, but aren't interested in the relative order of `B` and `C`. In this case, the test should reflect our real intent, instead of being overly constraining. Google Mock allows you to impose an arbitrary DAG (directed acyclic graph) on the calls. One way to express the DAG is to use the clause of `EXPECT_CALL`. Another way is via the `InSequence()` clause (not the same as the `InSequence` class), which we borrowed from jMock 2. 
It's less flexible than `After()`, but more convenient when you have long chains of sequential calls, as it doesn't require you to come up with different names for the expectations in the chains. Here's how it works: If we view `EXPECT_CALL()` statements as nodes in a graph, and add an edge from node A to node B wherever A must occur before B, we can get a DAG. We use the term \"sequence\" to mean a directed path in this graph.
},
{
"data": "Now, if we decompose the DAG into sequences, we just need to know which sequences each `EXPECT_CALL()` belongs to in order to be able to reconstruct the original DAG. So, to specify the partial order on the expectations we need to do two things: first to define some `Sequence` objects, and then for each `EXPECT_CALL()` say which `Sequence` objects it is part of. Expectations in the same sequence must occur in the order they are written. For example, ```cpp using ::testing::Sequence; Sequence s1, s2; EXPECT_CALL(foo, A()) .InSequence(s1, s2); EXPECT_CALL(bar, B()) .InSequence(s1); EXPECT_CALL(bar, C()) .InSequence(s2); EXPECT_CALL(foo, D()) .InSequence(s2); ``` specifies the following DAG (where `s1` is `A -> B`, and `s2` is `A -> C -> D`): ``` +> B | A | | +> C > D ``` This means that A must occur before B and C, and C must occur before D. There's no restriction about the order other than these. When a mock method is called, Google Mock only consider expectations that are still active. An expectation is active when created, and becomes inactive (aka retires) when a call that has to occur later has occurred. For example, in ```cpp using ::testing::_; using ::testing::Sequence; Sequence s1, s2; EXPECTCALL(log, Log(WARNING, , \"File too large.\")) // #1 .Times(AnyNumber()) .InSequence(s1, s2); EXPECTCALL(log, Log(WARNING, , \"Data set is empty.\")) // #2 .InSequence(s1); EXPECTCALL(log, Log(WARNING, , \"User not found.\")) // #3 .InSequence(s2); ``` as soon as either #2 or #3 is matched, #1 will retire. If a warning `\"File too large.\"` is logged after this, it will be an error. Note that an expectation doesn't retire automatically when it's saturated. For example, ```cpp using ::testing::_; ... EXPECTCALL(log, Log(WARNING, , _)); // #1 EXPECTCALL(log, Log(WARNING, , \"File too large.\")); // #2 ``` says that there will be exactly one warning with the message `\"File too large.\"`. If the second warning contains this message too, #2 will match again and result in an upper-bound-violated error. If this is not what you want, you can ask an expectation to retire as soon as it becomes saturated: ```cpp using ::testing::_; ... EXPECTCALL(log, Log(WARNING, , _)); // #1 EXPECTCALL(log, Log(WARNING, , \"File too large.\")) // #2 .RetiresOnSaturation(); ``` Here #2 can be used only once, so if you have two warnings with the message `\"File too large.\"`, the first will match #2 and the second will match #1 - there will be no error. If a mock function's return type is a reference, you need to use `ReturnRef()` instead of `Return()` to return a result: ```cpp using ::testing::ReturnRef; class MockFoo : public Foo { public: MOCK_METHOD0(GetBar, Bar&()); }; ... MockFoo foo; Bar bar; EXPECT_CALL(foo, GetBar()) .WillOnce(ReturnRef(bar)); ``` The `Return(x)` action saves a copy of `x` when the action is created, and always returns the same value whenever it's executed. Sometimes you may want to instead return the live value of `x` (i.e. its value at the time when the action is executed.). If the mock function's return type is a reference, you can do it using `ReturnRef(x)`, as shown in the previous recipe (\"Returning References from Mock Methods\"). However, Google Mock doesn't let you use `ReturnRef()` in a mock function whose return type is not a reference, as doing that usually indicates a user error. So, what shall you do? You may be tempted to try `ByRef()`: ```cpp using testing::ByRef; using testing::Return; class MockFoo : public Foo { public: MOCK_METHOD0(GetValue, int()); }; ... 
int x = 0; MockFoo foo; EXPECT_CALL(foo, GetValue()) .WillRepeatedly(Return(ByRef(x))); x = 42; EXPECT_EQ(42, foo.GetValue()); ``` Unfortunately, it doesn't work here.
},
{
"data": "The above code will fail with error: ``` Value of: foo.GetValue() Actual: 0 Expected: 42 ``` The reason is that `Return(value)` converts `value` to the actual return type of the mock function at the time when the action is created, not when it is executed. (This behavior was chosen for the action to be safe when `value` is a proxy object that references some temporary objects.) As a result, `ByRef(x)` is converted to an `int` value (instead of a `const int&`) when the expectation is set, and `Return(ByRef(x))` will always return 0. `ReturnPointee(pointer)` was provided to solve this problem specifically. It returns the value pointed to by `pointer` at the time the action is executed: ```cpp using testing::ReturnPointee; ... int x = 0; MockFoo foo; EXPECT_CALL(foo, GetValue()) .WillRepeatedly(ReturnPointee(&x)); // Note the & here. x = 42; EXPECT_EQ(42, foo.GetValue()); // This will succeed now. ``` Want to do more than one thing when a function is called? That's fine. `DoAll()` allow you to do sequence of actions every time. Only the return value of the last action in the sequence will be used. ```cpp using ::testing::DoAll; class MockFoo : public Foo { public: MOCK_METHOD1(Bar, bool(int n)); }; ... EXPECTCALL(foo, Bar()) .WillOnce(DoAll(action_1, action_2, ... action_n)); ``` Sometimes a method exhibits its effect not via returning a value but via side effects. For example, it may change some global state or modify an output argument. To mock side effects, in general you can define your own action by implementing `::testing::ActionInterface`. If all you need to do is to change an output argument, the built-in `SetArgPointee()` action is convenient: ```cpp using ::testing::SetArgPointee; class MockMutator : public Mutator { public: MOCK_METHOD2(Mutate, void(bool mutate, int* value)); ... }; ... MockMutator mutator; EXPECTCALL(mutator, Mutate(true, )) .WillOnce(SetArgPointee<1>(5)); ``` In this example, when `mutator.Mutate()` is called, we will assign 5 to the `int` variable pointed to by argument #1 (0-based). `SetArgPointee()` conveniently makes an internal copy of the value you pass to it, removing the need to keep the value in scope and alive. The implication however is that the value must have a copy constructor and assignment operator. If the mock method also needs to return a value as well, you can chain `SetArgPointee()` with `Return()` using `DoAll()`: ```cpp using ::testing::_; using ::testing::Return; using ::testing::SetArgPointee; class MockMutator : public Mutator { public: ... MOCK_METHOD1(MutateInt, bool(int* value)); }; ... MockMutator mutator; EXPECTCALL(mutator, MutateInt()) .WillOnce(DoAll(SetArgPointee<0>(5), Return(true))); ``` If the output argument is an array, use the `SetArrayArgument<N>(first, last)` action instead. It copies the elements in source range `[first, last)` to the array pointed to by the `N`-th (0-based) argument: ```cpp using ::testing::NotNull; using ::testing::SetArrayArgument; class MockArrayMutator : public ArrayMutator { public: MOCKMETHOD2(Mutate, void(int* values, int numvalues)); ... }; ... MockArrayMutator mutator; int values[5] = { 1, 2, 3, 4, 5 }; EXPECT_CALL(mutator, Mutate(NotNull(), 5)) .WillOnce(SetArrayArgument<0>(values, values + 5)); ``` This also works when the argument is an output iterator: ```cpp using ::testing::_; using ::testing::SetArrayArgument; class MockRolodex : public Rolodex { public: MOCKMETHOD1(GetNames, void(std::backinsert_iterator<vector<string> >)); ... }; ... 
MockRolodex rolodex; vector<string> names; names.push_back(\"George\"); names.push_back(\"John\"); names.push_back(\"Thomas\"); EXPECT_CALL(rolodex, GetNames(_)) .WillOnce(SetArrayArgument<0>(names.begin(), names.end())); ``` If you expect a call to change the behavior of a mock object, you can use `::testing::InSequence` to specify different behaviors before and after the call: ```cpp using ::testing::InSequence; using ::testing::Return; ... { InSequence seq; EXPECT_CALL(my_mock, IsDirty()) .WillRepeatedly(Return(true)); EXPECT_CALL(my_mock, Flush()); EXPECT_CALL(my_mock, IsDirty()) .WillRepeatedly(Return(false)); } my_mock.FlushIfDirty(); ``` This makes `my_mock.IsDirty()` return `true` before `my_mock.Flush()` is called and return `false` afterwards.
},
{
"data": "If the behavior change is more complex, you can store the effects in a variable and make a mock method get its return value from that variable: ```cpp using ::testing::_; using ::testing::SaveArg; using ::testing::Return; ACTION_P(ReturnPointee, p) { return *p; } ... int previous_value = 0; EXPECTCALL(mymock, GetPrevValue()) .WillRepeatedly(ReturnPointee(&previous_value)); EXPECTCALL(mymock, UpdateValue(_)) .WillRepeatedly(SaveArg<0>(&previous_value)); my_mock.DoSomethingToUpdateValue(); ``` Here `my_mock.GetPrevValue()` will always return the argument of the last `UpdateValue()` call. If a mock method's return type is a built-in C++ type or pointer, by default it will return 0 when invoked. Also, in C++ 11 and above, a mock method whose return type has a default constructor will return a default-constructed value by default. You only need to specify an action if this default value doesn't work for you. Sometimes, you may want to change this default value, or you may want to specify a default value for types Google Mock doesn't know about. You can do this using the `::testing::DefaultValue` class template: ```cpp class MockFoo : public Foo { public: MOCK_METHOD0(CalculateBar, Bar()); }; ... Bar default_bar; // Sets the default return value for type Bar. DefaultValue<Bar>::Set(default_bar); MockFoo foo; // We don't need to specify an action here, as the default // return value works for us. EXPECT_CALL(foo, CalculateBar()); foo.CalculateBar(); // This should return default_bar. // Unsets the default return value. DefaultValue<Bar>::Clear(); ``` Please note that changing the default value for a type can make you tests hard to understand. We recommend you to use this feature judiciously. For example, you may want to make sure the `Set()` and `Clear()` calls are right next to the code that uses your mock. You've learned how to change the default value of a given type. However, this may be too coarse for your purpose: perhaps you have two mock methods with the same return type and you want them to have different behaviors. The `ON_CALL()` macro allows you to customize your mock's behavior at the method level: ```cpp using ::testing::_; using ::testing::AnyNumber; using ::testing::Gt; using ::testing::Return; ... ONCALL(foo, Sign()) .WillByDefault(Return(-1)); ON_CALL(foo, Sign(0)) .WillByDefault(Return(0)); ON_CALL(foo, Sign(Gt(0))) .WillByDefault(Return(1)); EXPECTCALL(foo, Sign()) .Times(AnyNumber()); foo.Sign(5); // This should return 1. foo.Sign(-9); // This should return -1. foo.Sign(0); // This should return 0. ``` As you may have guessed, when there are more than one `ON_CALL()` statements, the news order take precedence over the older ones. In other words, the last one that matches the function arguments will be used. This matching order allows you to set up the common behavior in a mock object's constructor or the test fixture's set-up phase and specialize the mock's behavior later. If the built-in actions don't suit you, you can easily use an existing function, method, or functor as an action: ```cpp using ::testing::_; using ::testing::Invoke; class MockFoo : public Foo { public: MOCK_METHOD2(Sum, int(int x, int y)); MOCK_METHOD1(ComplexJob, bool(int x)); }; int CalculateSum(int x, int y) { return x + y; } class Helper { public: bool ComplexJob(int x); }; ... MockFoo foo; Helper helper; EXPECTCALL(foo, Sum(, _)) .WillOnce(Invoke(CalculateSum)); EXPECTCALL(foo, ComplexJob()) .WillOnce(Invoke(&helper, &Helper::ComplexJob)); foo.Sum(5, 6); // Invokes CalculateSum(5, 6). 
foo.ComplexJob(10); // Invokes helper.ComplexJob(10); ``` The only requirement is that the type of the function, etc. must be compatible with the signature of the mock function, meaning that the latter's arguments can be implicitly converted to the corresponding arguments of the former, and the former's return type can be implicitly converted to that of the latter. So, you can invoke something whose type is not exactly the same as the mock function, as long as it's safe to do so - nice, huh? `Invoke()` is very useful for doing actions that are more complex.
},
{
"data": "It passes the mock function's arguments to the function or functor being invoked such that the callee has the full context of the call to work with. If the invoked function is not interested in some or all of the arguments, it can simply ignore them. Yet, a common pattern is that a test author wants to invoke a function without the arguments of the mock function. `Invoke()` allows her to do that using a wrapper function that throws away the arguments before invoking an underlining nullary function. Needless to say, this can be tedious and obscures the intent of the test. `InvokeWithoutArgs()` solves this problem. It's like `Invoke()` except that it doesn't pass the mock function's arguments to the callee. Here's an example: ```cpp using ::testing::_; using ::testing::InvokeWithoutArgs; class MockFoo : public Foo { public: MOCK_METHOD1(ComplexJob, bool(int n)); }; bool Job1() { ... } ... MockFoo foo; EXPECTCALL(foo, ComplexJob()) .WillOnce(InvokeWithoutArgs(Job1)); foo.ComplexJob(10); // Invokes Job1(). ``` Sometimes a mock function will receive a function pointer or a functor (in other words, a \"callable\") as an argument, e.g. ```cpp class MockFoo : public Foo { public: MOCK_METHOD2(DoThis, bool(int n, bool (*fp)(int))); }; ``` and you may want to invoke this callable argument: ```cpp using ::testing::_; ... MockFoo foo; EXPECTCALL(foo, DoThis(, _)) .WillOnce(...); // Will execute (*fp)(5), where fp is the // second argument DoThis() receives. ``` Arghh, you need to refer to a mock function argument but your version of C++ has no lambdas, so you have to define your own action. :-( Or do you really? Well, Google Mock has an action to solve exactly this problem: ```cpp InvokeArgument<N>(arg1, arg2, ..., arg_m) ``` will invoke the `N`-th (0-based) argument the mock function receives, with `arg1`, `arg2`, ..., and `arg_m`. No matter if the argument is a function pointer or a functor, Google Mock handles them both. With that, you could write: ```cpp using ::testing::_; using ::testing::InvokeArgument; ... EXPECTCALL(foo, DoThis(, _)) .WillOnce(InvokeArgument<1>(5)); // Will execute (*fp)(5), where fp is the // second argument DoThis() receives. ``` What if the callable takes an argument by reference? No problem - just wrap it inside `ByRef()`: ```cpp ... MOCK_METHOD1(Bar, bool(bool (*fp)(int, const Helper&))); ... using ::testing::_; using ::testing::ByRef; using ::testing::InvokeArgument; ... MockFoo foo; Helper helper; ... EXPECTCALL(foo, Bar()) .WillOnce(InvokeArgument<0>(5, ByRef(helper))); // ByRef(helper) guarantees that a reference to helper, not a copy of it, // will be passed to the callable. ``` What if the callable takes an argument by reference and we do not wrap the argument in `ByRef()`? Then `InvokeArgument()` will _make a copy of the argument, and pass a reference to the copy_, instead of a reference to the original value, to the callable. This is especially handy when the argument is a temporary value: ```cpp ... MOCK_METHOD1(DoThat, bool(bool (*f)(const double& x, const string& s))); ... using ::testing::_; using ::testing::InvokeArgument; ... MockFoo foo; ... EXPECTCALL(foo, DoThat()) .WillOnce(InvokeArgument<0>(5.0, string(\"Hi\"))); // Will execute (*f)(5.0, string(\"Hi\")), where f is the function pointer // DoThat() receives. Note that the values 5.0 and string(\"Hi\") are // temporary and dead once the EXPECT_CALL() statement finishes. Yet // it's fine to perform this action later, since a copy of the values // are kept inside the InvokeArgument action. 
``` Sometimes you have an action that returns something, but you need an action that returns `void` (perhaps you want to use it in a mock function that returns `void`, or perhaps it needs to be used in `DoAll()` and it's not the last in the list). `IgnoreResult()` lets you do that.
},
{
"data": "For example: ```cpp using ::testing::_; using ::testing::Invoke; using ::testing::Return; int Process(const MyData& data); string DoSomething(); class MockFoo : public Foo { public: MOCK_METHOD1(Abc, void(const MyData& data)); MOCK_METHOD0(Xyz, bool()); }; ... MockFoo foo; EXPECTCALL(foo, Abc()) // .WillOnce(Invoke(Process)); // The above line won't compile as Process() returns int but Abc() needs // to return void. .WillOnce(IgnoreResult(Invoke(Process))); EXPECT_CALL(foo, Xyz()) .WillOnce(DoAll(IgnoreResult(Invoke(DoSomething)), // Ignores the string DoSomething() returns. Return(true))); ``` Note that you cannot use `IgnoreResult()` on an action that already returns `void`. Doing so will lead to ugly compiler errors. Say you have a mock function `Foo()` that takes seven arguments, and you have a custom action that you want to invoke when `Foo()` is called. Trouble is, the custom action only wants three arguments: ```cpp using ::testing::_; using ::testing::Invoke; ... MOCK_METHOD7(Foo, bool(bool visible, const string& name, int x, int y, const map<pair<int, int>, double>& weight, double minweight, double maxwight)); ... bool IsVisibleInQuadrant1(bool visible, int x, int y) { return visible && x >= 0 && y >= 0; } ... EXPECTCALL(mock, Foo(, , , , , , )) .WillOnce(Invoke(IsVisibleInQuadrant1)); // Uh, won't compile. :-( ``` To please the compiler God, you can to define an \"adaptor\" that has the same signature as `Foo()` and calls the custom action with the right arguments: ```cpp using ::testing::_; using ::testing::Invoke; bool MyIsVisibleInQuadrant1(bool visible, const string& name, int x, int y, const map<pair<int, int>, double>& weight, double minweight, double maxwight) { return IsVisibleInQuadrant1(visible, x, y); } ... EXPECTCALL(mock, Foo(, , , , , , )) .WillOnce(Invoke(MyIsVisibleInQuadrant1)); // Now it works. ``` But isn't this awkward? Google Mock provides a generic action adaptor, so you can spend your time minding more important business than writing your own adaptors. Here's the syntax: ```cpp WithArgs<N1, N2, ..., Nk>(action) ``` creates an action that passes the arguments of the mock function at the given indices (0-based) to the inner `action` and performs it. Using `WithArgs`, our original example can be written as: ```cpp using ::testing::_; using ::testing::Invoke; using ::testing::WithArgs; ... EXPECTCALL(mock, Foo(, , , , , , )) .WillOnce(WithArgs<0, 2, 3>(Invoke(IsVisibleInQuadrant1))); // No need to define your own adaptor. ``` For better readability, Google Mock also gives you: `WithoutArgs(action)` when the inner `action` takes no argument, and `WithArg<N>(action)` (no `s` after `Arg`) when the inner `action` takes one argument. As you may have realized, `InvokeWithoutArgs(...)` is just syntactic sugar for `WithoutArgs(Invoke(...))`. Here are more tips: The inner action used in `WithArgs` and friends does not have to be `Invoke()` -- it can be anything. You can repeat an argument in the argument list if necessary, e.g. `WithArgs<2, 3, 3, 5>(...)`. You can change the order of the arguments, e.g. `WithArgs<3, 2, 1>(...)`. The types of the selected arguments do not have to match the signature of the inner action exactly. It works as long as they can be implicitly converted to the corresponding arguments of the inner action. For example, if the 4-th argument of the mock function is an `int` and `myaction` takes a `double`, `WithArg<4>(myaction)` will work. 
The selecting-an-action's-arguments recipe showed us one way to make a mock function and an action with incompatible argument lists fit together. The downside is that wrapping the action in `WithArgs<...>()` can get tedious for people writing the tests. If you are defining a function, method, or functor to be used with `Invoke*()`, and you are not interested in some of its arguments, an alternative to `WithArgs` is to declare the uninteresting arguments as `Unused`. This makes the definition less cluttered and less fragile in case the types of the uninteresting arguments change. It could also increase the chance the action function can be reused.
},
{
"data": "For example, given ```cpp MOCK_METHOD3(Foo, double(const string& label, double x, double y)); MOCK_METHOD3(Bar, double(int index, double x, double y)); ``` instead of ```cpp using ::testing::_; using ::testing::Invoke; double DistanceToOriginWithLabel(const string& label, double x, double y) { return sqrt(xx + yy); } double DistanceToOriginWithIndex(int index, double x, double y) { return sqrt(xx + yy); } ... EXEPCTCALL(mock, Foo(\"abc\", , _)) .WillOnce(Invoke(DistanceToOriginWithLabel)); EXEPCTCALL(mock, Bar(5, , _)) .WillOnce(Invoke(DistanceToOriginWithIndex)); ``` you could write ```cpp using ::testing::_; using ::testing::Invoke; using ::testing::Unused; double DistanceToOrigin(Unused, double x, double y) { return sqrt(xx + yy); } ... EXEPCTCALL(mock, Foo(\"abc\", , _)) .WillOnce(Invoke(DistanceToOrigin)); EXEPCTCALL(mock, Bar(5, , _)) .WillOnce(Invoke(DistanceToOrigin)); ``` Just like matchers, a Google Mock action object consists of a pointer to a ref-counted implementation object. Therefore copying actions is also allowed and very efficient. When the last action that references the implementation object dies, the implementation object will be deleted. If you have some complex action that you want to use again and again, you may not have to build it from scratch every time. If the action doesn't have an internal state (i.e. if it always does the same thing no matter how many times it has been called), you can assign it to an action variable and use that variable repeatedly. For example: ```cpp Action<bool(int*)> set_flag = DoAll(SetArgPointee<0>(5), Return(true)); ... use set_flag in .WillOnce() and .WillRepeatedly() ... ``` However, if the action has its own state, you may be surprised if you share the action object. Suppose you have an action factory `IncrementCounter(init)` which creates an action that increments and returns a counter whose initial value is `init`, using two actions created from the same expression and using a shared action will exihibit different behaviors. Example: ```cpp EXPECT_CALL(foo, DoThis()) .WillRepeatedly(IncrementCounter(0)); EXPECT_CALL(foo, DoThat()) .WillRepeatedly(IncrementCounter(0)); foo.DoThis(); // Returns 1. foo.DoThis(); // Returns 2. foo.DoThat(); // Returns 1 - Blah() uses a different // counter than Bar()'s. ``` versus ```cpp Action<int()> increment = IncrementCounter(0); EXPECT_CALL(foo, DoThis()) .WillRepeatedly(increment); EXPECT_CALL(foo, DoThat()) .WillRepeatedly(increment); foo.DoThis(); // Returns 1. foo.DoThis(); // Returns 2. foo.DoThat(); // Returns 3 - the counter is shared. ``` C++11 introduced move-only types. A move-only-typed value can be moved from one object to another, but cannot be copied. `std::unique_ptr<T>` is probably the most commonly used move-only type. Mocking a method that takes and/or returns move-only types presents some challenges, but nothing insurmountable. This recipe shows you how you can do it. Note that the support for move-only method arguments was only introduced to gMock in April 2017; in older code, you may find more complex for lack of this feature. Lets say we are working on a fictional project that lets one post and share snippets called buzzes. Your code uses these types: ```cpp enum class AccessLevel { kInternal, kPublic }; class Buzz { public: explicit Buzz(AccessLevel access) { ... } ... }; class Buzzer { public: virtual ~Buzzer() {} virtual std::unique_ptr<Buzz> MakeBuzz(StringPiece text) = 0; virtual bool ShareBuzz(std::uniqueptr<Buzz> buzz, int64t timestamp) = 0; ... 
}; ``` A `Buzz` object represents a snippet being posted. A class that implements the `Buzzer` interface is capable of creating and sharing `Buzz`es. Methods in `Buzzer` may return a `unique_ptr<Buzz>` or take a `unique_ptr<Buzz>`. Now we need to mock `Buzzer` in our tests. To mock a method that accepts or returns move-only types, you just use the familiar `MOCK_METHOD` syntax as usual: ```cpp class MockBuzzer : public Buzzer { public: MOCK_METHOD1(MakeBuzz, std::unique_ptr<Buzz>(StringPiece text)); MOCK_METHOD2(ShareBuzz, bool(std::unique_ptr<Buzz> buzz, int64_t timestamp)); }; ``` Now that we have the mock class defined, we can use it in tests.
},
{
"data": "In the following code examples, we assume that we have defined a `MockBuzzer` object named `mockbuzzer`: ```cpp MockBuzzer mockbuzzer; ``` First lets see how we can set expectations on the `MakeBuzz()` method, which returns a `unique_ptr<Buzz>`. As usual, if you set an expectation without an action (i.e. the `.WillOnce()` or `.WillRepeated()` clause), when that expectation fires, the default action for that method will be taken. Since `unique_ptr<>` has a default constructor that returns a null `unique_ptr`, thats what youll get if you dont specify an action: ```cpp // Use the default action. EXPECTCALL(mockbuzzer_, MakeBuzz(\"hello\")); // Triggers the previous EXPECT_CALL. EXPECTEQ(nullptr, mockbuzzer_.MakeBuzz(\"hello\")); ``` If you are not happy with the default action, you can tweak it as usual; see . If you just need to return a pre-defined move-only value, you can use the `Return(ByMove(...))` action: ```cpp // When this fires, the unique_ptr<> specified by ByMove(...) will // be returned. EXPECTCALL(mockbuzzer_, MakeBuzz(\"world\")) .WillOnce(Return(ByMove(MakeUnique<Buzz>(AccessLevel::kInternal)))); EXPECTNE(nullptr, mockbuzzer_.MakeBuzz(\"world\")); ``` Note that `ByMove()` is essential here - if you drop it, the code wont compile. Quiz time! What do you think will happen if a `Return(ByMove(...))` action is performed more than once (e.g. you write `.WillRepeatedly(Return(ByMove(...)));`)? Come think of it, after the first time the action runs, the source value will be consumed (since its a move-only value), so the next time around, theres no value to move from -- youll get a run-time error that `Return(ByMove(...))` can only be run once. If you need your mock method to do more than just moving a pre-defined value, remember that you can always use a lambda or a callable object, which can do pretty much anything you want: ```cpp EXPECTCALL(mockbuzzer_, MakeBuzz(\"x\")) .WillRepeatedly( { return MakeUnique<Buzz>(AccessLevel::kInternal); }); EXPECTNE(nullptr, mockbuzzer_.MakeBuzz(\"x\")); EXPECTNE(nullptr, mockbuzzer_.MakeBuzz(\"x\")); ``` Every time this `EXPECTCALL` fires, a new `uniqueptr<Buzz>` will be created and returned. You cannot do this with `Return(ByMove(...))`. That covers returning move-only values; but how do we work with methods accepting move-only arguments? The answer is that they work normally, although some actions will not compile when any of method's arguments are move-only. You can always use `Return`, or a : ```cpp using ::testing::Unused; EXPECTCALL(mockbuzzer, ShareBuzz(NotNull(), )) .WillOnce(Return(true)); EXPECTTRUE(mockbuzzer_.ShareBuzz(MakeUnique<Buzz>(AccessLevel::kInternal)), 0); EXPECTCALL(mockbuzzer, ShareBuzz(, _)) .WillOnce( { return buzz != nullptr; }); EXPECTFALSE(mockbuzzer_.ShareBuzz(nullptr, 0)); ``` Many built-in actions (`WithArgs`, `WithoutArgs`,`DeleteArg`, `SaveArg`, ...) could in principle support move-only arguments, but the support for this is not implemented yet. If this is blocking you, please file a bug. A few actions (e.g. `DoAll`) copy their arguments internally, so they can never work with non-copyable objects; you'll have to use functors instead. 
Support for move-only function arguments was only introduced to gMock in April 2017. In older code, you may encounter the following workaround for the lack of this feature (it is no longer necessary - we're including it just for reference): ```cpp class MockBuzzer : public Buzzer { public: MOCK_METHOD2(DoShareBuzz, bool(Buzz* buzz, Time timestamp)); bool ShareBuzz(std::unique_ptr<Buzz> buzz, Time timestamp) override { return DoShareBuzz(buzz.get(), timestamp); } }; ``` The trick is to delegate the `ShareBuzz()` method to a mock method (let's call it `DoShareBuzz()`) that does not take move-only parameters. Then, instead of setting expectations on `ShareBuzz()`, you set them on the `DoShareBuzz()` mock method: ```cpp MockBuzzer mock_buzzer_; EXPECT_CALL(mock_buzzer_, DoShareBuzz(NotNull(), _)); // When one calls ShareBuzz() on the MockBuzzer like this, the call is // forwarded to DoShareBuzz(), which is mocked. Therefore this statement // will trigger the above EXPECT_CALL. mock_buzzer_.ShareBuzz(MakeUnique<Buzz>(AccessLevel::kInternal), 0); ``` Believe it or not, the vast majority of the time spent on compiling a mock class is in generating its constructor and destructor, as they perform non-trivial tasks (e.g.
},
{
"data": "verification of the expectations). What's more, mock methods with different signatures have different types and thus their constructors/destructors need to be generated by the compiler separately. As a result, if you mock many different types of methods, compiling your mock class can get really slow. If you are experiencing slow compilation, you can move the definition of your mock class' constructor and destructor out of the class body and into a `.cpp` file. This way, even if you `#include` your mock class in N files, the compiler only needs to generate its constructor and destructor once, resulting in a much faster compilation. Let's illustrate the idea using an example. Here's the definition of a mock class before applying this recipe: ```cpp // File mock_foo.h. ... class MockFoo : public Foo { public: // Since we don't declare the constructor or the destructor, // the compiler will generate them in every translation unit // where this mock class is used. MOCK_METHOD0(DoThis, int()); MOCK_METHOD1(DoThat, bool(const char* str)); ... more mock methods ... }; ``` After the change, it would look like: ```cpp // File mock_foo.h. ... class MockFoo : public Foo { public: // The constructor and destructor are declared, but not defined, here. MockFoo(); virtual ~MockFoo(); MOCK_METHOD0(DoThis, int()); MOCK_METHOD1(DoThat, bool(const char* str)); ... more mock methods ... }; ``` and ```cpp // File mock_foo.cpp. // The definitions may appear trivial, but the functions actually do a // lot of things through the constructors/destructors of the member // variables used to implement the mock methods. MockFoo::MockFoo() {} MockFoo::~MockFoo() {} ``` When it's being destroyed, your friendly mock object will automatically verify that all expectations on it have been satisfied, and will generate failures if not. This is convenient as it leaves you with one less thing to worry about. That is, unless you are not sure if your mock object will be destroyed. How could it be that your mock object won't eventually be destroyed? Well, it might be created on the heap and owned by the code you are testing. Suppose there's a bug in that code and it doesn't delete the mock object properly - you could end up with a passing test when there's actually a bug. Using a heap checker is a good idea and can alleviate the concern, but its implementation may not be 100% reliable. So, sometimes you do want to force Google Mock to verify a mock object before it is (hopefully) destructed. You can do this with `Mock::VerifyAndClearExpectations(&mock_object)`: ```cpp TEST(MyServerTest, ProcessesRequest) { using ::testing::Mock; MockFoo* const foo = new MockFoo; EXPECT_CALL(*foo, ...)...; // ... other expectations ... // server now owns foo. MyServer server(foo); server.ProcessRequest(...); // In case that server's destructor will forget to delete foo, // this will verify the expectations anyway. Mock::VerifyAndClearExpectations(foo); } // server is destroyed when it goes out of scope here. ``` Tip: The `Mock::VerifyAndClearExpectations()` function returns a `bool` to indicate whether the verification was successful (`true` for yes), so you can wrap that function call inside a `ASSERT_TRUE()` if there is no point going further when the verification has failed. Sometimes you may want to \"reset\" a mock object at various check points in your test: at each check point, you verify that all existing expectations on the mock object have been satisfied, and then you set some new expectations on it as if it's newly created. 
This allows you to work with a mock object in \"phases\" whose sizes are each manageable.
},
{
"data": "One such scenario is that in your test's `SetUp()` function, you may want to put the object you are testing into a certain state, with the help from a mock object. Once in the desired state, you want to clear all expectations on the mock, such that in the `TEST_F` body you can set fresh expectations on it. As you may have figured out, the `Mock::VerifyAndClearExpectations()` function we saw in the previous recipe can help you here. Or, if you are using `ON_CALL()` to set default actions on the mock object and want to clear the default actions as well, use `Mock::VerifyAndClear(&mock_object)` instead. This function does what `Mock::VerifyAndClearExpectations(&mock_object)` does and returns the same `bool`, plus it clears the `ON_CALL()` statements on `mock_object` too. Another trick you can use to achieve the same effect is to put the expectations in sequences and insert calls to a dummy \"check-point\" function at specific places. Then you can verify that the mock function calls do happen at the right time. For example, if you are exercising code: ```cpp Foo(1); Foo(2); Foo(3); ``` and want to verify that `Foo(1)` and `Foo(3)` both invoke `mock.Bar(\"a\")`, but `Foo(2)` doesn't invoke anything. You can write: ```cpp using ::testing::MockFunction; TEST(FooTest, InvokesBarCorrectly) { MyMock mock; // Class MockFunction<F> has exactly one mock method. It is named // Call() and has type F. MockFunction<void(string checkpointname)> check; { InSequence s; EXPECT_CALL(mock, Bar(\"a\")); EXPECT_CALL(check, Call(\"1\")); EXPECT_CALL(check, Call(\"2\")); EXPECT_CALL(mock, Bar(\"a\")); } Foo(1); check.Call(\"1\"); Foo(2); check.Call(\"2\"); Foo(3); } ``` The expectation spec says that the first `Bar(\"a\")` must happen before check point \"1\", the second `Bar(\"a\")` must happen after check point \"2\", and nothing should happen between the two check points. The explicit check points make it easy to tell which `Bar(\"a\")` is called by which call to `Foo()`. Sometimes you want to make sure a mock object is destructed at the right time, e.g. after `bar->A()` is called but before `bar->B()` is called. We already know that you can specify constraints on the order of mock function calls, so all we need to do is to mock the destructor of the mock function. This sounds simple, except for one problem: a destructor is a special function with special syntax and special semantics, and the `MOCK_METHOD0` macro doesn't work for it: ```cpp MOCK_METHOD0(~MockFoo, void()); // Won't compile! ``` The good news is that you can use a simple pattern to achieve the same effect. First, add a mock function `Die()` to your mock class and call it in the destructor, like this: ```cpp class MockFoo : public Foo { ... // Add the following two lines to the mock class. MOCK_METHOD0(Die, void()); virtual ~MockFoo() { Die(); } }; ``` (If the name `Die()` clashes with an existing symbol, choose another name.) Now, we have translated the problem of testing when a `MockFoo` object dies to testing when its `Die()` method is called: ```cpp MockFoo* foo = new MockFoo; MockBar* bar = new MockBar; ... { InSequence s; // Expects *foo to die after bar->A() and before bar->B(). EXPECT_CALL(*bar, A()); EXPECT_CALL(*foo, Die()); EXPECT_CALL(*bar, B()); } ``` And that's that. IMPORTANT NOTE: What we describe in this recipe is ONLY true on platforms where Google Mock is thread-safe. Currently these are only platforms that support the pthreads library (this includes Linux and Mac). 
To make it thread-safe on other platforms we only need to implement some synchronization operations in `\"gtest/internal/gtest-port.h\"`. In a unit test, it's best if you could isolate and test a piece of code in a single-threaded context. That avoids race conditions and deadlocks, and makes debugging your test much easier.
},
{
"data": "Yet many programs are multi-threaded, and sometimes to test something we need to pound on it from more than one thread. Google Mock works for this purpose too. Remember the steps for using a mock: Create a mock object `foo`. Set its default actions and expectations using `ONCALL()` and `EXPECTCALL()`. The code under test calls methods of `foo`. Optionally, verify and reset the mock. Destroy the mock yourself, or let the code under test destroy it. The destructor will automatically verify it. If you follow the following simple rules, your mocks and threads can live happily together: Execute your test code (as opposed to the code being tested) in one thread. This makes your test easy to follow. Obviously, you can do step #1 without locking. When doing step #2 and #5, make sure no other thread is accessing `foo`. Obvious too, huh? #3 and #4 can be done either in one thread or in multiple threads - anyway you want. Google Mock takes care of the locking, so you don't have to do any - unless required by your test logic. If you violate the rules (for example, if you set expectations on a mock while another thread is calling its methods), you get undefined behavior. That's not fun, so don't do it. Google Mock guarantees that the action for a mock function is done in the same thread that called the mock function. For example, in ```cpp EXPECT_CALL(mock, Foo(1)) .WillOnce(action1); EXPECT_CALL(mock, Foo(2)) .WillOnce(action2); ``` if `Foo(1)` is called in thread 1 and `Foo(2)` is called in thread 2, Google Mock will execute `action1` in thread 1 and `action2` in thread Google Mock does not impose a sequence on actions performed in different threads (doing so may create deadlocks as the actions may need to cooperate). This means that the execution of `action1` and `action2` in the above example may interleave. If this is a problem, you should add proper synchronization logic to `action1` and `action2` to make the test thread-safe. Also, remember that `DefaultValue<T>` is a global resource that potentially affects all living mock objects in your program. Naturally, you won't want to mess with it from multiple threads or when there still are mocks in action. When Google Mock sees something that has the potential of being an error (e.g. a mock function with no expectation is called, a.k.a. an uninteresting call, which is allowed but perhaps you forgot to explicitly ban the call), it prints some warning messages, including the arguments of the function and the return value. Hopefully this will remind you to take a look and see if there is indeed a problem. Sometimes you are confident that your tests are correct and may not appreciate such friendly messages. Some other times, you are debugging your tests or learning about the behavior of the code you are testing, and wish you could observe every mock call that happens (including argument values and the return value). Clearly, one size doesn't fit all. You can control how much Google Mock tells you using the `--gmock_verbose=LEVEL` command-line flag, where `LEVEL` is a string with three possible values: `info`: Google Mock will print all informational messages, warnings, and errors (most verbose). At this setting, Google Mock will also log any calls to the `ONCALL/EXPECTCALL` macros. `warning`: Google Mock will print both warnings and errors (less verbose). This is the default. `error`: Google Mock will print errors only (least"
},
{
"data": "Alternatively, you can adjust the value of that flag from within your tests like so: ```cpp ::testing::FLAGSgmockverbose = \"error\"; ``` Now, judiciously use the right flag to enable Google Mock serve you better! You have a test using Google Mock. It fails: Google Mock tells you that some expectations aren't satisfied. However, you aren't sure why: Is there a typo somewhere in the matchers? Did you mess up the order of the `EXPECT_CALL`s? Or is the code under test doing something wrong? How can you find out the cause? Won't it be nice if you have X-ray vision and can actually see the trace of all `EXPECT_CALL`s and mock method calls as they are made? For each call, would you like to see its actual argument values and which `EXPECT_CALL` Google Mock thinks it matches? You can unlock this power by running your test with the `--gmock_verbose=info` flag. For example, given the test program: ```cpp using testing::_; using testing::HasSubstr; using testing::Return; class MockFoo { public: MOCK_METHOD2(F, void(const string& x, const string& y)); }; TEST(Foo, Bar) { MockFoo mock; EXPECTCALL(mock, F(, _)).WillRepeatedly(Return()); EXPECT_CALL(mock, F(\"a\", \"b\")); EXPECT_CALL(mock, F(\"c\", HasSubstr(\"d\"))); mock.F(\"a\", \"good\"); mock.F(\"a\", \"b\"); } ``` if you run it with `--gmock_verbose=info`, you will see this output: ``` [ RUN ] Foo.Bar footest.cc:14: EXPECTCALL(mock, F(, )) invoked footest.cc:15: EXPECTCALL(mock, F(\"a\", \"b\")) invoked footest.cc:16: EXPECTCALL(mock, F(\"c\", HasSubstr(\"d\"))) invoked footest.cc:14: Mock function call matches EXPECTCALL(mock, F(, ))... Function call: F(@0x7fff7c8dad40\"a\", @0x7fff7c8dad10\"good\") footest.cc:15: Mock function call matches EXPECTCALL(mock, F(\"a\", \"b\"))... Function call: F(@0x7fff7c8dada0\"a\", @0x7fff7c8dad70\"b\") foo_test.cc:16: Failure Actual function call count doesn't match EXPECT_CALL(mock, F(\"c\", HasSubstr(\"d\")))... Expected: to be called once Actual: never called - unsatisfied and active [ FAILED ] Foo.Bar ``` Suppose the bug is that the `\"c\"` in the third `EXPECT_CALL` is a typo and should actually be `\"a\"`. With the above message, you should see that the actual `F(\"a\", \"good\")` call is matched by the first `EXPECT_CALL`, not the third as you thought. From that it should be obvious that the third `EXPECT_CALL` is written wrong. Case solved. If you build and run your tests in Emacs, the source file locations of Google Mock and errors will be highlighted. Just press `<Enter>` on one of them and you'll be taken to the offending line. Or, you can just type `C-x `` to jump to the next error. To make it even easier, you can add the following lines to your `~/.emacs` file: ``` (global-set-key \"\\M-m\" 'compile) ; m is for make (global-set-key [M-down] 'next-error) (global-set-key [M-up] '(lambda () (interactive) (next-error -1))) ``` Then you can type `M-m` to start a build, or `M-up`/`M-down` to move back and forth between errors. Google Mock's implementation consists of dozens of files (excluding its own tests). Sometimes you may want them to be packaged up in fewer files instead, such that you can easily copy them to a new machine and start hacking there. For this we provide an experimental Python script `fusegmockfiles.py` in the `scripts/` directory (starting with release 1.2.0). 
Assuming you have Python 2.4 or above installed on your machine, just go to that directory and run ``` python fuse_gmock_files.py OUTPUT_DIR ``` and you should see an `OUTPUT_DIR` directory being created with files `gtest/gtest.h`, `gmock/gmock.h`, and `gmock-gtest-all.cc` in it. These three files contain everything you need to use Google Mock (and Google Test). Just copy them to anywhere you want and you are ready to write tests and use mocks. You can use the file as an example on how to compile your tests against them. The `MATCHER*` family of macros can be used to define custom matchers"
},
{
"data": "The syntax: ```cpp MATCHER(name, descriptionstringexpression) { statements; } ``` will define a matcher with the given name that executes the statements, which must return a `bool` to indicate if the match succeeds. Inside the statements, you can refer to the value being matched by `arg`, and refer to its type by `arg_type`. The description string is a `string`-typed expression that documents what the matcher does, and is used to generate the failure message when the match fails. It can (and should) reference the special `bool` variable `negation`, and should evaluate to the description of the matcher when `negation` is `false`, or that of the matcher's negation when `negation` is `true`. For convenience, we allow the description string to be empty (`\"\"`), in which case Google Mock will use the sequence of words in the matcher name as the description. For example: ```cpp MATCHER(IsDivisibleBy7, \"\") { return (arg % 7) == 0; } ``` allows you to write ```cpp // Expects mock_foo.Bar(n) to be called where n is divisible by 7. EXPECTCALL(mockfoo, Bar(IsDivisibleBy7())); ``` or, ```cpp using ::testing::Not; ... EXPECTTHAT(someexpression, IsDivisibleBy7()); EXPECTTHAT(someother_expression, Not(IsDivisibleBy7())); ``` If the above assertions fail, they will print something like: ``` Value of: some_expression Expected: is divisible by 7 Actual: 27 ... Value of: someotherexpression Expected: not (is divisible by 7) Actual: 21 ``` where the descriptions `\"is divisible by 7\"` and `\"not (is divisible by 7)\"` are automatically calculated from the matcher name `IsDivisibleBy7`. As you may have noticed, the auto-generated descriptions (especially those for the negation) may not be so great. You can always override them with a string expression of your own: ```cpp MATCHER(IsDivisibleBy7, std::string(negation ? \"isn't\" : \"is\") + \" divisible by 7\") { return (arg % 7) == 0; } ``` Optionally, you can stream additional information to a hidden argument named `result_listener` to explain the match result. For example, a better definition of `IsDivisibleBy7` is: ```cpp MATCHER(IsDivisibleBy7, \"\") { if ((arg % 7) == 0) return true; *result_listener << \"the remainder is \" << (arg % 7); return false; } ``` With this definition, the above assertion will give a better message: ``` Value of: some_expression Expected: is divisible by 7 Actual: 27 (the remainder is 6) ``` You should let `MatchAndExplain()` print any additional information that can help a user understand the match result. Note that it should explain why the match succeeds in case of a success (unless it's obvious) - this is useful when the matcher is used inside `Not()`. There is no need to print the argument value itself, as Google Mock already prints it for you. Notes: The type of the value being matched (`argtype`) is determined by the context in which you use the matcher and is supplied to you by the compiler, so you don't need to worry about declaring it (nor can you). This allows the matcher to be polymorphic. For example, `IsDivisibleBy7()` can be used to match any type where the value of `(arg % 7) == 0` can be implicitly converted to a `bool`. In the `Bar(IsDivisibleBy7())` example above, if method `Bar()` takes an `int`, `argtype` will be `int`; if it takes an `unsigned long`, `arg_type` will be `unsigned long`; and so on. Google Mock doesn't guarantee when or how many times a matcher will be invoked. Therefore the matcher logic must be purely functional (i.e. 
it cannot have any side effect, and the result must not depend on anything other than the value being matched and the matcher parameters). This requirement must be satisfied no matter how you define the matcher (e.g."
},
{
"data": "using one of the methods described in the following recipes). In particular, a matcher can never call a mock function, as that will affect the state of the mock object and Google Mock. Sometimes you'll want to define a matcher that has parameters. For that you can use the macro: ```cpp MATCHERP(name, paramname, description_string) { statements; } ``` where the description string can be either `\"\"` or a string expression that references `negation` and `param_name`. For example: ```cpp MATCHER_P(HasAbsoluteValue, value, \"\") { return abs(arg) == value; } ``` will allow you to write: ```cpp EXPECT_THAT(Blah(\"a\"), HasAbsoluteValue(n)); ``` which may lead to this message (assuming `n` is 10): ``` Value of: Blah(\"a\") Expected: has absolute value 10 Actual: -9 ``` Note that both the matcher description and its parameter are printed, making the message human-friendly. In the matcher definition body, you can write `foo_type` to reference the type of a parameter named `foo`. For example, in the body of `MATCHER_P(HasAbsoluteValue, value)` above, you can write `value_type` to refer to the type of `value`. Google Mock also provides `MATCHERP2`, `MATCHERP3`, ..., up to `MATCHER_P10` to support multi-parameter matchers: ```cpp MATCHERPk(name, param1, ..., paramk, descriptionstring) { statements; } ``` Please note that the custom description string is for a particular instance of the matcher, where the parameters have been bound to actual values. Therefore usually you'll want the parameter values to be part of the description. Google Mock lets you do that by referencing the matcher parameters in the description string expression. For example, ```cpp using ::testing::PrintToString; MATCHER_P2(InClosedRange, low, hi, std::string(negation ? \"isn't\" : \"is\") + \" in range [\" + PrintToString(low) + \", \" + PrintToString(hi) + \"]\") { return low <= arg && arg <= hi; } ... EXPECT_THAT(3, InClosedRange(4, 6)); ``` would generate a failure that contains the message: ``` Expected: is in range [4, 6] ``` If you specify `\"\"` as the description, the failure message will contain the sequence of words in the matcher name followed by the parameter values printed as a tuple. For example, ```cpp MATCHER_P2(InClosedRange, low, hi, \"\") { ... } ... EXPECT_THAT(3, InClosedRange(4, 6)); ``` would generate a failure that contains the text: ``` Expected: in closed range (4, 6) ``` For the purpose of typing, you can view ```cpp MATCHERPk(Foo, p1, ..., pk, descriptionstring) { ... } ``` as shorthand for ```cpp template <typename p1type, ..., typename pktype> FooMatcherPk<p1type, ..., pktype> Foo(p1type p1, ..., pktype pk) { ... } ``` When you write `Foo(v1, ..., vk)`, the compiler infers the types of the parameters `v1`, ..., and `vk` for you. If you are not happy with the result of the type inference, you can specify the types by explicitly instantiating the template, as in `Foo<long, bool>(5, false)`. As said earlier, you don't get to (or need to) specify `arg_type` as that's determined by the context in which the matcher is used. You can assign the result of expression `Foo(p1, ..., pk)` to a variable of type `FooMatcherPk<p1type, ..., pktype>`. This can be useful when composing matchers. Matchers that don't have a parameter or have only one parameter have special types: you can assign `Foo()` to a `FooMatcher`-typed variable, and assign `Foo(p)` to a `FooMatcherP<p_type>`-typed variable. 
While you can instantiate a matcher template with reference types, passing the parameters by pointer usually makes your code more readable. If, however, you still want to pass a parameter by reference, be aware that in the failure message generated by the matcher you will see the value of the referenced object but not its address."
},
{
"data": "You can overload matchers with different numbers of parameters: ```cpp MATCHERP(Blah, a, descriptionstring_1) { ... } MATCHERP2(Blah, a, b, descriptionstring_2) { ... } ``` While it's tempting to always use the `MATCHER*` macros when defining a new matcher, you should also consider implementing `MatcherInterface` or using `MakePolymorphicMatcher()` instead (see the recipes that follow), especially if you need to use the matcher a lot. While these approaches require more work, they give you more control on the types of the value being matched and the matcher parameters, which in general leads to better compiler error messages that pay off in the long run. They also allow overloading matchers based on parameter types (as opposed to just based on the number of parameters). A matcher of argument type `T` implements `::testing::MatcherInterface<T>` and does two things: it tests whether a value of type `T` matches the matcher, and can describe what kind of values it matches. The latter ability is used for generating readable error messages when expectations are violated. The interface looks like this: ```cpp class MatchResultListener { public: ... // Streams x to the underlying ostream; does nothing if the ostream // is NULL. template <typename T> MatchResultListener& operator<<(const T& x); // Returns the underlying ostream. ::std::ostream* stream(); }; template <typename T> class MatcherInterface { public: virtual ~MatcherInterface(); // Returns true iff the matcher matches x; also explains the match // result to 'listener'. virtual bool MatchAndExplain(T x, MatchResultListener* listener) const = 0; // Describes this matcher to an ostream. virtual void DescribeTo(::std::ostream* os) const = 0; // Describes the negation of this matcher to an ostream. virtual void DescribeNegationTo(::std::ostream* os) const; }; ``` If you need a custom matcher but `Truly()` is not a good option (for example, you may not be happy with the way `Truly(predicate)` describes itself, or you may want your matcher to be polymorphic as `Eq(value)` is), you can define a matcher to do whatever you want in two steps: first implement the matcher interface, and then define a factory function to create a matcher instance. The second step is not strictly needed but it makes the syntax of using the matcher nicer. For example, you can define a matcher to test whether an `int` is divisible by 7 and then use it like this: ```cpp using ::testing::MakeMatcher; using ::testing::Matcher; using ::testing::MatcherInterface; using ::testing::MatchResultListener; class DivisibleBy7Matcher : public MatcherInterface<int> { public: virtual bool MatchAndExplain(int n, MatchResultListener* listener) const { return (n % 7) == 0; } virtual void DescribeTo(::std::ostream* os) const { *os << \"is divisible by 7\"; } virtual void DescribeNegationTo(::std::ostream* os) const { *os << \"is not divisible by 7\"; } }; inline Matcher<int> DivisibleBy7() { return MakeMatcher(new DivisibleBy7Matcher); } ... EXPECT_CALL(foo, Bar(DivisibleBy7())); ``` You may improve the matcher message by streaming additional information to the `listener` argument in `MatchAndExplain()`: ```cpp class DivisibleBy7Matcher : public MatcherInterface<int> { public: virtual bool MatchAndExplain(int n, MatchResultListener* listener) const { const int remainder = n % 7; if (remainder != 0) { *listener << \"the remainder is \" << remainder; } return remainder == 0; } ... 
}; ``` Then, `EXPECT_THAT(x, DivisibleBy7());` may generate a message like this: ``` Value of: x Expected: is divisible by 7 Actual: 23 (the remainder is 2) ``` You've learned how to write your own matchers in the previous recipe. Just one problem: a matcher created using `MakeMatcher()` only works for one particular type of arguments."
},
{
"data": "If you want a polymorphic matcher that works with arguments of several types (for instance, `Eq(x)` can be used to match a `value` as long as `value` == `x` compiles -- `value` and `x` don't have to share the same type), you can learn the trick from `\"gmock/gmock-matchers.h\"` but it's a bit involved. Fortunately, most of the time you can define a polymorphic matcher easily with the help of `MakePolymorphicMatcher()`. Here's how you can define `NotNull()` as an example: ```cpp using ::testing::MakePolymorphicMatcher; using ::testing::MatchResultListener; using ::testing::NotNull; using ::testing::PolymorphicMatcher; class NotNullMatcher { public: // To implement a polymorphic matcher, first define a COPYABLE class // that has three members MatchAndExplain(), DescribeTo(), and // DescribeNegationTo(), like the following. // In this example, we want to use NotNull() with any pointer, so // MatchAndExplain() accepts a pointer of any type as its first argument. // In general, you can define MatchAndExplain() as an ordinary method or // a method template, or even overload it. template <typename T> bool MatchAndExplain(T* p, MatchResultListener / listener */) const { return p != NULL; } // Describes the property of a value matching this matcher. void DescribeTo(::std::ostream os) const { os << \"is not NULL\"; } // Describes the property of a value NOT matching this matcher. void DescribeNegationTo(::std::ostream os) const { os << \"is NULL\"; } }; // To construct a polymorphic matcher, pass an instance of the class // to MakePolymorphicMatcher(). Note the return type. inline PolymorphicMatcher<NotNullMatcher> NotNull() { return MakePolymorphicMatcher(NotNullMatcher()); } ... EXPECT_CALL(foo, Bar(NotNull())); // The argument must be a non-NULL pointer. ``` Note: Your polymorphic matcher class does not need to inherit from `MatcherInterface` or any other class, and its methods do not need to be virtual. Like in a monomorphic matcher, you may explain the match result by streaming additional information to the `listener` argument in `MatchAndExplain()`. A cardinality is used in `Times()` to tell Google Mock how many times you expect a call to occur. It doesn't have to be exact. For example, you can say `AtLeast(5)` or `Between(2, 4)`. If the built-in set of cardinalities doesn't suit you, you are free to define your own by implementing the following interface (in namespace `testing`): ```cpp class CardinalityInterface { public: virtual ~CardinalityInterface(); // Returns true iff call_count calls will satisfy this cardinality. virtual bool IsSatisfiedByCallCount(int call_count) const = 0; // Returns true iff call_count calls will saturate this cardinality. virtual bool IsSaturatedByCallCount(int call_count) const = 0; // Describes self to an ostream. virtual void DescribeTo(::std::ostream* os) const = 0; }; ``` For example, to specify that a call must occur even number of times, you can write ```cpp using ::testing::Cardinality; using ::testing::CardinalityInterface; using ::testing::MakeCardinality; class EvenNumberCardinality : public CardinalityInterface { public: virtual bool IsSatisfiedByCallCount(int call_count) const { return (call_count % 2) == 0; } virtual bool IsSaturatedByCallCount(int call_count) const { return false; } virtual void DescribeTo(::std::ostream* os) const { *os << \"called even number of times\"; } }; Cardinality EvenNumber() { return MakeCardinality(new EvenNumberCardinality); } ... 
EXPECT_CALL(foo, Bar(3)) .Times(EvenNumber()); ``` If the built-in actions don't work for you, and you find it inconvenient to use `Invoke()`, you can use a macro from the `ACTION*` family to quickly define a new action that can be used in your code as if it's a built-in action. By writing ```cpp ACTION(name) { statements; } ``` in a namespace scope (i.e. not inside a class or function), you will define an action with the given name that executes the statements. The value returned by `statements` will be used as the return value of the action. Inside the statements, you can refer to the K-th (0-based) argument of the mock function as `argK`."
},
{
"data": "For example: ```cpp ACTION(IncrementArg1) { return ++(*arg1); } ``` allows you to write ```cpp ... WillOnce(IncrementArg1()); ``` Note that you don't need to specify the types of the mock function arguments. Rest assured that your code is type-safe though: you'll get a compiler error if `*arg1` doesn't support the `++` operator, or if the type of `++(*arg1)` isn't compatible with the mock function's return type. Another example: ```cpp ACTION(Foo) { (*arg2)(5); Blah(); *arg1 = 0; return arg0; } ``` defines an action `Foo()` that invokes argument #2 (a function pointer) with 5, calls function `Blah()`, sets the value pointed to by argument For more convenience and flexibility, you can also use the following pre-defined symbols in the body of `ACTION`: | `argK_type` | The type of the K-th (0-based) argument of the mock function | |:-|:-| | `args` | All arguments of the mock function as a tuple | | `args_type` | The type of all arguments of the mock function as a tuple | | `return_type` | The return type of the mock function | | `function_type` | The type of the mock function | For example, when using an `ACTION` as a stub action for mock function: ```cpp int DoSomething(bool flag, int* ptr); ``` we have: | Pre-defined Symbol | Is Bound To | |:--|:-| | `arg0` | the value of `flag` | | `arg0_type` | the type `bool` | | `arg1` | the value of `ptr` | | `arg1_type` | the type `int*` | | `args` | the tuple `(flag, ptr)` | | `args_type` | the type `::testing::tuple<bool, int*>` | | `return_type` | the type `int` | | `function_type` | the type `int(bool, int*)` | Sometimes you'll want to parameterize an action you define. For that we have another macro ```cpp ACTION_P(name, param) { statements; } ``` For example, ```cpp ACTION_P(Add, n) { return arg0 + n; } ``` will allow you to write ```cpp // Returns argument #0 + 5. ... WillOnce(Add(5)); ``` For convenience, we use the term arguments for the values used to invoke the mock function, and the term parameters for the values used to instantiate an action. Note that you don't need to provide the type of the parameter either. Suppose the parameter is named `param`, you can also use the Google-Mock-defined symbol `param_type` to refer to the type of the parameter as inferred by the compiler. For example, in the body of `ACTIONP(Add, n)` above, you can write `ntype` for the type of `n`. Google Mock also provides `ACTIONP2`, `ACTIONP3`, and etc to support multi-parameter actions. For example, ```cpp ACTION_P2(ReturnDistanceTo, x, y) { double dx = arg0 - x; double dy = arg1 - y; return sqrt(dxdx + dydy); } ``` lets you write ```cpp ... WillOnce(ReturnDistanceTo(5.0, 26.5)); ``` You can view `ACTION` as a degenerated parameterized action where the number of parameters is 0. You can also easily define actions overloaded on the number of parameters: ```cpp ACTION_P(Plus, a) { ... } ACTION_P2(Plus, a, b) { ... } ``` For maximum brevity and reusability, the `ACTION*` macros don't ask you to provide the types of the mock function arguments and the action parameters. Instead, we let the compiler infer the types for us. Sometimes, however, we may want to be more explicit about the types. There are several tricks to do that. For example: ```cpp ACTION(Foo) { // Makes sure arg0 can be converted to int. int n = arg0; ... use n instead of arg0 here ... } ACTION_P(Bar, param) { // Makes sure the type of arg1 is const"
},
{
"data": "::testing::StaticAssertTypeEq<const char*, arg1_type>(); // Makes sure param can be converted to bool. bool flag = param; } ``` where `StaticAssertTypeEq` is a compile-time assertion in Google Test that verifies two types are the same. Sometimes you want to give an action explicit template parameters that cannot be inferred from its value parameters. `ACTION_TEMPLATE()` supports that and can be viewed as an extension to `ACTION()` and `ACTION_P*()`. The syntax: ```cpp ACTION_TEMPLATE(ActionName, HASmTEMPLATEPARAMS(kind1, name1, ..., kindm, name_m), ANDnVALUEPARAMS(p1, ..., pn)) { statements; } ``` defines an action template that takes m explicit template parameters and n value parameters, where m is between 1 and 10, and n is between 0 and 10. `name_i` is the name of the i-th template parameter, and `kind_i` specifies whether it's a `typename`, an integral constant, or a template. `p_i` is the name of the i-th value parameter. Example: ```cpp // DuplicateArg<k, T>(output) converts the k-th argument of the mock // function to type T and copies it to *output. ACTION_TEMPLATE(DuplicateArg, // Note the comma between int and k: HAS2TEMPLATE_PARAMS(int, k, typename, T), AND1VALUE_PARAMS(output)) { *output = T(::testing::get<k>(args)); } ``` To create an instance of an action template, write: ```cpp ActionName<t1, ..., tm>(v1, ..., vn) ``` where the `t`s are the template arguments and the `v`s are the value arguments. The value argument types are inferred by the compiler. For example: ```cpp using ::testing::_; ... int n; EXPECTCALL(mock, Foo(, _)) .WillOnce(DuplicateArg<1, unsigned char>(&n)); ``` If you want to explicitly specify the value argument types, you can provide additional template arguments: ```cpp ActionName<t1, ..., tm, u1, ..., uk>(v1, ..., v_n) ``` where `ui` is the desired type of `vi`. `ACTIONTEMPLATE` and `ACTION`/`ACTIONP*` can be overloaded on the number of value parameters, but not on the number of template parameters. Without the restriction, the meaning of the following is unclear: ```cpp OverloadedAction<int, bool>(x); ``` Are we using a single-template-parameter action where `bool` refers to the type of `x`, or a two-template-parameter action where the compiler is asked to infer the type of `x`? If you are writing a function that returns an `ACTION` object, you'll need to know its type. The type depends on the macro used to define the action and the parameter types. The rule is relatively simple: | Given Definition | Expression | Has Type | |:|:|:-| | `ACTION(Foo)` | `Foo()` | `FooAction` | | `ACTIONTEMPLATE(Foo, HASmTEMPLATEPARAMS(...), AND0VALUEPARAMS())` | `Foo<t1, ..., tm>()` | `FooAction<t1, ..., t_m>` | | `ACTIONP(Bar, param)` | `Bar(intvalue)` | `BarActionP<int>` | | `ACTIONTEMPLATE(Bar, HASmTEMPLATEPARAMS(...), AND1VALUEPARAMS(p1))` | `Bar<t1, ..., tm>(intvalue)` | `FooActionP<t1, ..., tm, int>` | | `ACTIONP2(Baz, p1, p2)` | `Baz(boolvalue, int_value)` | `BazActionP2<bool, int>` | | `ACTIONTEMPLATE(Baz, HASmTEMPLATEPARAMS(...), AND2VALUEPARAMS(p1, p2))`| `Baz<t1, ..., tm>(boolvalue, intvalue)` | `FooActionP2<t1, ..., t_m, bool, int>` | | ... | ... | ... | Note that we have to pick different suffixes (`Action`, `ActionP`, `ActionP2`, and etc) for actions with different numbers of value parameters, or the action definitions cannot be overloaded on the number of them. While the `ACTION*` macros are very convenient, sometimes they are inappropriate. 
For example, despite the tricks shown in the previous recipes, they don't let you directly specify the types of the mock function arguments and the action parameters, which in general leads to unoptimized compiler error messages that can baffle unfamiliar users. They also don't allow overloading actions based on parameter types without jumping through some hoops. An alternative to the `ACTION*` macros is to implement `::testing::ActionInterface<F>`, where `F` is the type of the mock function in which the action will be used."
},
{
"data": "For example: ```cpp template <typename F>class ActionInterface { public: virtual ~ActionInterface(); // Performs the action. Result is the return type of function type // F, and ArgumentTuple is the tuple of arguments of F. // // For example, if F is int(bool, const string&), then Result would // be int, and ArgumentTuple would be ::testing::tuple<bool, const string&>. virtual Result Perform(const ArgumentTuple& args) = 0; }; using ::testing::_; using ::testing::Action; using ::testing::ActionInterface; using ::testing::MakeAction; typedef int IncrementMethod(int*); class IncrementArgumentAction : public ActionInterface<IncrementMethod> { public: virtual int Perform(const ::testing::tuple<int*>& args) { int* p = ::testing::get<0>(args); // Grabs the first argument. return *p++; } }; Action<IncrementMethod> IncrementArgument() { return MakeAction(new IncrementArgumentAction); } ... EXPECTCALL(foo, Baz()) .WillOnce(IncrementArgument()); int n = 5; foo.Baz(&n); // Should return 5 and change n to 6. ``` The previous recipe showed you how to define your own action. This is all good, except that you need to know the type of the function in which the action will be used. Sometimes that can be a problem. For example, if you want to use the action in functions with different types (e.g. like `Return()` and `SetArgPointee()`). If an action can be used in several types of mock functions, we say it's polymorphic. The `MakePolymorphicAction()` function template makes it easy to define such an action: ```cpp namespace testing { template <typename Impl> PolymorphicAction<Impl> MakePolymorphicAction(const Impl& impl); } // namespace testing ``` As an example, let's define an action that returns the second argument in the mock function's argument list. The first step is to define an implementation class: ```cpp class ReturnSecondArgumentAction { public: template <typename Result, typename ArgumentTuple> Result Perform(const ArgumentTuple& args) const { // To get the i-th (0-based) argument, use ::testing::get<i>(args). return ::testing::get<1>(args); } }; ``` This implementation class does not need to inherit from any particular class. What matters is that it must have a `Perform()` method template. This method template takes the mock function's arguments as a tuple in a single argument, and returns the result of the action. It can be either `const` or not, but must be invokable with exactly one template argument, which is the result type. In other words, you must be able to call `Perform<R>(args)` where `R` is the mock function's return type and `args` is its arguments in a tuple. Next, we use `MakePolymorphicAction()` to turn an instance of the implementation class into the polymorphic action we need. It will be convenient to have a wrapper for this: ```cpp using ::testing::MakePolymorphicAction; using ::testing::PolymorphicAction; PolymorphicAction<ReturnSecondArgumentAction> ReturnSecondArgument() { return MakePolymorphicAction(ReturnSecondArgumentAction()); } ``` Now, you can use this polymorphic action the same way you use the built-in ones: ```cpp using ::testing::_; class MockFoo : public Foo { public: MOCK_METHOD2(DoThis, int(bool flag, int n)); MOCK_METHOD3(DoThat, string(int x, const char str1, const char str2)); }; ... MockFoo foo; EXPECTCALL(foo, DoThis(, _)) .WillOnce(ReturnSecondArgument()); EXPECTCALL(foo, DoThat(, , )) .WillOnce(ReturnSecondArgument()); ... foo.DoThis(true, 5); // Will return 5. foo.DoThat(1, \"Hi\", \"Bye\"); // Will return \"Hi\". 
``` When an uninteresting or unexpected call occurs, Google Mock prints the argument values and the stack trace to help you debug. Assertion macros like `EXPECT_THAT` and `EXPECT_EQ` also print the values in question when the assertion fails. Google Mock and Google Test do this using Google Test's user-extensible value printer. This printer knows how to print built-in C++ types, native arrays, STL containers, and any type that supports the `<<` operator. For other types, it prints the raw bytes in the value and hopes that you the user can figure it out. Google Test's documentation explains how to extend the printer to do a better job at printing your particular type than to dump the bytes."
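For illustration, here is a minimal sketch of that extension mechanism: teaching the printer about a user-defined type by providing a `PrintTo()` overload in the type's own namespace (an `operator<<` overload works as well). The `bank::Account` type and its fields are made up for this example; only the `PrintTo()` hook itself comes from Google Test.

```cpp
#include <ostream>

#include "gtest/gtest.h"

namespace bank {

// A hypothetical value type used by the code under test.
struct Account {
  int id;
  double balance;
};

// Teach Google Test's value printer about Account by defining PrintTo()
// in the same namespace as the type, so it is found via argument-dependent
// lookup instead of falling back to a raw byte dump.
void PrintTo(const Account& account, ::std::ostream* os) {
  *os << "Account{id=" << account.id << ", balance=" << account.balance << "}";
}

}  // namespace bank

TEST(PrinterDemo, UsesCustomPrinter) {
  bank::Account account = {42, 3.5};
  // If this assertion fails, the failure message shows the readable
  // "Account{id=42, balance=3.5}" form rather than raw bytes.
  EXPECT_EQ(42, account.id);
}
```

Defining the printer next to the type keeps every assertion and mock-call log that mentions the type readable, without touching the tests themselves.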
}
] |
{
"category": "App Definition and Development",
"file_name": "rdd-programming-guide.md",
"project_name": "Apache Spark",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "layout: global title: RDD Programming Guide description: Spark SPARKVERSIONSHORT programming guide in Java, Scala and Python license: | Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. This will become a table of contents (this text will be scraped). {:toc} At a high level, every Spark application consists of a driver program that runs the user's `main` function and executes various parallel operations on a cluster. The main abstraction Spark provides is a resilient distributed dataset (RDD), which is a collection of elements partitioned across the nodes of the cluster that can be operated on in parallel. RDDs are created by starting with a file in the Hadoop file system (or any other Hadoop-supported file system), or an existing Scala collection in the driver program, and transforming it. Users may also ask Spark to persist an RDD in memory, allowing it to be reused efficiently across parallel operations. Finally, RDDs automatically recover from node failures. A second abstraction in Spark is shared variables that can be used in parallel operations. By default, when Spark runs a function in parallel as a set of tasks on different nodes, it ships a copy of each variable used in the function to each task. Sometimes, a variable needs to be shared across tasks, or between tasks and the driver program. Spark supports two types of shared variables: broadcast variables, which can be used to cache a value in memory on all nodes, and accumulators, which are variables that are only \"added\" to, such as counters and sums. This guide shows each of these features in each of Spark's supported languages. It is easiest to follow along with if you launch Spark's interactive shell -- either `bin/spark-shell` for the Scala shell or `bin/pyspark` for the Python one. <div class=\"codetabs\"> <div data-lang=\"python\" markdown=\"1\"> Spark {{site.SPARK_VERSION}} works with Python 3.8+. It can use the standard CPython interpreter, so C libraries like NumPy can be used. It also works with PyPy 7.3.6+. Spark applications in Python can either be run with the `bin/spark-submit` script which includes Spark at runtime, or by including it in your setup.py as: {% highlight python %} install_requires=[ 'pyspark=={site.SPARK_VERSION}' ] {% endhighlight %} To run Spark applications in Python without pip installing PySpark, use the `bin/spark-submit` script located in the Spark directory. This script will load Spark's Java/Scala libraries and allow you to submit applications to a cluster. You can also use `bin/pyspark` to launch an interactive Python shell. If you wish to access HDFS data, you need to use a build of PySpark linking to your version of HDFS. are also available on the Spark homepage for common HDFS versions. Finally, you need to import some Spark classes into your program. 
Add the following line: {% highlight python %} from pyspark import SparkContext, SparkConf {% endhighlight %} PySpark requires the same minor version of Python in both driver and workers."
},
{
"data": "It uses the default python version in PATH, you can specify which version of Python you want to use by `PYSPARK_PYTHON`, for example: {% highlight bash %} $ PYSPARK_PYTHON=python3.8 bin/pyspark $ PYSPARK_PYTHON=/path-to-your-pypy/pypy bin/spark-submit examples/src/main/python/pi.py {% endhighlight %} </div> <div data-lang=\"scala\" markdown=\"1\"> Spark {{site.SPARKVERSION}} is built and distributed to work with Scala {{site.SCALABINARY_VERSION}} by default. (Spark can be built to work with other versions of Scala, too.) To write applications in Scala, you will need to use a compatible Scala version (e.g. {{site.SCALABINARYVERSION}}.X). To write a Spark application, you need to add a Maven dependency on Spark. Spark is available through Maven Central at: groupId = org.apache.spark artifactId = spark-core{{site.SCALABINARY_VERSION}} version = {{site.SPARK_VERSION}} In addition, if you wish to access an HDFS cluster, you need to add a dependency on `hadoop-client` for your version of HDFS. groupId = org.apache.hadoop artifactId = hadoop-client version = <your-hdfs-version> Finally, you need to import some Spark classes into your program. Add the following lines: {% highlight scala %} import org.apache.spark.SparkContext import org.apache.spark.SparkConf {% endhighlight %} (Before Spark 1.3.0, you need to explicitly `import org.apache.spark.SparkContext._` to enable essential implicit conversions.) </div> <div data-lang=\"java\" markdown=\"1\"> Spark {{site.SPARK_VERSION}} supports for concisely writing functions, otherwise you can use the classes in the package. Note that support for Java 7 was removed in Spark 2.2.0. To write a Spark application in Java, you need to add a dependency on Spark. Spark is available through Maven Central at: groupId = org.apache.spark artifactId = spark-core{{site.SCALABINARY_VERSION}} version = {{site.SPARK_VERSION}} In addition, if you wish to access an HDFS cluster, you need to add a dependency on `hadoop-client` for your version of HDFS. groupId = org.apache.hadoop artifactId = hadoop-client version = <your-hdfs-version> Finally, you need to import some Spark classes into your program. Add the following lines: {% highlight java %} import org.apache.spark.api.java.JavaSparkContext; import org.apache.spark.api.java.JavaRDD; import org.apache.spark.SparkConf; {% endhighlight %} </div> </div> <div class=\"codetabs\"> <div data-lang=\"python\" markdown=\"1\"> The first thing a Spark program must do is to create a object, which tells Spark how to access a cluster. To create a `SparkContext` you first need to build a object that contains information about your application. {% highlight python %} conf = SparkConf().setAppName(appName).setMaster(master) sc = SparkContext(conf=conf) {% endhighlight %} </div> <div data-lang=\"scala\" markdown=\"1\"> The first thing a Spark program must do is to create a object, which tells Spark how to access a cluster. To create a `SparkContext` you first need to build a object that contains information about your application. Only one SparkContext should be active per JVM. You must `stop()` the active SparkContext before creating a new one. {% highlight scala %} val conf = new SparkConf().setAppName(appName).setMaster(master) new SparkContext(conf) {% endhighlight %} </div> <div data-lang=\"java\" markdown=\"1\"> The first thing a Spark program must do is to create a object, which tells Spark how to access a cluster. To create a `SparkContext` you first need to build a object that contains information about your application. 
{% highlight java %} SparkConf conf = new SparkConf().setAppName(appName).setMaster(master); JavaSparkContext sc = new JavaSparkContext(conf); {% endhighlight %} </div> </div> The `appName` parameter is a name for your application to show on the cluster UI. `master` is a Spark, Mesos or YARN cluster URL, or a special \"local\" string to run in local mode. In practice, when running on a cluster, you will not want to hardcode `master` in the program, but rather launch the application with `spark-submit` and receive it there. However, for local testing and unit tests, you can pass \"local\" to run Spark in-process. <div class=\"codetabs\"> <div data-lang=\"python\" markdown=\"1\"> In the PySpark shell, a special interpreter-aware SparkContext is already created for you, in the variable called `sc`. Making your own SparkContext will not work."
},
{
"data": "You can set which master the context connects to using the `--master` argument, and you can add Python .zip, .egg or .py files to the runtime path by passing a comma-separated list to `--py-files`. For third-party Python dependencies, see . You can also add dependencies (e.g. Spark Packages) to your shell session by supplying a comma-separated list of Maven coordinates to the `--packages` argument. Any additional repositories where dependencies might exist (e.g. Sonatype) can be passed to the `--repositories` argument. For example, to run `bin/pyspark` on exactly four cores, use: {% highlight bash %} $ ./bin/pyspark --master \"local[4]\" {% endhighlight %} Or, to also add `code.py` to the search path (in order to later be able to `import code`), use: {% highlight bash %} $ ./bin/pyspark --master \"local[4]\" --py-files code.py {% endhighlight %} For a complete list of options, run `pyspark --help`. Behind the scenes, `pyspark` invokes the more general . It is also possible to launch the PySpark shell in , the enhanced Python interpreter. PySpark works with IPython 1.0.0 and later. To use IPython, set the `PYSPARKDRIVERPYTHON` variable to `ipython` when running `bin/pyspark`: {% highlight bash %} $ PYSPARKDRIVERPYTHON=ipython ./bin/pyspark {% endhighlight %} To use the Jupyter notebook (previously known as the IPython notebook), {% highlight bash %} $ PYSPARKDRIVERPYTHON=jupyter PYSPARKDRIVERPYTHON_OPTS=notebook ./bin/pyspark {% endhighlight %} You can customize the `ipython` or `jupyter` commands by setting `PYSPARKDRIVERPYTHON_OPTS`. After the Jupyter Notebook server is launched, you can create a new notebook from the \"Files\" tab. Inside the notebook, you can input the command `%pylab inline` as part of your notebook before you start to try Spark from the Jupyter notebook. </div> <div data-lang=\"scala\" markdown=\"1\"> In the Spark shell, a special interpreter-aware SparkContext is already created for you, in the variable called `sc`. Making your own SparkContext will not work. You can set which master the context connects to using the `--master` argument, and you can add JARs to the classpath by passing a comma-separated list to the `--jars` argument. You can also add dependencies (e.g. Spark Packages) to your shell session by supplying a comma-separated list of Maven coordinates to the `--packages` argument. Any additional repositories where dependencies might exist (e.g. Sonatype) can be passed to the `--repositories` argument. For example, to run `bin/spark-shell` on exactly four cores, use: {% highlight bash %} $ ./bin/spark-shell --master \"local[4]\" {% endhighlight %} Or, to also add `code.jar` to its classpath, use: {% highlight bash %} $ ./bin/spark-shell --master \"local[4]\" --jars code.jar {% endhighlight %} To include a dependency using Maven coordinates: {% highlight bash %} $ ./bin/spark-shell --master \"local[4]\" --packages \"org.example:example:0.1\" {% endhighlight %} For a complete list of options, run `spark-shell --help`. Behind the scenes, `spark-shell` invokes the more general . </div> </div> Spark revolves around the concept of a resilient distributed dataset (RDD), which is a fault-tolerant collection of elements that can be operated on in parallel. There are two ways to create RDDs: parallelizing an existing collection in your driver program, or referencing a dataset in an external storage system, such as a shared filesystem, HDFS, HBase, or any data source offering a Hadoop InputFormat. 
<div class=\"codetabs\"> <div data-lang=\"python\" markdown=\"1\"> Parallelized collections are created by calling `SparkContext`'s `parallelize` method on an existing iterable or collection in your driver program. The elements of the collection are copied to form a distributed dataset that can be operated on in parallel. For example, here is how to create a parallelized collection holding the numbers 1 to 5: {% highlight python %} data = [1, 2, 3, 4, 5] distData = sc.parallelize(data) {% endhighlight %} Once created, the distributed dataset (`distData`) can be operated on in parallel. For example, we can call `distData.reduce(lambda "
},
{
"data": "a, b: a + b)` to add up the elements of the list. We describe operations on distributed datasets later on. </div> <div data-lang=\"scala\" markdown=\"1\"> Parallelized collections are created by calling `SparkContext`'s `parallelize` method on an existing collection in your driver program (a Scala `Seq`). The elements of the collection are copied to form a distributed dataset that can be operated on in parallel. For example, here is how to create a parallelized collection holding the numbers 1 to 5: {% highlight scala %} val data = Array(1, 2, 3, 4, 5) val distData = sc.parallelize(data) {% endhighlight %} Once created, the distributed dataset (`distData`) can be operated on in parallel. For example, we might call `distData.reduce((a, b) => a + b)` to add up the elements of the array. We describe operations on distributed datasets later on. </div> <div data-lang=\"java\" markdown=\"1\"> Parallelized collections are created by calling `JavaSparkContext`'s `parallelize` method on an existing `Collection` in your driver program. The elements of the collection are copied to form a distributed dataset that can be operated on in parallel. For example, here is how to create a parallelized collection holding the numbers 1 to 5: {% highlight java %} List<Integer> data = Arrays.asList(1, 2, 3, 4, 5); JavaRDD<Integer> distData = sc.parallelize(data); {% endhighlight %} Once created, the distributed dataset (`distData`) can be operated on in parallel. For example, we might call `distData.reduce((a, b) -> a + b)` to add up the elements of the list. We describe operations on distributed datasets later on. </div> </div> One important parameter for parallel collections is the number of partitions to cut the dataset into. Spark will run one task for each partition of the cluster. Typically you want 2-4 partitions for each CPU in your cluster. Normally, Spark tries to set the number of partitions automatically based on your cluster. However, you can also set it manually by passing it as a second parameter to `parallelize` (e.g. `sc.parallelize(data, 10)`). Note: some places in the code use the term slices (a synonym for partitions) to maintain backward compatibility. <div class=\"codetabs\"> <div data-lang=\"python\" markdown=\"1\"> PySpark can create distributed datasets from any storage source supported by Hadoop, including your local file system, HDFS, Cassandra, HBase, , etc. Spark supports text files, , and any other Hadoop . Text file RDDs can be created using `SparkContext`'s `textFile` method. This method takes a URI for the file (either a local path on the machine, or a `hdfs://`, `s3a://`, etc URI) and reads it as a collection of lines. Here is an example invocation: {% highlight python %} >> distFile = sc.textFile(\"data.txt\") {% endhighlight %} Once created, `distFile` can be acted on by dataset operations. For example, we can add up the sizes of all the lines using the `map` and `reduce` operations as follows: `distFile.map(lambda s: len(s)).reduce(lambda a, b: a + b)`. Some notes on reading files with Spark: If using a path on the local filesystem, the file must also be accessible at the same path on worker nodes. Either copy the file to all workers or use a network-mounted shared file system. All of Spark's file-based input methods, including `textFile`, support running on directories, compressed files, and wildcards as well. For example, you can use `textFile(\"/my/directory\")`, `textFile(\"/my/directory/.txt\")`, and `textFile(\"/my/directory/*.gz\")`. 
The `textFile` method also takes an optional second argument for controlling the number of partitions of the file. By default, Spark creates one partition for each block of the file (blocks being 128MB by default in HDFS), but you can also ask for a higher number of partitions by passing a larger value."
},
{
"data": "Note that you cannot have fewer partitions than blocks. Apart from text files, Spark's Python API also supports several other data formats: `SparkContext.wholeTextFiles` lets you read a directory containing multiple small text files, and returns each of them as (filename, content) pairs. This is in contrast with `textFile`, which would return one record per line in each file. `RDD.saveAsPickleFile` and `SparkContext.pickleFile` support saving an RDD in a simple format consisting of pickled Python objects. Batching is used on pickle serialization, with default batch size 10. SequenceFile and Hadoop Input/Output Formats Note this feature is currently marked ```Experimental``` and is intended for advanced users. It may be replaced in future with read/write support based on Spark SQL, in which case Spark SQL is the preferred approach. Writable Support PySpark SequenceFile support loads an RDD of key-value pairs within Java, converts Writables to base Java types, and pickles the resulting Java objects using . When saving an RDD of key-value pairs to SequenceFile, PySpark does the reverse. It unpickles Python objects into Java objects and then converts them to Writables. The following Writables are automatically converted: <table> <thead><tr><th>Writable Type</th><th>Python Type</th></tr></thead> <tr><td>Text</td><td>str</td></tr> <tr><td>IntWritable</td><td>int</td></tr> <tr><td>FloatWritable</td><td>float</td></tr> <tr><td>DoubleWritable</td><td>float</td></tr> <tr><td>BooleanWritable</td><td>bool</td></tr> <tr><td>BytesWritable</td><td>bytearray</td></tr> <tr><td>NullWritable</td><td>None</td></tr> <tr><td>MapWritable</td><td>dict</td></tr> </table> Arrays are not handled out-of-the-box. Users need to specify custom `ArrayWritable` subtypes when reading or writing. When writing, users also need to specify custom converters that convert arrays to custom `ArrayWritable` subtypes. When reading, the default converter will convert custom `ArrayWritable` subtypes to Java `Object[]`, which then get pickled to Python tuples. To get Python `array.array` for arrays of primitive types, users need to specify custom converters. Saving and Loading SequenceFiles Similarly to text files, SequenceFiles can be saved and loaded by specifying the path. The key and value classes can be specified, but for standard Writables this is not required. {% highlight python %} >> rdd = sc.parallelize(range(1, 4)).map(lambda x: (x, \"a\" * x)) >> rdd.saveAsSequenceFile(\"path/to/file\") >> sorted(sc.sequenceFile(\"path/to/file\").collect()) [(1, u'a'), (2, u'aa'), (3, u'aaa')] {% endhighlight %} Saving and Loading Other Hadoop Input/Output Formats PySpark can also read any Hadoop InputFormat or write any Hadoop OutputFormat, for both 'new' and 'old' Hadoop MapReduce APIs. If required, a Hadoop configuration can be passed in as a Python dict. 
Here is an example using the Elasticsearch ESInputFormat: {% highlight python %} $ ./bin/pyspark --jars /path/to/elasticsearch-hadoop.jar >> conf = {\"es.resource\" : \"index/type\"} # assume Elasticsearch is running on localhost defaults >> rdd = sc.newAPIHadoopRDD(\"org.elasticsearch.hadoop.mr.EsInputFormat\", \"org.apache.hadoop.io.NullWritable\", \"org.elasticsearch.hadoop.mr.LinkedMapWritable\", conf=conf) >> rdd.first() # the result is a MapWritable that is converted to a Python dict (u'Elasticsearch ID', {u'field1': True, u'field2': u'Some Text', u'field3': 12345}) {% endhighlight %} Note that, if the InputFormat simply depends on a Hadoop configuration and/or input path, and the key and value classes can easily be converted according to the above table, then this approach should work well for such cases. If you have custom serialized binary data (such as loading data from Cassandra / HBase), then you will first need to transform that data on the Scala/Java side to something which can be handled by pickle's pickler. A trait is provided for this. Simply extend this trait and implement your transformation code in the ```convert``` method. Remember to ensure that this class, along with any dependencies required to access your ```InputFormat```, are packaged into your Spark job jar and included on the PySpark classpath. See the and the for examples of using Cassandra / HBase ```InputFormat``` and ```OutputFormat``` with custom converters. </div> <div data-lang=\"scala\" markdown=\"1\"> Spark can create distributed datasets from any storage source supported by Hadoop, including your local file system, HDFS, Cassandra, HBase, ,"
},
{
"data": "Spark supports text files, , and any other Hadoop . Text file RDDs can be created using `SparkContext`'s `textFile` method. This method takes a URI for the file (either a local path on the machine, or a `hdfs://`, `s3a://`, etc URI) and reads it as a collection of lines. Here is an example invocation: {% highlight scala %} scala> val distFile = sc.textFile(\"data.txt\") distFile: org.apache.spark.rdd.RDD[String] = data.txt MapPartitionsRDD[10] at textFile at <console>:26 {% endhighlight %} Once created, `distFile` can be acted on by dataset operations. For example, we can add up the sizes of all the lines using the `map` and `reduce` operations as follows: `distFile.map(s => s.length).reduce((a, b) => a + b)`. Some notes on reading files with Spark: If using a path on the local filesystem, the file must also be accessible at the same path on worker nodes. Either copy the file to all workers or use a network-mounted shared file system. All of Spark's file-based input methods, including `textFile`, support running on directories, compressed files, and wildcards as well. For example, you can use `textFile(\"/my/directory\")`, `textFile(\"/my/directory/.txt\")`, and `textFile(\"/my/directory/*.gz\")`. When multiple files are read, the order of the partitions depends on the order the files are returned from the filesystem. It may or may not, for example, follow the lexicographic ordering of the files by path. Within a partition, elements are ordered according to their order in the underlying file. The `textFile` method also takes an optional second argument for controlling the number of partitions of the file. By default, Spark creates one partition for each block of the file (blocks being 128MB by default in HDFS), but you can also ask for a higher number of partitions by passing a larger value. Note that you cannot have fewer partitions than blocks. Apart from text files, Spark's Scala API also supports several other data formats: `SparkContext.wholeTextFiles` lets you read a directory containing multiple small text files, and returns each of them as (filename, content) pairs. This is in contrast with `textFile`, which would return one record per line in each file. Partitioning is determined by data locality which, in some cases, may result in too few partitions. For those cases, `wholeTextFiles` provides an optional second argument for controlling the minimal number of partitions. For , use SparkContext's `sequenceFile interface, like and . In addition, Spark allows you to specify native types for a few common Writables; for example, `sequenceFile[Int, String]` will automatically read IntWritables and Texts. For other Hadoop InputFormats, you can use the `SparkContext.hadoopRDD` method, which takes an arbitrary `JobConf` and input format class, key class and value class. Set these the same way you would for a Hadoop job with your input source. You can also use `SparkContext.newAPIHadoopRDD` for InputFormats based on the \"new\" MapReduce API (`org.apache.hadoop.mapreduce`). `RDD.saveAsObjectFile` and `SparkContext.objectFile` support saving an RDD in a simple format consisting of serialized Java objects. While this is not as efficient as specialized formats like Avro, it offers an easy way to save any RDD. </div> <div data-lang=\"java\" markdown=\"1\"> Spark can create distributed datasets from any storage source supported by Hadoop, including your local file system, HDFS, Cassandra, HBase, , etc. Spark supports text files, , and any other Hadoop . 
Text file RDDs can be created using `SparkContext`'s `textFile` method. This method takes a URI for the file (either a local path on the machine, or a `hdfs://`, `s3a://`, etc URI) and reads it as a collection of lines. Here is an example invocation: {% highlight java %} JavaRDD<String> distFile = sc.textFile(\"data.txt\"); {% endhighlight %} Once created, `distFile` can be acted on by dataset operations."
},
{
"data": "For example, we can add up the sizes of all the lines using the `map` and `reduce` operations as follows: `distFile.map(s -> s.length()).reduce((a, b) -> a + b)`. Some notes on reading files with Spark: If using a path on the local filesystem, the file must also be accessible at the same path on worker nodes. Either copy the file to all workers or use a network-mounted shared file system. All of Spark's file-based input methods, including `textFile`, support running on directories, compressed files, and wildcards as well. For example, you can use `textFile(\"/my/directory\")`, `textFile(\"/my/directory/.txt\")`, and `textFile(\"/my/directory/*.gz\")`. The `textFile` method also takes an optional second argument for controlling the number of partitions of the file. By default, Spark creates one partition for each block of the file (blocks being 128MB by default in HDFS), but you can also ask for a higher number of partitions by passing a larger value. Note that you cannot have fewer partitions than blocks. Apart from text files, Spark's Java API also supports several other data formats: `JavaSparkContext.wholeTextFiles` lets you read a directory containing multiple small text files, and returns each of them as (filename, content) pairs. This is in contrast with `textFile`, which would return one record per line in each file. For , use SparkContext's `sequenceFile interface, like and . For other Hadoop InputFormats, you can use the `JavaSparkContext.hadoopRDD` method, which takes an arbitrary `JobConf` and input format class, key class and value class. Set these the same way you would for a Hadoop job with your input source. You can also use `JavaSparkContext.newAPIHadoopRDD` for InputFormats based on the \"new\" MapReduce API (`org.apache.hadoop.mapreduce`). `JavaRDD.saveAsObjectFile` and `JavaSparkContext.objectFile` support saving an RDD in a simple format consisting of serialized Java objects. While this is not as efficient as specialized formats like Avro, it offers an easy way to save any RDD. </div> </div> RDDs support two types of operations: transformations, which create a new dataset from an existing one, and actions, which return a value to the driver program after running a computation on the dataset. For example, `map` is a transformation that passes each dataset element through a function and returns a new RDD representing the results. On the other hand, `reduce` is an action that aggregates all the elements of the RDD using some function and returns the final result to the driver program (although there is also a parallel `reduceByKey` that returns a distributed dataset). All transformations in Spark are <i>lazy</i>, in that they do not compute their results right away. Instead, they just remember the transformations applied to some base dataset (e.g. a file). The transformations are only computed when an action requires a result to be returned to the driver program. This design enables Spark to run more efficiently. For example, we can realize that a dataset created through `map` will be used in a `reduce` and return only the result of the `reduce` to the driver, rather than the larger mapped dataset. By default, each transformed RDD may be recomputed each time you run an action on it. However, you may also persist an RDD in memory using the `persist` (or `cache`) method, in which case Spark will keep the elements around on the cluster for much faster access the next time you query it. 
There is also support for persisting RDDs on disk, or replicated across multiple nodes. <div class=\"codetabs\"> <div data-lang=\"python\" markdown=\"1\"> To illustrate RDD basics, consider the simple program below: {% highlight python %} lines = sc.textFile(\"data.txt\") lineLengths = lines.map(lambda s: len(s)) totalLength ="
},
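The notes above about directories, wildcards, partition counts, and `wholeTextFiles` can be made concrete with a short sketch (assuming an existing `SparkContext` `sc`; the paths are hypothetical):

{% highlight python %}
# Ask for at least 8 partitions instead of the default of one per HDFS block.
lines = sc.textFile("/my/directory/*.gz", minPartitions=8)

# Read a directory of small files as (filename, content) pairs.
files = sc.wholeTextFiles("/my/directory")
print(files.keys().take(5))   # first few file names
{% endhighlight %}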
{
"data": "a, b: a + b) {% endhighlight %} The first line defines a base RDD from an external file. This dataset is not loaded in memory or otherwise acted on: `lines` is merely a pointer to the file. The second line defines `lineLengths` as the result of a `map` transformation. Again, `lineLengths` is not immediately computed, due to laziness. Finally, we run `reduce`, which is an action. At this point Spark breaks the computation into tasks to run on separate machines, and each machine runs both its part of the map and a local reduction, returning only its answer to the driver program. If we also wanted to use `lineLengths` again later, we could add: {% highlight python %} lineLengths.persist() {% endhighlight %} before the `reduce`, which would cause `lineLengths` to be saved in memory after the first time it is computed. </div> <div data-lang=\"scala\" markdown=\"1\"> To illustrate RDD basics, consider the simple program below: {% highlight scala %} val lines = sc.textFile(\"data.txt\") val lineLengths = lines.map(s => s.length) val totalLength = lineLengths.reduce((a, b) => a + b) {% endhighlight %} The first line defines a base RDD from an external file. This dataset is not loaded in memory or otherwise acted on: `lines` is merely a pointer to the file. The second line defines `lineLengths` as the result of a `map` transformation. Again, `lineLengths` is not immediately computed, due to laziness. Finally, we run `reduce`, which is an action. At this point Spark breaks the computation into tasks to run on separate machines, and each machine runs both its part of the map and a local reduction, returning only its answer to the driver program. If we also wanted to use `lineLengths` again later, we could add: {% highlight scala %} lineLengths.persist() {% endhighlight %} before the `reduce`, which would cause `lineLengths` to be saved in memory after the first time it is computed. </div> <div data-lang=\"java\" markdown=\"1\"> To illustrate RDD basics, consider the simple program below: {% highlight java %} JavaRDD<String> lines = sc.textFile(\"data.txt\"); JavaRDD<Integer> lineLengths = lines.map(s -> s.length()); int totalLength = lineLengths.reduce((a, b) -> a + b); {% endhighlight %} The first line defines a base RDD from an external file. This dataset is not loaded in memory or otherwise acted on: `lines` is merely a pointer to the file. The second line defines `lineLengths` as the result of a `map` transformation. Again, `lineLengths` is not immediately computed, due to laziness. Finally, we run `reduce`, which is an action. At this point Spark breaks the computation into tasks to run on separate machines, and each machine runs both its part of the map and a local reduction, returning only its answer to the driver program. If we also wanted to use `lineLengths` again later, we could add: {% highlight java %} lineLengths.persist(StorageLevel.MEMORY_ONLY()); {% endhighlight %} before the `reduce`, which would cause `lineLengths` to be saved in memory after the first time it is computed. </div> </div> <div class=\"codetabs\"> <div data-lang=\"python\" markdown=\"1\"> Spark's API relies heavily on passing functions in the driver program to run on the cluster. There are three recommended ways to do this: , for simple functions that can be written as an expression. (Lambdas do not support multi-statement functions or statements that do not return a value.) Local `def`s inside the function calling into Spark, for longer code. Top-level functions in a module. 
For example, to pass a longer function than can be supported using a `lambda`, consider the code below: {% highlight python %} \"\"\"MyScript.py\"\"\" if __name__ == \"__main__\": def myFunc(s): words = s.split(\" \") return len(words) sc = SparkContext(...)"
},
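To make the benefit of `persist()` concrete, here is a hedged sketch (assuming an existing `SparkContext` `sc` and a readable `data.txt`) in which the cached RDD is reused by a second action instead of being recomputed:

{% highlight python %}
lines = sc.textFile("data.txt")
line_lengths = lines.map(lambda s: len(s))
line_lengths.persist()                            # mark for in-memory caching

total = line_lengths.reduce(lambda a, b: a + b)   # first action: computes and caches
longest = line_lengths.max()                      # second action: served from the cache
print(total, longest)
{% endhighlight %}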
{
"data": "{% endhighlight %} Note that while it is also possible to pass a reference to a method in a class instance (as opposed to a singleton object), this requires sending the object that contains that class along with the method. For example, consider: {% highlight python %} class MyClass(object): def func(self, s): return s def doStuff(self, rdd): return rdd.map(self.func) {% endhighlight %} Here, if we create a `new MyClass` and call `doStuff` on it, the `map` inside there references the `func` method of that `MyClass` instance, so the whole object needs to be sent to the cluster. In a similar way, accessing fields of the outer object will reference the whole object: {% highlight python %} class MyClass(object): def init(self): self.field = \"Hello\" def doStuff(self, rdd): return rdd.map(lambda s: self.field + s) {% endhighlight %} To avoid this issue, the simplest way is to copy `field` into a local variable instead of accessing it externally: {% highlight python %} def doStuff(self, rdd): field = self.field return rdd.map(lambda s: field + s) {% endhighlight %} </div> <div data-lang=\"scala\" markdown=\"1\"> Spark's API relies heavily on passing functions in the driver program to run on the cluster. There are two recommended ways to do this: , which can be used for short pieces of code. Static methods in a global singleton object. For example, you can define `object MyFunctions` and then pass `MyFunctions.func1`, as follows: {% highlight scala %} object MyFunctions { def func1(s: String): String = { ... } } myRdd.map(MyFunctions.func1) {% endhighlight %} Note that while it is also possible to pass a reference to a method in a class instance (as opposed to a singleton object), this requires sending the object that contains that class along with the method. For example, consider: {% highlight scala %} class MyClass { def func1(s: String): String = { ... } def doStuff(rdd: RDD[String]): RDD[String] = { rdd.map(func1) } } {% endhighlight %} Here, if we create a new `MyClass` instance and call `doStuff` on it, the `map` inside there references the `func1` method of that `MyClass` instance, so the whole object needs to be sent to the cluster. It is similar to writing `rdd.map(x => this.func1(x))`. In a similar way, accessing fields of the outer object will reference the whole object: {% highlight scala %} class MyClass { val field = \"Hello\" def doStuff(rdd: RDD[String]): RDD[String] = { rdd.map(x => field + x) } } {% endhighlight %} is equivalent to writing `rdd.map(x => this.field + x)`, which references all of `this`. To avoid this issue, the simplest way is to copy `field` into a local variable instead of accessing it externally: {% highlight scala %} def doStuff(rdd: RDD[String]): RDD[String] = { val field_ = this.field rdd.map(x => field_ + x) } {% endhighlight %} </div> <div data-lang=\"java\" markdown=\"1\"> Spark's API relies heavily on passing functions in the driver program to run on the cluster. In Java, functions are represented by classes implementing the interfaces in the package. There are two ways to create such functions: Implement the Function interfaces in your own class, either as an anonymous inner class or a named one, and pass an instance of it to Spark. Use to concisely define an implementation. While much of this guide uses lambda syntax for conciseness, it is easy to use all the same APIs in long-form. 
For example, we could have written our code above as follows: {% highlight java %} JavaRDD<String> lines = sc.textFile(\"data.txt\"); JavaRDD<Integer> lineLengths = lines.map(new Function<String, Integer>() { public Integer call(String s) { return s.length(); } }); int totalLength ="
},
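The third Python option listed above — a top-level function in a module — is not illustrated in the snippets, so here is a brief sketch; the module name `mytextutils` is hypothetical and must be importable on the workers (for example by shipping it with `--py-files`):

{% highlight python %}
# mytextutils.py
def word_count(line):
    return len(line.split(" "))

# driver code
import mytextutils

counts = sc.textFile("data.txt").map(mytextutils.word_count)
print(counts.sum())
{% endhighlight %}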
{
"data": "Function2<Integer, Integer, Integer>() { public Integer call(Integer a, Integer b) { return a + b; } }); {% endhighlight %} Or, if writing the functions inline is unwieldy: {% highlight java %} class GetLength implements Function<String, Integer> { public Integer call(String s) { return s.length(); } } class Sum implements Function2<Integer, Integer, Integer> { public Integer call(Integer a, Integer b) { return a + b; } } JavaRDD<String> lines = sc.textFile(\"data.txt\"); JavaRDD<Integer> lineLengths = lines.map(new GetLength()); int totalLength = lineLengths.reduce(new Sum()); {% endhighlight %} Note that anonymous inner classes in Java can also access variables in the enclosing scope as long as they are marked `final`. Spark will ship copies of these variables to each worker node as it does for other languages. </div> </div> One of the harder things about Spark is understanding the scope and life cycle of variables and methods when executing code across a cluster. RDD operations that modify variables outside of their scope can be a frequent source of confusion. In the example below we'll look at code that uses `foreach()` to increment a counter, but similar issues can occur for other operations as well. Consider the naive RDD element sum below, which may behave differently depending on whether execution is happening within the same JVM. A common example of this is when running Spark in `local` mode (`--master = \"local[n]\"`) versus deploying a Spark application to a cluster (e.g. via spark-submit to YARN): <div class=\"codetabs\"> <div data-lang=\"python\" markdown=\"1\"> {% highlight python %} counter = 0 rdd = sc.parallelize(data) def increment_counter(x): global counter counter += x rdd.foreach(increment_counter) print(\"Counter value: \", counter) {% endhighlight %} </div> <div data-lang=\"scala\" markdown=\"1\"> {% highlight scala %} var counter = 0 var rdd = sc.parallelize(data) // Wrong: Don't do this!! rdd.foreach(x => counter += x) println(\"Counter value: \" + counter) {% endhighlight %} </div> <div data-lang=\"java\" markdown=\"1\"> {% highlight java %} int counter = 0; JavaRDD<Integer> rdd = sc.parallelize(data); // Wrong: Don't do this!! rdd.foreach(x -> counter += x); println(\"Counter value: \" + counter); {% endhighlight %} </div> </div> The behavior of the above code is undefined, and may not work as intended. To execute jobs, Spark breaks up the processing of RDD operations into tasks, each of which is executed by an executor. Prior to execution, Spark computes the task's closure. The closure is those variables and methods which must be visible for the executor to perform its computations on the RDD (in this case `foreach()`). This closure is serialized and sent to each executor. The variables within the closure sent to each executor are now copies and thus, when counter is referenced within the `foreach` function, it's no longer the counter on the driver node. There is still a counter in the memory of the driver node but this is no longer visible to the executors! The executors only see the copy from the serialized closure. Thus, the final value of counter will still be zero since all operations on counter were referencing the value within the serialized closure. In local mode, in some circumstances, the `foreach` function will actually execute within the same JVM as the driver and will reference the same original counter, and may actually update it. To ensure well-defined behavior in these sorts of scenarios one should use an . 
Accumulators in Spark are used specifically to provide a mechanism for safely updating a variable when execution is split up across worker nodes in a cluster. The Accumulators section of this guide discusses these in more"
},
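For reference, a hedged sketch of the accumulator-based alternative to the broken counter above (assuming an existing `SparkContext` `sc` and a Python list `data` as in the earlier snippets); accumulators themselves are covered in detail later in this guide:

{% highlight python %}
accum = sc.accumulator(0)

def add_to_accum(x):
    accum.add(x)            # task-side updates are merged back on the driver

sc.parallelize(data).foreach(add_to_accum)
print("Counter value:", accum.value)   # read the result only on the driver
{% endhighlight %}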
{
"data": "In general, closures - constructs like loops or locally defined methods, should not be used to mutate some global state. Spark does not define or guarantee the behavior of mutations to objects referenced from outside of closures. Some code that does this may work in local mode, but that's just by accident and such code will not behave as expected in distributed mode. Use an Accumulator instead if some global aggregation is needed. Another common idiom is attempting to print out the elements of an RDD using `rdd.foreach(println)` or `rdd.map(println)`. On a single machine, this will generate the expected output and print all the RDD's elements. However, in `cluster` mode, the output to `stdout` being called by the executors is now writing to the executor's `stdout` instead, not the one on the driver, so `stdout` on the driver won't show these! To print all elements on the driver, one can use the `collect()` method to first bring the RDD to the driver node thus: `rdd.collect().foreach(println)`. This can cause the driver to run out of memory, though, because `collect()` fetches the entire RDD to a single machine; if you only need to print a few elements of the RDD, a safer approach is to use the `take()`: `rdd.take(100).foreach(println)`. <div class=\"codetabs\"> <div data-lang=\"python\" markdown=\"1\"> While most Spark operations work on RDDs containing any type of objects, a few special operations are only available on RDDs of key-value pairs. The most common ones are distributed \"shuffle\" operations, such as grouping or aggregating the elements by a key. In Python, these operations work on RDDs containing built-in Python tuples such as `(1, 2)`. Simply create such tuples and then call your desired operation. For example, the following code uses the `reduceByKey` operation on key-value pairs to count how many times each line of text occurs in a file: {% highlight python %} lines = sc.textFile(\"data.txt\") pairs = lines.map(lambda s: (s, 1)) counts = pairs.reduceByKey(lambda a, b: a + b) {% endhighlight %} We could also use `counts.sortByKey()`, for example, to sort the pairs alphabetically, and finally `counts.collect()` to bring them back to the driver program as a list of objects. </div> <div data-lang=\"scala\" markdown=\"1\"> While most Spark operations work on RDDs containing any type of objects, a few special operations are only available on RDDs of key-value pairs. The most common ones are distributed \"shuffle\" operations, such as grouping or aggregating the elements by a key. In Scala, these operations are automatically available on RDDs containing objects (the built-in tuples in the language, created by simply writing `(a, b)`). The key-value pair operations are available in the class, which automatically wraps around an RDD of tuples. For example, the following code uses the `reduceByKey` operation on key-value pairs to count how many times each line of text occurs in a file: {% highlight scala %} val lines = sc.textFile(\"data.txt\") val pairs = lines.map(s => (s, 1)) val counts = pairs.reduceByKey((a, b) => a + b) {% endhighlight %} We could also use `counts.sortByKey()`, for example, to sort the pairs alphabetically, and finally `counts.collect()` to bring them back to the driver program as an array of objects. Note: when using custom objects as the key in key-value pair operations, you must be sure that a custom `equals()` method is accompanied with a matching `hashCode()` method. 
For full details, see the contract outlined in the [Object.hashCode() documentation](https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html#hashCode--). </div> <div data-lang=\"java\" markdown=\"1\"> While most Spark operations work on RDDs containing any type of objects, a few special operations are only available on RDDs of key-value pairs. The most common ones are distributed \"shuffle\" operations, such as grouping or aggregating the elements by a"
},
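A short sketch of the safe printing pattern described above (assuming an existing `SparkContext` `sc`):

{% highlight python %}
rdd = sc.parallelize(range(1000))

# Bounded and cluster-safe: only ten elements travel back to the driver.
for x in rdd.take(10):
    print(x)

# collect() is appropriate only when the result is known to fit in driver memory.
print(rdd.filter(lambda x: x % 100 == 0).collect())
{% endhighlight %}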
{
"data": "In Java, key-value pairs are represented using the class from the Scala standard library. You can simply call `new Tuple2(a, b)` to create a tuple, and access its fields later with `tuple.1()` and `tuple.2()`. RDDs of key-value pairs are represented by the class. You can construct JavaPairRDDs from JavaRDDs using special versions of the `map` operations, like `mapToPair` and `flatMapToPair`. The JavaPairRDD will have both standard RDD functions and special key-value ones. For example, the following code uses the `reduceByKey` operation on key-value pairs to count how many times each line of text occurs in a file: {% highlight scala %} JavaRDD<String> lines = sc.textFile(\"data.txt\"); JavaPairRDD<String, Integer> pairs = lines.mapToPair(s -> new Tuple2(s, 1)); JavaPairRDD<String, Integer> counts = pairs.reduceByKey((a, b) -> a + b); {% endhighlight %} We could also use `counts.sortByKey()`, for example, to sort the pairs alphabetically, and finally `counts.collect()` to bring them back to the driver program as an array of objects. Note: when using custom objects as the key in key-value pair operations, you must be sure that a custom `equals()` method is accompanied with a matching `hashCode()` method. For full details, see the contract outlined in the [Object.hashCode() documentation](https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html#hashCode--). </div> </div> The following table lists some of the common transformations supported by Spark. Refer to the RDD API doc (, , , ) and pair RDD functions doc (, ) for details. <table> <thead><tr><th style=\"width:25%\">Transformation</th><th>Meaning</th></tr></thead> <tr> <td> <b>map</b>(<i>func</i>) </td> <td> Return a new distributed dataset formed by passing each element of the source through a function <i>func</i>. </td> </tr> <tr> <td> <b>filter</b>(<i>func</i>) </td> <td> Return a new dataset formed by selecting those elements of the source on which <i>func</i> returns true. </td> </tr> <tr> <td> <b>flatMap</b>(<i>func</i>) </td> <td> Similar to map, but each input item can be mapped to 0 or more output items (so <i>func</i> should return a Seq rather than a single item). </td> </tr> <tr> <td> <b>mapPartitions</b>(<i>func</i>) <a name=\"MapPartLink\"></a> </td> <td> Similar to map, but runs separately on each partition (block) of the RDD, so <i>func</i> must be of type Iterator<T> => Iterator<U> when running on an RDD of type T. </td> </tr> <tr> <td> <b>mapPartitionsWithIndex</b>(<i>func</i>) </td> <td> Similar to mapPartitions, but also provides <i>func</i> with an integer value representing the index of the partition, so <i>func</i> must be of type (Int, Iterator<T>) => Iterator<U> when running on an RDD of type T. </td> </tr> <tr> <td> <b>sample</b>(<i>withReplacement</i>, <i>fraction</i>, <i>seed</i>) </td> <td> Sample a fraction <i>fraction</i> of the data, with or without replacement, using a given random number generator seed. </td> </tr> <tr> <td> <b>union</b>(<i>otherDataset</i>) </td> <td> Return a new dataset that contains the union of the elements in the source dataset and the argument. </td> </tr> <tr> <td> <b>intersection</b>(<i>otherDataset</i>) </td> <td> Return a new RDD that contains the intersection of elements in the source dataset and the argument. 
</td> </tr> <tr> <td> <b>distinct</b>([<i>numPartitions</i>])) </td> <td> Return a new dataset that contains the distinct elements of the source dataset.</td> </tr> <tr> <td> <b>groupByKey</b>([<i>numPartitions</i>]) <a name=\"GroupByLink\"></a> </td> <td> When called on a dataset of (K, V) pairs, returns a dataset of (K, Iterable<V>) pairs. <br /> <b>Note:</b> If you are grouping in order to perform an aggregation (such as a sum or average) over each key, using <code>reduceByKey</code> or <code>aggregateByKey</code> will yield much better performance. <br /> <b>Note:</b> By default, the level of parallelism in the output depends on the number of partitions of the parent RDD. You can pass an optional <code>numPartitions</code> argument to set a different number of"
},
{
"data": "</td> </tr> <tr> <td> <b>reduceByKey</b>(<i>func</i>, [<i>numPartitions</i>]) <a name=\"ReduceByLink\"></a> </td> <td> When called on a dataset of (K, V) pairs, returns a dataset of (K, V) pairs where the values for each key are aggregated using the given reduce function <i>func</i>, which must be of type (V,V) => V. Like in <code>groupByKey</code>, the number of reduce tasks is configurable through an optional second argument. </td> </tr> <tr> <td> <b>aggregateByKey</b>(<i>zeroValue</i>)(<i>seqOp</i>, <i>combOp</i>, [<i>numPartitions</i>]) <a name=\"AggregateByLink\"></a> </td> <td> When called on a dataset of (K, V) pairs, returns a dataset of (K, U) pairs where the values for each key are aggregated using the given combine functions and a neutral \"zero\" value. Allows an aggregated value type that is different than the input value type, while avoiding unnecessary allocations. Like in <code>groupByKey</code>, the number of reduce tasks is configurable through an optional second argument. </td> </tr> <tr> <td> <b>sortByKey</b>([<i>ascending</i>], [<i>numPartitions</i>]) <a name=\"SortByLink\"></a> </td> <td> When called on a dataset of (K, V) pairs where K implements Ordered, returns a dataset of (K, V) pairs sorted by keys in ascending or descending order, as specified in the boolean <code>ascending</code> argument.</td> </tr> <tr> <td> <b>join</b>(<i>otherDataset</i>, [<i>numPartitions</i>]) <a name=\"JoinLink\"></a> </td> <td> When called on datasets of type (K, V) and (K, W), returns a dataset of (K, (V, W)) pairs with all pairs of elements for each key. Outer joins are supported through <code>leftOuterJoin</code>, <code>rightOuterJoin</code>, and <code>fullOuterJoin</code>. </td> </tr> <tr> <td> <b>cogroup</b>(<i>otherDataset</i>, [<i>numPartitions</i>]) <a name=\"CogroupLink\"></a> </td> <td> When called on datasets of type (K, V) and (K, W), returns a dataset of (K, (Iterable<V>, Iterable<W>)) tuples. This operation is also called <code>groupWith</code>. </td> </tr> <tr> <td> <b>cartesian</b>(<i>otherDataset</i>) </td> <td> When called on datasets of types T and U, returns a dataset of (T, U) pairs (all pairs of elements). </td> </tr> <tr> <td> <b>pipe</b>(<i>command</i>, <i>[envVars]</i>) </td> <td> Pipe each partition of the RDD through a shell command, e.g. a Perl or bash script. RDD elements are written to the process's stdin and lines output to its stdout are returned as an RDD of strings. </td> </tr> <tr> <td> <b>coalesce</b>(<i>numPartitions</i>) <a name=\"CoalesceLink\"></a> </td> <td> Decrease the number of partitions in the RDD to numPartitions. Useful for running operations more efficiently after filtering down a large dataset. </td> </tr> <tr> <td> <b>repartition</b>(<i>numPartitions</i>) </td> <td> Reshuffle the data in the RDD randomly to create either more or fewer partitions and balance it across them. This always shuffles all data over the network. <a name=\"RepartitionLink\"></a></td> </tr> <tr> <td> <b>repartitionAndSortWithinPartitions</b>(<i>partitioner</i>) <a name=\"Repartition2Link\"></a></td> <td> Repartition the RDD according to the given partitioner and, within each resulting partition, sort records by their keys. This is more efficient than calling <code>repartition</code> and then sorting within each partition because it can push the sorting down into the shuffle machinery. </td> </tr> </table> The following table lists some of the common actions supported by Spark. Refer to the RDD API doc (, , , ) and pair RDD functions doc (, ) for details. 
<table> <thead><tr><th>Action</th><th>Meaning</th></tr></thead> <tr> <td> <b>reduce</b>(<i>func</i>) </td> <td> Aggregate the elements of the dataset using a function <i>func</i> (which takes two arguments and returns one). The function should be commutative and associative so that it can be computed correctly in parallel. </td> </tr> <tr> <td> <b>collect</b>() </td> <td> Return all the elements of the dataset as an array at the driver program. This is usually useful after a filter or other operation that returns a sufficiently small subset of the data. </td> </tr> <tr> <td> <b>count</b>() </td> <td> Return the number of elements in the dataset. </td> </tr> <tr> <td> <b>first</b>() </td> <td> Return the first element of the dataset (similar to take(1)). </td> </tr> <tr> <td> <b>take</b>(<i>n</i>) </td> <td> Return an array with the first <i>n</i> elements of the"
},
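A hedged sketch exercising a few of the pair transformations listed above (assuming an existing `SparkContext` `sc`; the data is illustrative):

{% highlight python %}
sales = sc.parallelize([("apples", 3), ("pears", 2), ("apples", 4)])
prices = sc.parallelize([("apples", 1.5), ("pears", 2.0)])

totals = sales.reduceByKey(lambda a, b: a + b)   # ("apples", 7), ("pears", 2)
ordered = totals.sortByKey()                      # sorted by key
joined = ordered.join(prices)                     # ("apples", (7, 1.5)), ("pears", (2, 2.0))
print(joined.collect())
{% endhighlight %}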
{
"data": "</td> </tr> <tr> <td> <b>takeSample</b>(<i>withReplacement</i>, <i>num</i>, [<i>seed</i>]) </td> <td> Return an array with a random sample of <i>num</i> elements of the dataset, with or without replacement, optionally pre-specifying a random number generator seed.</td> </tr> <tr> <td> <b>takeOrdered</b>(<i>n</i>, <i>[ordering]</i>) </td> <td> Return the first <i>n</i> elements of the RDD using either their natural order or a custom comparator. </td> </tr> <tr> <td> <b>saveAsTextFile</b>(<i>path</i>) </td> <td> Write the elements of the dataset as a text file (or set of text files) in a given directory in the local filesystem, HDFS or any other Hadoop-supported file system. Spark will call toString on each element to convert it to a line of text in the file. </td> </tr> <tr> <td> <b>saveAsSequenceFile</b>(<i>path</i>) <br /> (Java and Scala) </td> <td> Write the elements of the dataset as a Hadoop SequenceFile in a given path in the local filesystem, HDFS or any other Hadoop-supported file system. This is available on RDDs of key-value pairs that implement Hadoop's Writable interface. In Scala, it is also available on types that are implicitly convertible to Writable (Spark includes conversions for basic types like Int, Double, String, etc). </td> </tr> <tr> <td> <b>saveAsObjectFile</b>(<i>path</i>) <br /> (Java and Scala) </td> <td> Write the elements of the dataset in a simple format using Java serialization, which can then be loaded using <code>SparkContext.objectFile()</code>. </td> </tr> <tr> <td> <b>countByKey</b>() <a name=\"CountByLink\"></a> </td> <td> Only available on RDDs of type (K, V). Returns a hashmap of (K, Int) pairs with the count of each key. </td> </tr> <tr> <td> <b>foreach</b>(<i>func</i>) </td> <td> Run a function <i>func</i> on each element of the dataset. This is usually done for side effects such as updating an <a href=\"#accumulators\">Accumulator</a> or interacting with external storage systems. <br /><b>Note</b>: modifying variables other than Accumulators outside of the <code>foreach()</code> may result in undefined behavior. See <a href=\"#understanding-closures\">Understanding closures</a> for more details.</td> </tr> </table> The Spark RDD API also exposes asynchronous versions of some actions, like `foreachAsync` for `foreach`, which immediately return a `FutureAction` to the caller instead of blocking on completion of the action. This can be used to manage or wait for the asynchronous execution of the action. Certain operations within Spark trigger an event known as the shuffle. The shuffle is Spark's mechanism for re-distributing data so that it's grouped differently across partitions. This typically involves copying data across executors and machines, making the shuffle a complex and costly operation. To understand what happens during the shuffle, we can consider the example of the operation. The `reduceByKey` operation generates a new RDD where all values for a single key are combined into a tuple - the key and the result of executing a reduce function against all values associated with that key. The challenge is that not all values for a single key necessarily reside on the same partition, or even the same machine, but they must be co-located to compute the result. In Spark, data is generally not distributed across partitions to be in the necessary place for a specific operation. 
During computations, a single task will operate on a single partition - thus, to organize all the data for a single `reduceByKey` reduce task to execute, Spark needs to perform an all-to-all operation. It must read from all partitions to find all the values for all keys, and then bring together values across partitions to compute the final result for each key - this is called the shuffle. Although the set of elements in each partition of newly shuffled data will be deterministic, and so is the ordering of partitions themselves, the ordering of these elements is"
},
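The point about element ordering after a shuffle can be seen in a small sketch (assuming an existing `SparkContext` `sc`); `sortBy` is one of the remedies discussed just below:

{% highlight python %}
pairs = sc.parallelize([("b", 2), ("a", 1), ("b", 3), ("a", 4)], 4)

summed = pairs.reduceByKey(lambda a, b: a + b)   # shuffles; partition contents are unordered

ordered = summed.sortBy(lambda kv: kv[0])        # impose a predictable order explicitly
print(ordered.collect())                         # [('a', 5), ('b', 5)]
{% endhighlight %}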
{
"data": "If one desires predictably ordered data following shuffle then it's possible to use: `mapPartitions` to sort each partition using, for example, `.sorted` `repartitionAndSortWithinPartitions` to efficiently sort partitions while simultaneously repartitioning `sortBy` to make a globally ordered RDD Operations which can cause a shuffle include repartition operations like and , 'ByKey operations (except for counting) like and , and join operations like and . The Shuffle is an expensive operation since it involves disk I/O, data serialization, and network I/O. To organize data for the shuffle, Spark generates sets of tasks - map tasks to organize the data, and a set of reduce tasks to aggregate it. This nomenclature comes from MapReduce and does not directly relate to Spark's `map` and `reduce` operations. Internally, results from individual map tasks are kept in memory until they can't fit. Then, these are sorted based on the target partition and written to a single file. On the reduce side, tasks read the relevant sorted blocks. Certain shuffle operations can consume significant amounts of heap memory since they employ in-memory data structures to organize records before or after transferring them. Specifically, `reduceByKey` and `aggregateByKey` create these structures on the map side, and `'ByKey` operations generate these on the reduce side. When data does not fit in memory Spark will spill these tables to disk, incurring the additional overhead of disk I/O and increased garbage collection. Shuffle also generates a large number of intermediate files on disk. As of Spark 1.3, these files are preserved until the corresponding RDDs are no longer used and are garbage collected. This is done so the shuffle files don't need to be re-created if the lineage is re-computed. Garbage collection may happen only after a long period of time, if the application retains references to these RDDs or if GC does not kick in frequently. This means that long-running Spark jobs may consume a large amount of disk space. The temporary storage directory is specified by the `spark.local.dir` configuration parameter when configuring the Spark context. Shuffle behavior can be tuned by adjusting a variety of configuration parameters. See the 'Shuffle Behavior' section within the . One of the most important capabilities in Spark is persisting (or caching) a dataset in memory across operations. When you persist an RDD, each node stores any partitions of it that it computes in memory and reuses them in other actions on that dataset (or datasets derived from it). This allows future actions to be much faster (often by more than 10x). Caching is a key tool for iterative algorithms and fast interactive use. You can mark an RDD to be persisted using the `persist()` or `cache()` methods on it. The first time it is computed in an action, it will be kept in memory on the nodes. Spark's cache is fault-tolerant -- if any partition of an RDD is lost, it will automatically be recomputed using the transformations that originally created it. In addition, each persisted RDD can be stored using a different storage level, allowing you, for example, to persist the dataset on disk, persist it in memory but as serialized Java objects (to save space), replicate it across nodes. These levels are set by passing a `StorageLevel` object (, , ) to `persist()`. The `cache()` method is a shorthand for using the default storage level, which is `StorageLevel.MEMORY_ONLY` (store deserialized objects in memory). 
The full set of storage levels is: <table> <thead><tr><th style=\"width:23%\">Storage Level</th><th>Meaning</th></tr></thead> <tr> <td> MEMORY_ONLY </td> <td> Store RDD as deserialized Java objects in the"
},
{
"data": "If the RDD does not fit in memory, some partitions will not be cached and will be recomputed on the fly each time they're needed. This is the default level. </td> </tr> <tr> <td> MEMORYANDDISK </td> <td> Store RDD as deserialized Java objects in the JVM. If the RDD does not fit in memory, store the partitions that don't fit on disk, and read them from there when they're needed. </td> </tr> <tr> <td> MEMORYONLYSER <br /> (Java and Scala) </td> <td> Store RDD as <i>serialized</i> Java objects (one byte array per partition). This is generally more space-efficient than deserialized objects, especially when using a <a href=\"tuning.html\">fast serializer</a>, but more CPU-intensive to read. </td> </tr> <tr> <td> MEMORYANDDISK_SER <br /> (Java and Scala) </td> <td> Similar to MEMORYONLYSER, but spill partitions that don't fit in memory to disk instead of recomputing them on the fly each time they're needed. </td> </tr> <tr> <td> DISK_ONLY </td> <td> Store the RDD partitions only on disk. </td> </tr> <tr> <td> MEMORYONLY2, MEMORYANDDISK_2, etc. </td> <td> Same as the levels above, but replicate each partition on two cluster nodes. </td> </tr> <tr> <td> OFF_HEAP (experimental) </td> <td> Similar to MEMORYONLYSER, but store the data in <a href=\"configuration.html#memory-management\">off-heap memory</a>. This requires off-heap memory to be enabled. </td> </tr> </table> Note: *In Python, stored objects will always be serialized with the library, so it does not matter whether you choose a serialized level. The available storage levels in Python include `MEMORYONLY`, `MEMORYONLY_2`, `MEMORYANDDISK`, `MEMORYANDDISK2`, `DISKONLY`, `DISKONLY2`, and `DISKONLY3`.* Spark also automatically persists some intermediate data in shuffle operations (e.g. `reduceByKey`), even without users calling `persist`. This is done to avoid recomputing the entire input if a node fails during the shuffle. We still recommend users call `persist` on the resulting RDD if they plan to reuse it. Spark's storage levels are meant to provide different trade-offs between memory usage and CPU efficiency. We recommend going through the following process to select one: If your RDDs fit comfortably with the default storage level (`MEMORY_ONLY`), leave them that way. This is the most CPU-efficient option, allowing operations on the RDDs to run as fast as possible. If not, try using `MEMORYONLYSER` and to make the objects much more space-efficient, but still reasonably fast to access. (Java and Scala) Don't spill to disk unless the functions that computed your datasets are expensive, or they filter a large amount of the data. Otherwise, recomputing a partition may be as fast as reading it from disk. Use the replicated storage levels if you want fast fault recovery (e.g. if using Spark to serve requests from a web application). All the storage levels provide full fault tolerance by recomputing lost data, but the replicated ones let you continue running tasks on the RDD without waiting to recompute a lost partition. Spark automatically monitors cache usage on each node and drops out old data partitions in a least-recently-used (LRU) fashion. If you would like to manually remove an RDD instead of waiting for it to fall out of the cache, use the `RDD.unpersist()` method. Note that this method does not block by default. To block until resources are freed, specify `blocking=true` when calling this method. 
Normally, when a function passed to a Spark operation (such as `map` or `reduce`) is executed on a remote cluster node, it works on separate copies of all the variables used in the function. These variables are copied to each machine, and no updates to the variables on the remote machine are propagated back to the driver program. Supporting general, read-write shared variables across tasks would be"
},
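A minimal sketch of choosing a non-default storage level and removing the cached data manually (assuming an existing `SparkContext` `sc`; remember that in Python the stored objects are always pickled, as noted above):

{% highlight python %}
from pyspark import StorageLevel

pairs = sc.parallelize(range(1000000)).map(lambda x: (x % 10, x))

pairs.persist(StorageLevel.MEMORY_AND_DISK)   # spill to disk rather than recompute
pairs.count()                                  # materializes and caches the partitions

pairs.unpersist()                              # evict now instead of waiting for LRU
{% endhighlight %}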
{
"data": "However, Spark does provide two limited types of shared variables for two common usage patterns: broadcast variables and accumulators. Broadcast variables allow the programmer to keep a read-only variable cached on each machine rather than shipping a copy of it with tasks. They can be used, for example, to give every node a copy of a large input dataset in an efficient manner. Spark also attempts to distribute broadcast variables using efficient broadcast algorithms to reduce communication cost. Spark actions are executed through a set of stages, separated by distributed \"shuffle\" operations. Spark automatically broadcasts the common data needed by tasks within each stage. The data broadcasted this way is cached in serialized form and deserialized before running each task. This means that explicitly creating broadcast variables is only useful when tasks across multiple stages need the same data or when caching the data in deserialized form is important. Broadcast variables are created from a variable `v` by calling `SparkContext.broadcast(v)`. The broadcast variable is a wrapper around `v`, and its value can be accessed by calling the `value` method. The code below shows this: <div class=\"codetabs\"> <div data-lang=\"python\" markdown=\"1\"> {% highlight python %} >> broadcastVar = sc.broadcast([1, 2, 3]) <pyspark.core.broadcast.Broadcast object at 0x102789f10> >> broadcastVar.value [1, 2, 3] {% endhighlight %} </div> <div data-lang=\"scala\" markdown=\"1\"> {% highlight scala %} scala> val broadcastVar = sc.broadcast(Array(1, 2, 3)) broadcastVar: org.apache.spark.broadcast.Broadcast[Array[Int]] = Broadcast(0) scala> broadcastVar.value res0: Array[Int] = Array(1, 2, 3) {% endhighlight %} </div> <div data-lang=\"java\" markdown=\"1\"> {% highlight java %} Broadcast<int[]> broadcastVar = sc.broadcast(new int[] {1, 2, 3}); broadcastVar.value(); // returns [1, 2, 3] {% endhighlight %} </div> </div> After the broadcast variable is created, it should be used instead of the value `v` in any functions run on the cluster so that `v` is not shipped to the nodes more than once. In addition, the object `v` should not be modified after it is broadcast in order to ensure that all nodes get the same value of the broadcast variable (e.g. if the variable is shipped to a new node later). To release the resources that the broadcast variable copied onto executors, call `.unpersist()`. If the broadcast is used again afterwards, it will be re-broadcast. To permanently release all resources used by the broadcast variable, call `.destroy()`. The broadcast variable can't be used after that. Note that these methods do not block by default. To block until resources are freed, specify `blocking=true` when calling them. Accumulators are variables that are only \"added\" to through an associative and commutative operation and can therefore be efficiently supported in parallel. They can be used to implement counters (as in MapReduce) or sums. Spark natively supports accumulators of numeric types, and programmers can add support for new types. As a user, you can create named or unnamed accumulators. As seen in the image below, a named accumulator (in this instance `counter`) will display in the web UI for the stage that modifies that accumulator. Spark displays the value for each accumulator modified by a task in the \"Tasks\" table. 
<p style=\"text-align: center;\"> <img src=\"img/spark-webui-accumulators.png\" title=\"Accumulators in the Spark UI\" alt=\"Accumulators in the Spark UI\" /> </p> Tracking accumulators in the UI can be useful for understanding the progress of running stages (NOTE: this is not yet supported in Python). <div class=\"codetabs\"> <div data-lang=\"python\" markdown=\"1\"> An accumulator is created from an initial value `v` by calling `SparkContext.accumulator(v)`. Tasks running on a cluster can then add to it using the `add` method or the `+=` operator. However, they cannot read its value. Only the driver program can read the accumulator's value, using its `value`"
},
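To show a broadcast variable actually being used inside a task, here is a hedged sketch (assuming an existing `SparkContext` `sc`; the lookup table is purely illustrative):

{% highlight python %}
lookup = {"a": 1, "b": 2, "c": 3}
broadcast_lookup = sc.broadcast(lookup)     # shipped to each executor once, not once per task

rdd = sc.parallelize(["a", "b", "c", "a"])
mapped = rdd.map(lambda k: broadcast_lookup.value.get(k, 0))
print(mapped.collect())                     # [1, 2, 3, 1]

broadcast_lookup.unpersist()                # release executor copies; re-broadcast if reused
{% endhighlight %}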
{
"data": "The code below shows an accumulator being used to add up the elements of an array: {% highlight python %} >> accum = sc.accumulator(0) >> accum Accumulator<id=0, value=0> >> sc.parallelize([1, 2, 3, 4]).foreach(lambda x: accum.add(x)) ... 10/09/29 18:41:08 INFO SparkContext: Tasks finished in 0.317106 s >> accum.value 10 {% endhighlight %} While this code used the built-in support for accumulators of type Int, programmers can also create their own types by subclassing . The AccumulatorParam interface has two methods: `zero` for providing a \"zero value\" for your data type, and `addInPlace` for adding two values together. For example, supposing we had a `Vector` class representing mathematical vectors, we could write: {% highlight python %} class VectorAccumulatorParam(AccumulatorParam): def zero(self, initialValue): return Vector.zeros(initialValue.size) def addInPlace(self, v1, v2): v1 += v2 return v1 vecAccum = sc.accumulator(Vector(...), VectorAccumulatorParam()) {% endhighlight %} </div> <div data-lang=\"scala\" markdown=\"1\"> A numeric accumulator can be created by calling `SparkContext.longAccumulator()` or `SparkContext.doubleAccumulator()` to accumulate values of type Long or Double, respectively. Tasks running on a cluster can then add to it using the `add` method. However, they cannot read its value. Only the driver program can read the accumulator's value, using its `value` method. The code below shows an accumulator being used to add up the elements of an array: {% highlight scala %} scala> val accum = sc.longAccumulator(\"My Accumulator\") accum: org.apache.spark.util.LongAccumulator = LongAccumulator(id: 0, name: Some(My Accumulator), value: 0) scala> sc.parallelize(Array(1, 2, 3, 4)).foreach(x => accum.add(x)) ... 10/09/29 18:41:08 INFO SparkContext: Tasks finished in 0.317106 s scala> accum.value res2: Long = 10 {% endhighlight %} While this code used the built-in support for accumulators of type Long, programmers can also create their own types by subclassing . The AccumulatorV2 abstract class has several methods which one has to override: `reset` for resetting the accumulator to zero, `add` for adding another value into the accumulator, `merge` for merging another same-type accumulator into this one. Other methods that must be overridden are contained in the . For example, supposing we had a `MyVector` class representing mathematical vectors, we could write: {% highlight scala %} class VectorAccumulatorV2 extends AccumulatorV2[MyVector, MyVector] { private val myVector: MyVector = MyVector.createZeroVector def reset(): Unit = { myVector.reset() } def add(v: MyVector): Unit = { myVector.add(v) } ... } // Then, create an Accumulator of this type: val myVectorAcc = new VectorAccumulatorV2 // Then, register it into spark context: sc.register(myVectorAcc, \"MyVectorAcc1\") {% endhighlight %} Note that, when programmers define their own type of AccumulatorV2, the resulting type can be different than that of the elements added. </div> <div data-lang=\"java\" markdown=\"1\"> A numeric accumulator can be created by calling `SparkContext.longAccumulator()` or `SparkContext.doubleAccumulator()` to accumulate values of type Long or Double, respectively. Tasks running on a cluster can then add to it using the `add` method. However, they cannot read its value. Only the driver program can read the accumulator's value, using its `value` method. 
The code below shows an accumulator being used to add up the elements of an array: {% highlight java %} LongAccumulator accum = jsc.sc().longAccumulator(); sc.parallelize(Arrays.asList(1, 2, 3, 4)).foreach(x -> accum.add(x)); // ... // 10/09/29 18:41:08 INFO SparkContext: Tasks finished in 0.317106 s accum.value(); // returns 10 {% endhighlight %} While this code used the built-in support for accumulators of type Long, programmers can also create their own types by subclassing . The AccumulatorV2 abstract class has several methods which one has to override: `reset` for resetting the accumulator to zero, `add` for adding another value into the accumulator, `merge` for merging another same-type accumulator into this one. Other methods that must be overridden are contained in the"
},
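A runnable variant of the custom Python accumulator idea above, using a set instead of a vector so that no extra helper classes are needed (assuming an existing `SparkContext` `sc`):

{% highlight python %}
from pyspark.accumulators import AccumulatorParam

class SetAccumulatorParam(AccumulatorParam):
    def zero(self, initial_value):
        return set()

    def addInPlace(self, s1, s2):
        s1.update(s2)
        return s1

distinct_remainders = sc.accumulator(set(), SetAccumulatorParam())

sc.parallelize(range(10)).foreach(lambda x: distinct_remainders.add({x % 3}))
print(distinct_remainders.value)   # {0, 1, 2}
{% endhighlight %}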
{
"data": "For example, supposing we had a `MyVector` class representing mathematical vectors, we could write: {% highlight java %} class VectorAccumulatorV2 implements AccumulatorV2<MyVector, MyVector> { private MyVector myVector = MyVector.createZeroVector(); public void reset() { myVector.reset(); } public void add(MyVector v) { myVector.add(v); } ... } // Then, create an Accumulator of this type: VectorAccumulatorV2 myVectorAcc = new VectorAccumulatorV2(); // Then, register it into spark context: jsc.sc().register(myVectorAcc, \"MyVectorAcc1\"); {% endhighlight %} Note that, when programmers define their own type of AccumulatorV2, the resulting type can be different than that of the elements added. Warning: When a Spark task finishes, Spark will try to merge the accumulated updates in this task to an accumulator. If it fails, Spark will ignore the failure and still mark the task successful and continue to run other tasks. Hence, a buggy accumulator will not impact a Spark job, but it may not get updated correctly although a Spark job is successful. </div> </div> For accumulator updates performed inside <b>actions only</b>, Spark guarantees that each task's update to the accumulator will only be applied once, i.e. restarted tasks will not update the value. In transformations, users should be aware of that each task's update may be applied more than once if tasks or job stages are re-executed. Accumulators do not change the lazy evaluation model of Spark. If they are being updated within an operation on an RDD, their value is only updated once that RDD is computed as part of an action. Consequently, accumulator updates are not guaranteed to be executed when made within a lazy transformation like `map()`. The below code fragment demonstrates this property: <div class=\"codetabs\"> <div data-lang=\"python\" markdown=\"1\"> {% highlight python %} accum = sc.accumulator(0) def g(x): accum.add(x) return f(x) data.map(g) {% endhighlight %} </div> <div data-lang=\"scala\" markdown=\"1\"> {% highlight scala %} val accum = sc.longAccumulator data.map { x => accum.add(x); x } // Here, accum is still 0 because no actions have caused the map operation to be computed. {% endhighlight %} </div> <div data-lang=\"java\" markdown=\"1\"> {% highlight java %} LongAccumulator accum = jsc.sc().longAccumulator(); data.map(x -> { accum.add(x); return f(x); }); // Here, accum is still 0 because no actions have caused the `map` to be computed. {% endhighlight %} </div> </div> The describes how to submit applications to a cluster. In short, once you package your application into a JAR (for Java/Scala) or a set of `.py` or `.zip` files (for Python), the `bin/spark-submit` script lets you submit it to any supported cluster manager. The package provides classes for launching Spark jobs as child processes using a simple Java API. Spark is friendly to unit testing with any popular unit test framework. Simply create a `SparkContext` in your test with the master URL set to `local`, run your operations, and then call `SparkContext.stop()` to tear it down. Make sure you stop the context within a `finally` block or the test framework's `tearDown` method, as Spark does not support two contexts running concurrently in the same program. You can see some on the Spark website. In addition, Spark includes several samples in the `examples` directory (, , , ). 
You can run Java and Scala examples by passing the class name to Spark's `bin/run-example` script; for instance: ./bin/run-example SparkPi For Python examples, use `spark-submit` instead: ./bin/spark-submit examples/src/main/python/pi.py For R examples, use `spark-submit` instead: ./bin/spark-submit examples/src/main/r/dataframe.R For help on optimizing your programs, the configuration and tuning guides provide information on best practices. They are especially important for making sure that your data is stored in memory in an efficient format. For help on deploying, the cluster mode overview describes the components involved in distributed operation and supported cluster managers."
}
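The unit-testing advice above can be summarized in a hedged sketch using the standard library's `unittest`; the class and test names are illustrative, not part of any Spark testing API:

{% highlight python %}
import unittest

from pyspark import SparkConf, SparkContext

class LineLengthTest(unittest.TestCase):
    def setUp(self):
        conf = SparkConf().setMaster("local[2]").setAppName("unit-test")
        self.sc = SparkContext(conf=conf)

    def tearDown(self):
        self.sc.stop()   # required: two concurrent contexts are not supported

    def test_total_length(self):
        rdd = self.sc.parallelize(["ab", "cde"])
        self.assertEqual(rdd.map(len).reduce(lambda a, b: a + b), 5)

if __name__ == "__main__":
    unittest.main()
{% endhighlight %}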
] |
{
"category": "App Definition and Development",
"file_name": "v1.15.md",
"project_name": "EDB",
"subcategory": "Database"
} | [
{
"data": "History of user-visible changes in the 1.15 minor release of CloudNativePG. For a complete list of changes, please refer to the on the release branch in GitHub. !!! Warning This is expected to be the last release in the 1.15.X series. Users are encouraged to update to a newer minor version soon. Release date: Oct 6, 2022 Enhancements: Introduce `leaseDuration` and `renewDeadline` parameters in the controller manager to enhance configuration of the leader election in operator deployments (#759) Improve the mechanism that checks that the backup object store is empty before archiving a WAL file for the first time: a new file called `.check-empty-wal-archive` is placed in the `PGDATA` immediately after the cluster is bootstrapped and it is then removed after the first WAL file is successfully archived Security: Explicitly set permissions of the instance manager binary that is copied in the `distroless/static:nonroot` container image, by using the `nonroot:nonroot` user (#754) Fixes: Make the cluster's conditions compatible with `metav1.Conditions` struct (#720) Drop any active connection on a standby after it is promoted to primary (#737) Honor `MAPPEDMETRIC` and `DURATION` metric types conversion in the native Prometheus exporter (#765) Release date: Sep 6, 2022 Enhancements: Enable configuration of low-level network TCP settings in the PgBouncer connection pooler implementation (#584) Make sure that the `cnpg.io/instanceName` and the `cnpg.io/podRole` labels are always present on pods and PVCs (#632 and #680) Propagate the `role` label of an instance to the underlying PVC (#634) Fixes: Prevent multiple in-place upgrade processes of the operator from running simultaneously by atomically checking whether another one is in progress (#655) Avoid using a hardcoded file name to store the newly uploaded instance manager, preventing a possible race condition during online upgrades of the operator (#660) Prevent a panic from happening when invoking `GetAllAccessibleDatabases` (#641) Release date: Aug 12, 2022 Enhancements: Enable the configuration of the `huge_pages` option for PostgreSQL (#456) Enhance log during promotion and demotion, after a failover or a switchover, by printing the time elapsed between the request of promotion and the actual availability for writes (#371) Add the `instanceName` and `clusterName` labels on jobs, pods, and PVCs to improve interaction with these resources (#534) Add instructions on how to create PostGIS clusters (#570) Security: Explicitly assign `securityContext` to the `Pooler` deployment (#485) Add read timeout values to the internal web servers to prevent Slowloris DDoS (#437) Fixes: Use the correct delays for restarts (`stopDelay`) and for switchover (`switchoverDelay`), as they were erroneously swapped before. This is an important fix, as it might block indefinitely restarts if `switchoverDelay` is not set and uses the default value of 40000000 seconds (#531) Prevent the metrics collector from causing panic when the query returns an error (#396) Removing an unsafe debug message that was referencing an unchecked pointer, leading in some cases to segmentation faults regardless of the log"
},
{
"data": "(#491) Prevent panic when fencing in case the cluster had no annotation (#512) Avoid updating the CRD if a TLS certificate is not changed (#501) Handle conflicts while injecting a certificate in the CRD (#547) Backup and recovery: Correctly pass object store credentials in Google Cloud (#454) Minor changes: Set the default operand image to PostgreSQL 15.0 Release date: Jul 7, 2022 (patch release) Enhancements: Improve logging of the instance manager during switchover and failover Require Barman >= 3.0.0 for future support of PostgreSQL 15 in backup and recovery Changes: Set the default operand image to PostgreSQL 15.0 Fixes: Fix the initialization order inside the `WithActiveInstance` function that starts the CSV log pipe for the PostgreSQL server, ensuring proper logging in the cluster initialization phase - this is especially useful in bootstrap operations like recovery from a backup are failing (before this patch, such logs were not sent to the standard output channel and were permanently lost) Avoid an unnecessary switchover when a hot standby sensitive parameter is decreased, and the primary has already restarted Properly quote role names in `ALTER ROLE` statements Backup and recovery: Fix the algorithm detecting the closest Barman backup for PITR, which was comparing the requested recovery timestamp with the backup start instead of the end Fix Point in Time Recovery based on a transaction ID, a named restore point, or the immediate target by providing a new field called `backupID` in the `recoveryTarget` section Fix encryption parameters invoking `barman-cloud-wal-archive` and `barman-cloud-backup` commands Stop ignoring `barmanObjectStore.serverName` option when recovering from a backup object store using a server name that doesnt match the current cluster name `cnpg` plug-in: Make sure that the plug-in complies with the `-n` parameter when specified by the user Fix the `status` command to sort results and remove variability in the output Release date: May 27, 2022 (patch release) Minor changes: Enable configuration of the `archive_timeout` setting for PostgreSQL, which was previously a fixed parameter (by default set to 5 minutes) Introduce a new field called `backupOwnerReference` in the `scheduledBackup` resource to set the ownership reference on the created backup resources, with possible values being `none` (default), `self` (objects owned by the scheduled backup object), and `cluster` (owned by the Postgres cluster object) Introduce automated collection of `pgstatwal` metrics for PostgreSQL 14 or higher in the native Prometheus exporter Set the default operand image to PostgreSQL"
},
{
"data": "Fixes: Fix fencing by killing orphaned processes related to `postgres` Enable the CSV log pipe inside the `WithActiveInstance` function to collect logs from recovery bootstrap jobs and help in the troubleshooting phase Prevent bootstrapping a new cluster with a non-empty backup object store, removing the risk of overwriting existing backups With the `recovery` bootstrap method, make sure that the recovery object store and the backup object store are different to avoid overwriting existing backups Re-queue the reconciliation loop if the RBAC for backups is not yet created Fix an issue with backups and the wrong specification of the cluster name property Ensures that operator pods always have the latest certificates in the case of a deployment of the operator in high availability, with more than one replica Fix the `cnpg report operator` command to correctly handle the case of a deployment of the operator in high availability, with more than one replica Properly propagate changes in the clusters `inheritedMetadata` set of labels and annotations to the related resources of the cluster without requiring a restart Fix the `cnpg` plugin to correctly parse any custom configmap and secret name defined in the operator deployment, instead of relying just on the default values Fix the local building of the documentation by using the `minidocks/mkdocs` image for `mkdocs` Release date: 21 April 2022 Features: Fencing: Introduction of the fencing capability for a cluster or a given set of PostgreSQL instances through the `cnpg.io/fencedInstances` annotation, which, if not empty, disables switchover/failovers in the cluster; fenced instances are shut down and the pod is kept running (while considered not ready) for inspection and emergencies LDAP authentication: Allow LDAP Simple Bind and Search+Bind configuration options in the `pg_hba.conf` to be defined in the Postgres cluster spec declaratively, enabling the optional use of Kubernetes secrets for sensitive options such as `ldapbindpasswd` Introduction of the `primaryUpdateMethod` option, accepting the values of `switchover` (default) and `restart`, to be used in case of unsupervised `primaryUpdateStrategy`; this method controls what happens to the primary instance during the rolling update procedure New `report` command in the `kubectl cnp` plugin for better diagnosis and more effective troubleshooting of both the operator and a specific Postgres cluster Prune those `Backup` objects that are no longer in the backup object store Specification of target timeline and `LSN` in Point-In-Time Recovery bootstrap method Support for the `AWSSESSIONTOKEN` authentication token in AWS S3 through the `sessionToken` option Default image name for PgBouncer in `Pooler` pods set to `quay.io/enterprisedb/pgbouncer:1.17.0` Fixes: Base backup detection for Point-In-Time Recovery via `targetTime` correctly works now, as previously a target prior to the latest available backup was not possible (the detection algorithm was always wrong by selecting the last backup as a starting point) Improved resilience of hot standby sensitive parameters by relying on the values the operator collects from `pg_controldata` Intermediate certificates handling has been improved by properly discarding invalid entries, instead of throwing an invalid certificate error Prometheus exporter metric collection queries in the databases are now committed instead of rolled back (this might result in a change in the number of rolled back transactions that are visible from downstream dashboards, 
where applicable) Version 1.15.0 is the first release of CloudNativePG. Previously, this software was called EDB Cloud Native PostgreSQL (now EDB Postgres for Kubernetes). If you are looking for information about a previous release, please refer to the ."
}
] |
{
"category": "App Definition and Development",
"file_name": "create-indexes-check-constraints.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: Indexes and check constraints on json and jsonb columns headerTitle: Create indexes and check constraints on JSON columns linkTitle: Indexes and check constraints description: Create indexes and check constraints on \"json\" and \"jsonb\" columns. menu: v2.18: identifier: create-indexes-check-constraints parent: api-ysql-datatypes-json weight: 40 type: docs The examples in this section rely on the . Often, when JSON documents are inserted into a table, the table will have just a self-populating surrogate primary key column and a value column, like `doc`, of data type `jsonb`. Choosing `jsonb` allows the use of a broader range of operators and functions, and allows these to execute more efficiently, than does choosing `json`. It's most likely that each document will be a JSON object and that all will conform to the same structure definition. (The structure can be defined formally, and externally, by a so-called \"\".) In other words, each object will have the same set of possible key names (but some might be missing) and the same JSON data type for the value for each key. And when a data type is compound, the same notion of common structure definition will apply, extending the notion recursively to arbitrary depth. Here is an example. To reduce clutter, the primary key is not defined to be self-populating. ```plpgsql create table books(k int primary key, doc jsonb not null); insert into books(k, doc) values (1, '{ \"ISBN\" : 4582546494267, \"title\" : \"Macbeth\", \"author\" : {\"givenname\": \"William\", \"familyname\": \"Shakespeare\"}, \"year\" : 1623}'), (2, '{ \"ISBN\" : 8760835734528, \"title\" : \"Hamlet\", \"author\" : {\"givenname\": \"William\", \"familyname\": \"Shakespeare\"}, \"year\" : 1603, \"editors\" : [\"Lysa\", \"Elizabeth\"] }'), (3, '{ \"ISBN\" : 7658956876542, \"title\" : \"Oliver Twist\", \"author\" : {\"givenname\": \"Charles\", \"familyname\": \"Dickens\"}, \"year\" : 1838, \"genre\" : \"novel\", \"editors\" : [\"Mark\", \"Tony\", \"Britney\"] }'), (4, '{ \"ISBN\" : 9874563896457, \"title\" : \"Great Expectations\", \"author\" : {\"family_name\": \"Dickens\"}, \"year\" : 1950, \"genre\" : \"novel\", \"editors\" : [\"Robert\", \"John\", \"Melisa\", \"Elizabeth\"] }'), (5, '{ \"ISBN\" : 8647295405123, \"title\" : \"A Brief History of Time\", \"author\" : {\"givenname\": \"Stephen\", \"familyname\": \"Hawking\"}, \"year\" : 1988, \"genre\" : \"science\", \"editors\" : [\"Melisa\", \"Mark\", \"John\", \"Fred\", \"Jane\"] }'), (6, '{ \"ISBN\" : 6563973589123, \"year\" : 1989, \"genre\" : \"novel\", \"title\" : \"Joy Luck Club\", \"author\" : {\"givenname\": \"Amy\", \"familyname\": \"Tan\"}, \"editors\" : [\"Ruilin\", \"Aiping\"]}'); ``` Some of the rows have some of the keys missing. But the row with \"k=6\" has every key. You will probably want at least to know if your corpus contains a non-conformant document and, in some cases, you will want to disallow non-conformant documents. You might want to insist that the ISBN is always defined and is a positive 13-digit number. You will almost certainly want to retrieve documents, typically not by providing the key, but rather by using predicates on their contentin particular, the primitive values that they contain. You will probably want, also, to project out values of"
},
{
"data": "For example, you might want to see the title and author of books whose publication year is later than 1850. Of course, then, you will want these queries to be supported by indexes. The alternative, a table scan over a huge corpus where each document is analyzed on the fly to evaluate the selection predicates, would probably perform too poorly. Here's how to insist that each JSON document is an object: ```plpgsql alter table books add constraint booksdocis_object check (jsonb_typeof(doc) = 'object'); ``` Here's how to insist that the ISBN is always defined and is a positive 13-digit number: ```plpgsql alter table books add constraint booksisbnispositive13digitnumber check ( (doc->'ISBN') is not null and jsonb_typeof(doc->'ISBN') = 'number' and (doc->>'ISBN')::bigint > 0 and length(((doc->>'ISBN')::bigint)::text) = 13 ); ``` Notice that if the key \"ISBN\" is missing altogether, then the expression `doc->'ISBN'` will yield a genuine SQL `NULL`. But the producer of the document might have decided to represent \"No information is available about this book's ISBN\" with the special JSON value null for the key \"ISBN\". (Recall that this special value has its own data type.) This is why it's insufficient to test just that yields number (and therefore not null) for the key \"ISBN\" and why the separate `IS NOT NULL` test is done as well. The high-level point is that YSQL allows you to express a constraint using any expression that can be evaluated by referencing values from a single row. The expression can include a PL/pgSQL function. This allows a constraint to be implemented to insist that the keys in the JSON object are from a known list: ```plpgsql create function toplevelkeysok(jsonobj in jsonb) returns boolean language plpgsql as $body$ declare key text; legal_keys constant varchar(10)[] := array[ 'ISBN', 'title', 'year', 'genre', 'author', 'editors']; begin for key in ( select jsonbobjectkeys(json_obj) ) loop if not (key = any (legal_keys)) then return false; end if; end loop; return true; end; $body$; alter table books add constraint booksdockeys_OK check (toplevelkeys_ok(doc)); ``` See the account of the . Proper practice requires that when a table has a surrogate primary key, it must also have a unique, `NOT NULL`, business key. The obvious candidate for the `books` table is the value for the \"ISBN\" key. The `NOT NULL` rule is already enforced by the \"booksisbnispositive13digitnumber\" constraint. Uniqueness is enforced in the obvious way: ```plpgsql create unique index booksisbnunq on books((doc->>'ISBN') hash); ``` You might want to support range queries that reference the value for the \"year\" key like this: ```plpgsql select (doc->>'ISBN')::bigint as year, doc->>'title' as title, (doc->>'year')::int as year from books where (doc->>'year')::int > 1850 order by 3; ``` You'll probably want to support this with an index. And if you realize that the publication year is unknown for a substantial proportion of the books, you will probably want to take advantage of a partial index, thus: ```plpgsql create index books_year on books ((doc->>'year') asc) where doc->>'year' is not null; ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "standalone.md",
"project_name": "KubeDB by AppsCode",
"subcategory": "Database"
} | [
{
"data": "title: Updating Redis Standalone menu: docs_{{ .version }}: identifier: rd-update-version-standalone name: Standalone parent: rd-update-version weight: 20 menuname: docs{{ .version }} sectionmenuid: guides New to KubeDB? Please start . This guide will show you how to use `KubeDB` Enterprise operator to update the version of `Redis` standalone. At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using . Install `KubeDB` Community and Enterprise operator in your cluster following the steps . You should be familiar with the following `KubeDB` concepts: - To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. ```bash $ kubectl create ns demo namespace/demo created ``` Note: YAML files used in this tutorial are stored in directory of repository. Now, we are going to deploy a `Redis` standalone database with version `6.2.14`. In this section, we are going to deploy a Redis standalone database. Then, in the next section we will update the version of the database using `RedisOpsRequest` CRD. Below is the YAML of the `Redis` CR that we are going to create, ```yaml apiVersion: kubedb.com/v1alpha2 kind: Redis metadata: name: redis-quickstart namespace: demo spec: version: 6.2.14 storageType: Durable storage: storageClassName: \"standard\" accessModes: ReadWriteOnce resources: requests: storage: 1Gi ``` Let's create the `Redis` CR we have shown above, ```bash $ kubectl create -f https://github.com/kubedb/docs/raw/{{< param \"info.version\" >}}/docs/examples/redis/update-version/rd-standalone.yaml redis.kubedb.com/redis-quickstart created ``` Now, wait until `redis-quickstart` created has status `Ready`. i.e, ```bash $ kubectl get rd -n demo NAME VERSION STATUS AGE redis-quickstart 6.2.14 Ready 5m14s ``` We are now ready to apply the `RedisOpsRequest` CR to update this database. Here, we are going to update `Redis` standalone from `6.2.14` to"
},
{
"data": "In order to update the standalone database, we have to create a `RedisOpsRequest` CR with your desired version that is supported by `KubeDB`. Below is the YAML of the `RedisOpsRequest` CR that we are going to create, ```yaml apiVersion: ops.kubedb.com/v1alpha1 kind: RedisOpsRequest metadata: name: update-standalone namespace: demo spec: type: UpdateVersion databaseRef: name: redis-quickstart updateVersion: targetVersion: 7.0.14 ``` Here, `spec.databaseRef.name` specifies that we are performing operation on `redis-quickstart` Redis database. `spec.type` specifies that we are going to perform `UpdateVersion` on our database. `spec.updateVersion.targetVersion` specifies the expected version of the database `7.0.14`. Let's create the `RedisOpsRequest` CR we have shown above, ```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param \"info.version\" >}}/docs/examples/redis/update-version/update-standalone.yaml redisopsrequest.ops.kubedb.com/update-standalone created ``` If everything goes well, `KubeDB` Enterprise operator will update the image of `Redis` object and related `StatefulSets` and `Pods`. Let's wait for `RedisOpsRequest` to be `Successful`. Run the following command to watch `RedisOpsRequest` CR, ```bash $ watch kubectl get redisopsrequest -n demo Every 2.0s: kubectl get redisopsrequest -n demo NAME TYPE STATUS AGE update-standalone UpdateVersion Successful 3m45s ``` We can see from the above output that the `RedisOpsRequest` has succeeded. Now, we are going to verify whether the `Redis` and the related `StatefulSets` their `Pods` have the new version image. Let's check, ```bash $ kubectl get redis -n demo redis-quickstart -o=jsonpath='{.spec.version}{\"\\n\"}' 7.0.14 $ kubectl get statefulset -n demo redis-quickstart -o=jsonpath='{.spec.template.spec.containers[0].image}{\"\\n\"}' redis:7.0.14@sha256:dfeb5451fce377ab47c5bb6b6826592eea534279354bbfc3890c0b5e9b57c763 $ kubectl get pods -n demo redis-quickstart-0 -o=jsonpath='{.spec.containers[0].image}{\"\\n\"}' redis:7.0.14@sha256:dfeb5451fce377ab47c5bb6b6826592eea534279354bbfc3890c0b5e9b57c763 ``` You can see from above, our `Redis` standalone database has been updated with the new version. So, the UpdateVersion process is successfully completed. To clean up the Kubernetes resources created by this tutorial, run: ```bash $ kubectl patch -n demo rd/redis-quickstart -p '{\"spec\":{\"terminationPolicy\":\"WipeOut\"}}' --type=\"merge\" redis.kubedb.com/redis-quickstart patched $ kubectl delete -n demo redis redis-quickstart redis.kubedb.com \"redis-quickstart\" deleted $ kubectl delete -n demo redisopsrequest update-standalone redisopsrequest.ops.kubedb.com \"update-standalone\" deleted ``` eleted ``` Detail concepts of . Redis databases using Stash. . Monitor your Redis database with KubeDB using . Monitor your Redis database with KubeDB using ."
}
] |
{
"category": "App Definition and Development",
"file_name": "v1.13.0-next.0-changelog.md",
"project_name": "Backstage",
"subcategory": "Application Definition & Image Build"
} | [
{
"data": "7908d72e033: Introduce a new global config parameter, `enableExperimentalRedirectFlow`. When enabled, auth will happen with an in-window redirect flow rather than through a popup window. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 7908d72e033: Introduce a new global config parameter, `enableExperimentalRedirectFlow`. When enabled, auth will happen with an in-window redirect flow rather than through a popup window. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 7908d72e033: Introduce a new global config parameter, `enableExperimentalRedirectFlow`. When enabled, auth will happen with an in-window redirect flow rather than through a popup window. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 4dbf3d3e4da: Added a new EntitySwitch isResourceType to allow different views depending on Resource type fc6cab4eb48: Added `isEntityWith` condition helper for `EntitySwitch` case statements. 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] f64345108a0: BREAKING: The configuration of the `GitlabDiscoveryEntityProvider` has changed as follows: The configuration key `branch` is now used to define the branch from which the catalog-info should be discovered. The old configuration key `branch` is now called `fallbackBranch`. This value specifies which branch should be used if no default branch is defined on the project itself. To migrate to the new configuration value, rename `branch` to `fallbackBranch` 7b1b7bfdb7b: The gitlab org data integration now makes use of the GraphQL API to determine the relationships between imported User and Group entities, effectively making this integration usable without an administrator account's Personal Access Token. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 7eba760e6f6: Added an endpoint to fetch anonymous aggregated results from an entity 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 804f6d16b0c: BREAKING: `KubernetesBuilder.create` now requires a `permissions` field of type `PermissionEvaluator`. The kubernetes `/proxy` endpoint now requires two tokens: the `Backstage-Kubernetes-Authorization` header should contain a bearer token for the target cluster, and the `Authorization` header should contain a backstage identity token. 
The kubernetes `/proxy` endpoint now requires a `Backstage-Kubernetes-Cluster` header replacing the previously required `X-Kubernetes-Cluster` header. 75d4985f5e8: Fixes bug whereby backstage crashes when bad credentials are provided to the kubernetes plugin. 83d250badc6: Fix parsing error when kubernetes api is returning badly structured data. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] cdab34fd9a2: scaffolder/next: removing the `routeRefs` and exporting the originals on `scaffolderPlugin.routes.x` instead e5ad1bd61ec: Allow `TemplateListPage` and `TemplateWizardPage` to be passed in as props 92cf86a4b5d: Added a `templateFilter` prop to the `<Router/>` component to allow for filtering of templates through a function. 259d3407b9b: Move `CategoryPicker` from `scaffolder` into `scaffolder-react` Move `ContextMenu` into `scaffolder-react` and rename it to `ScaffolderPageContextMenu` e27ddc36dad: Added a possibility to cancel the running task (executing of a scaffolder template) 57c1b4752fa: Allow use of `{ exists: true }` value inside filters to filter entities that has that key. this example will filter all entities that has the annotation `someAnnotation` set to any value. ```yaml ui:options: catalogFilter: kind: Group metadata.annotations.someAnnotation: { exists: true } ``` 7a6b16cc506: `scaffolder/next`: Bump `@rjsf/*` deps to 5.3.1 f84fc7fd040: Updated dependency `@rjsf/validator-ajv8` to `5.3.0`. 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 1ad400bb2de: Add Gitlab Scaffolder Plugin Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected]"
},
{
"data": "259d3407b9b: Move `CategoryPicker` from `scaffolder` into `scaffolder-react` Move `ContextMenu` into `scaffolder-react` and rename it to `ScaffolderPageContextMenu` 2cfd03d7376: To offer better customization options, `ScaffolderPageContextMenu` takes callbacks as props instead of booleans 48da4c46e45: `scaffolder/next`: Export the `TemplateGroupFilter` and `TemplateGroups` and make an extensible component e27ddc36dad: Added a possibility to cancel the running task (executing of a scaffolder template) 7a6b16cc506: `scaffolder/next`: Bump `@rjsf/*` deps to 5.3.1 f84fc7fd040: Updated dependency `@rjsf/validator-ajv8` to `5.3.0`. 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 5e959c9eb62: Allow generic Vault clients to be passed into the builder Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8cce2205a39: Register unhandled rejection and uncaught exception handlers to avoid backend crashes. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 420164593cf: Improve GitlabUrlReader to only load requested sub-path Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] b9839d7135c: Fixed backend start command on Windows by removing the use of platform dependent path joins. c07c3b7364b: Add `onboard` command. While still in development, this command aims to guide users in setting up their Backstage App. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 7908d72e033: Introduce a new global config parameter, `enableExperimentalRedirectFlow`. When enabled, auth will happen with an in-window redirect flow rather than through a popup window. 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) 7245e744ab1: Fixed the font color on `BackstageHeaderLabel` to respect the active page theme. 
Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Bumped create-app version. Updated dependencies @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 911c25de59c: Add support for auto-fixing missing imports detected by the `no-undeclared-imports` rule. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 9129ca8cabb: Log API report instructions when api-report is missing. Updated dependencies @backstage/[email protected] @backstage/[email protected] b348420a804: Adding global-agent to enable the ability to publish through a proxy Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] ca50c3bbea1: Corrected styling of nested objects in AsyncAPI to avoid inappropriate uppercase text transformation of nested objects. 
8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d8f774c30df: Enforce the secret visibility of certificates and client secrets in the auth backend. Also, document all known options for each auth plugin. 7908d72e033: Introduce a new global config parameter,"
},
{
"data": "When enabled, auth will happen with an in-window redirect flow rather than through a popup window. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] c9a0fdcd2c8: Fix deprecated types. 899ebfd8e02: Add full text search support to the `by-query` endpoint. 
Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] c9a0fdcd2c8: Fix deprecated types. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] c9a0fdcd2c8: Fix deprecated types. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] c9a0fdcd2c8: Fix deprecated types. 
Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] e47e69eadf0: Updated dependency `@apollo/server` to `^4.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email 
protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected]"
},
{
"data": "8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 7eba760e6f6: Added an endpoint to fetch anonymous aggregated results from an entity Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 7eba760e6f6: Added an endpoint to fetch anonymous aggregated results from an entity Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] Updated dependencies @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email 
protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] e47e69eadf0: Updated dependency `@apollo/server` to `^4.0.0`. 
Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected]"
},
{
"data": "8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 804f6d16b0c: Introduced proxy permission types to be used with the kubernetes proxy endpoint's permission framework integration. Updated dependencies @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email 
protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] e23abb37ec1: Rename output parameter `mergeRequestURL` of `publish:gitlab:merge-request` action to `mergeRequestUrl`. e27ddc36dad: Added a possibility to cancel the running task (executing of a scaffolder template) c9a0fdcd2c8: Fix deprecated types. 
Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 48da4c46e45: Export `typeguard` for `isTemplateEntityV1beta3` Updated dependencies @backstage/[email protected] @backstage/[email protected] e27ddc36dad: Added a possibility to cancel the running task (executing of a scaffolder template) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] b2e182cdfa4: Fixes a UI bug in search result item which rendered the item text with incorrect font size and color 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) 99df676e324: Allow external links to be added as shortcuts Updated dependencies"
},
{
"data": "@backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] f538b9c5b83: The `Check` type now optionally includes the `failureMetadata` and `successMetadata` as returned by the `runChecks` call. 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] b2e182cdfa4: Fixes a UI bug in search result item which rendered the item text with incorrect font size and color 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email 
protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 7e0c7b09a47: Fix a bug that caused the header to not render when generating a document for the first time 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e00acb28db: Small tweaks to remove warnings in the console during development (mainly focusing on techdocs) Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email 
protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @internal/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] [email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected]"
}
] |
{
"category": "App Definition and Development",
"file_name": "assume_exception_lvalue.md",
"project_name": "ArangoDB",
"subcategory": "Database"
} | [
{
"data": "+++ title = \"`exceptiontype &assumeexception() & noexcept`\" description = \"Narrow contract lvalue reference observer of the stored exception. Constexpr propagating, never throws.\" categories = [\"observers\"] weight = 780 +++ Narrow contract lvalue reference observer of the stored exception. `NoValuePolicy::narrowexceptioncheck()` is first invoked, then the reference to the exception is returned. As a valid default constructed exception is always present, no undefined behaviour occurs unless `NoValuePolicy::narrowexceptioncheck()` does that. Note that if `exception_type` is `void`, only a `const` overload returning `void` is present. Requires: Always available. Complexity: Depends on `NoValuePolicy::narrowexceptioncheck()`. Guarantees: An exception is never thrown."
}
] |
{
"category": "App Definition and Development",
"file_name": "hll_union_agg.md",
"project_name": "StarRocks",
"subcategory": "Database"
} | [
{
"data": "displayed_sidebar: \"English\" HLL is an engineering implementation based on the HyperLogLog algorithm, which is used to save the intermediate results of HyperLogGog calculation process. It can only be used as the value column of a table and reduce the amount of data through aggregation to achieve the purpose of speeding up the query. An estimated result with an error of about 1% based on HLL. The HLL column is generated by other columns or based on data loaded into the table. During loading, the function is used to specify which column is used to generate the HLL column. It is often used to replace Count Distinct, and to calculate UVs quickly in business by combining rollup. ```Haskell HLLUNIONAGG(hll) ``` ```plain text MySQL > select HLLUNIONAGG(uvset) from testuv; +-+ | HLLUNIONAGG(`uv_set`) | +-+ | 17721 | +-+ ``` HLLUNIONAGG,HLL,UNION,AGG"
}
] |
{
"category": "App Definition and Development",
"file_name": "CHANGELOG.1.0.4.md",
"project_name": "Apache Hadoop",
"subcategory": "Database"
} | [
{
"data": "<! --> | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Should set MALLOC\\ARENA\\MAX in hadoop-config.sh | Minor | scripts | Todd Lipcon | Todd Lipcon | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | 1.x: FSEditLog failure removes the wrong edit stream when storage dirs have same name | Blocker | namenode | Todd Lipcon | Todd Lipcon | | | Fix performance regression in shuffle | Major | performance, tasktracker | Luke Lu | Luke Lu |"
}
] |
{
"category": "App Definition and Development",
"file_name": "extending-the-model.md",
"project_name": "Backstage",
"subcategory": "Application Definition & Image Build"
} | [
{
"data": "id: extending-the-model title: Extending the model description: Documentation on extending the catalog model The Backstage catalog is based on the , and borrows a lot of its semantics as well. This page describes those semantics at a higher level and how to extend them to fit your organization. Backstage comes with a number of catalog concepts out of the box: There are a number of builtin versioned kinds, such as `Component`, `User` etc. These encapsulate the high level concept of an entity, and define the schema for its entity definition data. An entity has both a metadata object and a spec object at the root. Each kind may or may not have a type. For example, there are several well known types of component, such as `service` and `website`. These clarify the more detailed nature of the entity, and may affect what features are exposed in the interface. Entities may have a number of on them. These can be added either by humans into the descriptor files, or added by automated processes when the entity is ingested into the catalog. Entities may have a number of labels on them. Entities may have a number of relations, expressing how they relate to each other in different ways. We'll list different possibilities for extending this below. Example intents: \"I want to evolve this core kind, tweaking the semantics a bit so I will bump the apiVersion a step\" \"This core kind is a decent fit but we want to evolve it at will so we'll move it to our own company's apiVersion space and use that instead of `backstage.io`.\" The `backstage.io` apiVersion space is reserved for use by the Backstage maintainers. Please do not change or add versions within that space. If you add an space of your own, you are effectively branching out from the underlying kind and making your own. An entity kind is identified by the apiVersion + kind pair, so even though the resulting entity may be similar to the core one, there will be no guarantees that plugins will be able to parse or understand its data. See below about adding a new kind. Example intents: \"The kinds that come with the package are lacking. I want to model this other thing that is a poor fit for either of the builtins.\" \"This core kind is a decent fit but we want to evolve it at will so we'll move it to our own company's apiVersion space and use that instead of `backstage.io`.\" A is an overarching family, or an idea if you will, of entities that also share a schema. Backstage comes with a number of builtin ones that we believe are useful for a large variety of needs that one may want to model in Backstage. The primary ambition is to map things to these kinds, but sometimes you may want or need to extend beyond them. Introducing a new apiVersion is basically the same as adding a new kind. Bear in mind that most plugins will be compiled against the builtin `@backstage/catalog-model` package and have expectations that kinds align with that. The catalog backend itself, from a storage and API standpoint, does not care about the kind of entities it"
},
{
"data": "Extending with new kinds is mainly a matter of permitting them to pass validation when building the backend catalog using the `CatalogBuilder`, and then to make plugins be able to understand the new kind. For the consuming side, it's a different story. Adding a kind has a very large impact. The very foundation of Backstage is to attach behavior and views and functionality to entities that we ascribe some meaning to. There will be many places where code checks `if (kind === 'X')` for some hard coded `X`, and casts it to a concrete type that it imported from a package such as `@backstage/catalog-model`. If you want to model something that doesn't feel like a fit for either of the builtin kinds, feel free to reach out to the Backstage maintainers to discuss how to best proceed. If you end up adding that new kind, you must namespace its `apiVersion` accordingly with a prefix that makes sense, typically based on your organization name - e.g. `my-company.net/v1`. Also do pick a new `kind` identifier that does not collide with the builtin kinds. Example intents: \"This is clearly a component, but it's of a type that doesn't quite fit with the ones I've seen before.\" \"We don't call our teams \"team\", can't we put \"flock\" as the group type?\" Some entity kinds have a `type` field in its spec. This is where an organization are free to express the variety of entities within a kind. This field is expected to follow some taxonomy that makes sense for yourself. The chosen value may affect what operations and views are enabled in Backstage for that entity. Inside Spotify our model has grown significantly over the years, and our component types now include ML models, apps, data pipelines and many more. It might be tempting to put software that doesn't fit into any of the existing types into an Other catch-all type. There are a few reasons why we advise against this; firstly, we have found that it is preferred to match the conceptual model that your engineers have when describing your software. Secondly, Backstage helps your engineers manage their software by integrating the infrastructure tooling through plugins. Different plugins are used for managing different types of components. For example, the only makes sense for Websites. The more specific you can be in how you model your software, the easier it is to provide plugins that are contextual. Adding a new type takes relatively little effort and carries little risk. Any type value is accepted by the catalog backend, but plugins may have to be updated if you want particular behaviors attached to that new type. Example intents: \"We want to import our old catalog but the default set of allowed characters for a metadata.name are too strict.\" \"I want to change the rules for annotations so that I'm allowed to store any data in annotation values, not just strings.\" After pieces of raw entity data have been read from a location, they are passed through a field format validation step. This ensures that the types and syntax of the base envelope and metadata make sense - in short, things that aren't"
},
{
"data": "Some or all of these validators can be replaced when building the backend using the catalog's dedicated `catalogModelExtensionPoint` (or directly on the `CatalogBuilder` if you are still using the old backend system). The risk and impact of this type of extension varies, based on what it is that you want to do. For example, extending the valid character set for kinds, namespaces and names can be fairly harmless, with a few notable exceptions - there is code that expects these to never ever contain a colon or slash, for example, and introducing URL-unsafe characters risks breaking plugins that aren't careful about encoding arguments. Supporting non-strings in annotations may be possible but has not yet been tried out in the real world - there is likely to be some level of plugin breakage that can be hard to predict. You must also be careful about not making the rules more strict than they used to be after populating the catalog with data. This risks making previously valid entities start having processing errors and fail to update. Before making this kind of extension, we recommend that you contact the Backstage maintainers or a support partner to discuss your use case. This is an example of relaxing the format rules of the `metadata.name` field: ```ts import { createBackend } from '@backstage/backend-defaults'; import { createBackendModule } from '@backstage/backend-plugin-api'; import { catalogModelExtensionPoint } from '@backstage/plugin-catalog-node/alpha'; const myCatalogCustomizations = createBackendModule({ pluginId: 'catalog', moduleId: 'catalog-customization', register(reg) { reg.registerInit({ deps: { catalogModel: catalogModelExtensionPoint, }, async init({ catalogModel }) { catalogModel.setFieldValidators({ // This is only one of many methods that you can pass into // setFieldValidators; your editor of choice should help you // find the others. The length checks and regexp inside are // just examples and can be adjusted as needed, but take care // to test your changes thoroughly to ensure that you get // them right. isValidEntityName(value) { return ( typeof value === 'string' && value.length >= 1 && value.length <= 63 && /^[A-Za-z0-9@+_.-]+$/.test(value) ); }, }); }, }); }, }); const backend = createBackend(); // ... add other backend features and the catalog backend itself here ... backend.add(myCatalogCustomizations); backend.start(); ``` Example intent: \"I don't like that the owner is mandatory. I'd like it to be optional.\" After reading and policy-checked entity data from a location, it is sent through the processor chain looking for processors that implement the `validateEntityKind` step, to see that the data is of a known kind and abides by its schema. There is a builtin processor that implements this for all known core kinds and matches the data against their fixed validation schema. This processor can be replaced when building the backend catalog using the `CatalogBuilder`, with a processor of your own that validates the data differently. This replacement processor must have a name that matches the builtin processor, `BuiltinKindsEntityProcessor`. This type of extension is high risk, and may have high impact across the ecosystem depending on the type of change that is made. It is therefore not recommended in normal cases. There will be a large number of plugins and processors - and even the core itself - that make assumptions about the shape of the data and import the typescript data type from the `@backstage/catalog-model` package. 
Example intent: \"Our entities have this auxiliary property that I would like to express for several entity kinds and it doesn't really fit as a spec"
},
{
"data": "The metadata object is currently left open for extension. Any unknown fields found in the metadata will just be stored verbatim in the catalog. However we want to caution against extending the metadata excessively. Firstly, you run the risk of colliding with future extensions to the model. Secondly, it is common that this type of extension lives more comfortably elsewhere - primarily in the metadata labels or annotations, but sometimes you even may want to make a new component type or similar instead. There are some situations where metadata can be the right place. If you feel that you have run into such a case and that it would apply to others, do feel free to contact the Backstage maintainers or a support partner to discuss your use case. Maybe we can extend the core model to benefit both you and others. Example intent: \"The builtin Component kind is fine but we want to add an additional field to the spec for describing whether it's in prod or staging.\" A kind's schema validation typically doesn't forbid \"unknown\" fields in an entity `spec`, and the catalog will happily store whatever is in it. So doing this will usually work from the catalog's point of view. Adding fields like this is subject to the same risks as mentioned about metadata extensions above. Firstly, you run the risk of colliding with future extensions to the model. Secondly, it is common that this type of extension lives more comfortably elsewhere - primarily in the metadata labels or annotations, but sometimes you even may want to make a new component type or similar instead. There are some situations where the spec can be the right place. If you feel that you have run into such a case and that it would apply to others, do feel free to contact the Backstage maintainers or a support partner to discuss your use case. Maybe we can extend the core model to benefit both you and others. Example intents: \"Our custom made build system has the concept of a named pipeline-set, and we want to associate individual components with their corresponding pipeline-sets so we can show their build status.\" \"We have an alerting system that automatically monitors service health, and there's this integration key that binds the service to an alerts pool. We want to be able to show the ongoing alerts for our services in Backstage so it'd be nice to attach that integration key to the entity somehow.\" Annotations are mainly intended to be consumed by plugins, for feature detection or linking into external systems. Sometimes they are added by humans, but often they are automatically generated at ingestion time by processors. There is a set of , but you are free to add additional ones. This carries no risk or impact to other systems as long as you abide by the following naming rules. The `backstage.io` annotation prefix is reserved for use by the Backstage maintainers. Reach out to us if you feel that you would like to make an addition to that prefix. Annotations that pertain to a well known third party system should ideally be prefixed with a domain, in a way that makes sense to a reader and connects it clearly to the system (or the maker of the"
},
{
"data": "For example, you might use a `pagerduty.com` prefix for pagerduty related annotations, but maybe not `ldap.com` for LDAP annotations since it's not directly affiliated with or owned by an LDAP foundation/company/similar. Annotations that have no prefix at all, are considered local to your Backstage instance and can be used freely as such, but you should not make use of them outside of your organization. For example, if you were to open source a plugin that generates or consumes annotations, then those annotations must be properly prefixed with your company domain or a domain that pertains to the annotation at hand. Example intents: \"Our process reaping system wants to periodically scrape for components that have a certain property.\" \"It'd be nice if our service owners could just tag their components somehow to let the CD system know to automatically generate SRV records or not for that service.\" Labels are mainly intended to be used for filtering of entities, by external systems that want to find entities that have some certain property. This is sometimes used for feature detection / selection. An example could be to add a label `deployments.my-company.net/register-srv: \"true\"`. At the time of writing this, the use of labels is very limited and we are still settling together with the community on how to best use them. If you feel that your use case fits the labels best, we would appreciate if you let the Backstage maintainers know. You are free to add labels. This carries no risk or impact to other systems as long as you abide by the following naming rules. The `backstage.io` label prefix is reserved for use by the Backstage maintainers. Reach out to us if you feel that you would like to make an addition to that prefix. Labels that pertain to a well known third party system should ideally be prefixed with a domain, in a way that makes sense to a reader and connects it clearly to the system (or the maker of the system). For example, you might use a `pagerduty.com` prefix for pagerduty related labels, but maybe not `ldap.com` for LDAP labels since it's not directly affiliated with or owned by an LDAP foundation/company/similar. Labels that have no prefix at all, are considered local to your Backstage instance and can be used freely as such, but you should not make use of them outside of your organization. For example, if you were to open source a plugin that generates or consumes labels, then those labels must be properly prefixed with your company domain or a domain that pertains to the label at hand. Example intents: \"We have this concept of service maintainership, separate from ownership, that we would like to make relations to individual users for.\" \"We feel that we want to explicitly model the team-to-global-department mapping as a relation, because it is core to our org setup and we frequently query for it.\" Any processor can emit relations for entities as they are being processed, and new processors can be added when building the backend catalog using the `CatalogBuilder`. They can emit relations based on the entity data itself, or based on information gathered from elsewhere. Relations are directed and go from a source entity to a target"
},
{
"data": "They are also tied to the entity that originated them - the one that was subject to processing when the relation was emitted. Relations may be dangling (referencing something that does not actually exist by that name in the catalog), and callers need to be aware of that. There is a set of , but you are free to emit your own as well. You cannot change the fact that they are directed and have a source and target that have to be an , but you can invent your own types. You do not have to make any changes to the catalog backend in order to accept new relation types. At the time of writing this, we do not have any namespacing/prefixing scheme for relation types. The type is also not validated to contain only some particular set of characters. Until rules for this are settled, you should stick to using only letters, dashes and digits, and to avoid collisions with future core relation types, you may want to prefix the type somehow. For example: `myCompany-maintainerOf` + `myCompany-maintainedBy`. If you have a suggestion for a relation type to be elevated to the core offering, reach out to the Backstage maintainers or a support partner. Example intents: \"The ownerOf/ownedBy relation types sound like a good fit for expressing how users are technical owners of our company specific ServiceAccount kind, and we want to reuse those relation types for that.\" At the time of writing, this is uncharted territory. If the documented use of a relation states that one end of the relation commonly is a User or a Group, for example, then consumers are likely to have conditional statements on the form `if (x.kind === 'User') {} else {}`, which get confused when an unexpected kind appears. If you want to extend the use of an established relation type in a way that has an effect outside of your organization, reach out to the Backstage maintainers or a support partner to discuss risk/impact. It may even be that one end of the relation could be considered for addition to the core. Example intent: \"We would like to convey entity statuses through the catalog in a generic way, as an integration layer. Our monitoring and alerting system has a plugin with Backstage, and it would be useful if the entity's status field contained the current alert state close to the actual entity data for anyone to consume. We find the `status.items` semantics a poor fit, so we would prefer to make our own custom field under `status` for these purposes.\" We have not yet ventured to define any generic semantics for the `status` object. We recommend sticking with the `status.items` mechanism where possible (see below), since third party consumers will not be able to consume your status information otherwise. Please reach out to the maintainers on Discord or by making a GitHub issue describing your use case if you are interested in this topic. Example intent: \"The semantics of the entity `status.items` field are fine for our needs, but we want to contribute our own type of status into that array instead of the catalog specific one.\" This is a simple, low risk way of adding your own status information to"
},
{
"data": "Consumers will be able to easily track and display the status together with other types / sources. We recommend that any status type that are not strictly private within the organization be namespaced to avoid collisions. Statuses emitted by Backstage core processes will for example be prefixed with `backstage.io/`, your organization may prefix with `my-org.net/`, and `pagerduty.com/active-alerts` could be a sensible complete status item type for that particular external system. The mechanics for how to emit custom statuses is not in place yet, so if this is of interest to you, you might consider contacting the maintainers on Discord or my making a GitHub issue describing your use case. also contains more context. Example intent: \"I have multiple versions of my API deployed in different environments so I want to have `mytool-dev` and `mytool-prod` as different entities.\" While it's possible to have different versions of the same thing represented as separate entities, it's something we generally recommend against. We believe that a developer should be able to just find for example one `Component` representing a service, and to be able to see the different code versions that are deployed throughout your stack within its view. This reasoning works similarly for other kinds as well, such as `API`. That being said - sometimes the differences between versions are so large, that they represent what is for all intents and purposes an entirely new entity as seen from the consumer's point of view. This can happen for example for different significant major versions of an API, and in particular if the two major versions coexist in the ecosystem for some time. In those cases, it can be motivated to have one `my-api-v2` and one `my-api-v3` named entity. This matches the end user's expectations when searching for the API, and matches the desire to maybe have separate documentation for the two and similar. But use this sparingly - only do it if the extra modelling burden is outweighed by any potential better clarity for users. When writing your custom plugins, we encourage designing them such that they can show all the different variations through environments etc under one canonical reference to your software in the catalog. For example for a continuous deployment plugin, a user is likely to be greatly helped by being able to see the entity's versions deployed in all different environments next to each other in one view. That is also where they might be offered the ability to promote from one environment to the other, do rollbacks, see their relative performance metrics, and similar. This coherency and collection of tooling in one place is where something like Backstage can offer the most value and effectiveness of use. Splitting your entities apart into small islands makes this harder. This section walks you through the steps involved extending the catalog model with a new Entity type. The first step of introducing a custom entity is to define what shape and schema it has. We do this using a TypeScript type, as well as a JSONSchema schema. Most of the time you will want to have at least the TypeScript type of your extension available in both frontend and backend code, which means you likely want to have an isomorphic package that houses these"
},
{
"data": "Within the Backstage main repo the package naming pattern of `<plugin>-common` is used for isomorphic packages, and you may choose to adopt this pattern as well. You can generate an isomorphic plugin package by running:`yarn new --select plugin-common` or you can run `yarn new` and then select \"plugin-common\" from the list of options There's at this point no existing templates for generating isomorphic plugins using the `@backstage/cli`. Perhaps the simplest way to get started right now is to copy the contents of one of the existing packages in the main repository, such as `plugins/scaffolder-common`, and rename the folder and file contents to the desired name. This example uses foobar as the plugin name so the plugin will be named foobar-common. Once you have a common package in place you can start adding your own entity definitions. For the exact details on how to do that we defer to getting inspired by the existing package. But in short you will need to declare a TypeScript type and a JSONSchema for the new entity kind. The next step is to create a custom processor for your new entity kind. This will be used within the catalog to make sure that it's able to ingest and validate entities of our new kind. Just like with the definition package, you can find inspiration in for example the existing . We also provide a high-level example of what a catalog process for a custom entity might look like: ```ts import { CatalogProcessor, CatalogProcessorEmit, processingResult } from '@backstage/plugin-catalog-node'; import { LocationSpec } from '@backstage/plugin-catalog-common' import { Entity, entityKindSchemaValidator } from '@backstage/catalog-model'; // For an example of the JSONSchema format and how to use $ref markers to the // base definitions, see: // https://github.com/backstage/backstage/tree/master/packages/catalog-model/src/schema/kinds/Component.v1alpha1.schema.json import { foobarEntityV1alpha1Schema } from '@internal/catalog-model'; export class FoobarEntitiesProcessor implements CatalogProcessor { // You often end up wanting to support multiple versions of your kind as you // iterate on the definition, so we keep each version inside this array as a // convenient pattern. private readonly validators = [ // This is where we use the JSONSchema that we export from our isomorphic // package entityKindSchemaValidator(foobarEntityV1alpha1Schema), ]; // Return processor name getProcessorName(): string { return 'FoobarEntitiesProcessor' } // validateEntityKind is responsible for signaling to the catalog processing // engine that this entity is valid and should therefore be submitted for // further processing. async validateEntityKind(entity: Entity): Promise<boolean> { for (const validator of this.validators) { // If the validator throws an exception, the entity will be marked as // invalid. if (validator(entity)) { return true; } } // Returning false signals that we don't know what this is, passing the // responsibility to other processors to try to validate it instead. return false; } async postProcessEntity( entity: Entity, _location: LocationSpec, emit: CatalogProcessorEmit, ): Promise<Entity> { if ( entity.apiVersion === 'example.com/v1alpha1' && entity.kind === 'Foobar' ) { const foobarEntity = entity as FoobarEntityV1alpha1; // Typically you will want to emit any relations associated with the // entity here. emit(processingResult.relation({ ... 
})) } return entity; } } ``` Once the processor is created it can be wired up to the catalog via the `CatalogBuilder` in `packages/backend/src/plugins/catalog.ts`: ```ts title=\"packages/backend/src/plugins/catalog.ts\" / highlight-add-next-line / import { FoobarEntitiesProcessor } from '@internal/plugin-foobar-backend'; export default async function createPlugin( env: PluginEnvironment, ): Promise<Router> { const builder = await CatalogBuilder.create(env); / highlight-add-next-line / builder.addProcessor(new FoobarEntitiesProcessor()); const { processingEngine, router } = await builder.build(); // .. } ```"
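For reference, a hedged sketch of what the matching definitions in the isomorphic `foobar-common` package could look like (the `Foobar` kind, the `example.com/v1alpha1` apiVersion and the `spec` fields are illustrative assumptions):

```ts
import { Entity } from '@backstage/catalog-model';

// TypeScript shape of the new kind, shared by frontend and backend code.
export interface FoobarEntityV1alpha1 extends Entity {
  apiVersion: 'example.com/v1alpha1';
  kind: 'Foobar';
  spec: {
    owner: string;
    lifecycle?: string;
  };
}

// JSONSchema consumed by entityKindSchemaValidator in the processor above.
// The $ref to 'Entity' pulls in the common envelope and metadata schema.
export const foobarEntityV1alpha1Schema = {
  $schema: 'http://json-schema.org/draft-07/schema',
  $id: 'FoobarEntityV1alpha1',
  type: 'object',
  allOf: [
    { $ref: 'Entity' },
    {
      type: 'object',
      required: ['spec'],
      properties: {
        apiVersion: { enum: ['example.com/v1alpha1'] },
        kind: { enum: ['Foobar'] },
        spec: {
          type: 'object',
          required: ['owner'],
          properties: {
            owner: { type: 'string', minLength: 1 },
            lifecycle: { type: 'string' },
          },
        },
      },
    },
  ],
};
```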
}
] |
{
"category": "App Definition and Development",
"file_name": "spark.md",
"project_name": "Beam",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "type: runners title: \"Apache Spark Runner\" aliases: /learn/runners/spark/ <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> The Apache Spark Runner can be used to execute Beam pipelines using . The Spark Runner can execute Spark pipelines just like a native Spark application; deploying a self-contained application for local mode, running on Spark's Standalone RM, or using YARN or Mesos. The Spark Runner executes Beam pipelines on top of Apache Spark, providing: Batch and streaming (and combined) pipelines. The same fault-tolerance as provided by RDDs and DStreams. The same features Spark provides. Built-in metrics reporting using Spark's metrics system, which reports Beam Aggregators as well. Native support for Beam side-inputs via spark's Broadcast variables. The documents the currently supported capabilities of the Spark Runner. The Spark runner comes in three flavors: A legacy Runner which supports only Java (and other JVM-based languages) and that is based on Spark RDD/DStream An Structured Streaming Spark Runner which supports only Java (and other JVM-based languages) and that is based on Spark Datasets and the framework. Note: It is still experimental, its coverage of the Beam model is partial. As for now it only supports batch mode. A portable Runner which supports Java, Python, and Go This guide is split into two parts to document the non-portable and the portable functionality of the Spark Runner. Please use the switcher below to select the appropriate Runner: Beam and its Runners originally only supported JVM-based languages (e.g. Java/Scala/Kotlin). Python and Go SDKs were added later on. The architecture of the Runners had to be changed significantly to support executing pipelines written in other languages. If your applications only use Java, then you should currently go with one of the java based runners. If you want to run Python or Go pipelines with Beam on Spark, you need to use the portable Runner. For more information on portability, please visit the . <nav class=\"language-switcher\"> <strong>Adapt for:</strong> <ul> <li data-value=\"java\">Non portable (Java)</li> <li data-value=\"py\">Portable (Java/Python/Go)</li> </ul> </nav> The Spark runner currently supports Spark's 3.2.x branch. Note: Support for Spark 2.4.x was dropped with Beam 2.46.0. 
{{< paragraph class=\"language-java\" >}} You can add a dependency on the latest version of the Spark runner by adding to your pom.xml the following: {{< /paragraph >}} {{< highlight java >}} <dependency> <groupId>org.apache.beam</groupId> <artifactId>beam-runners-spark-3</artifactId> <version>{{< param release_latest >}}</version> </dependency> {{< /highlight >}} {{< paragraph class=\"language-java\" >}} In some cases, such as running in local mode/Standalone, your (self-contained) application would be required to pack Spark by explicitly adding the following dependencies in your pom.xml: {{< /paragraph >}} {{< highlight java >}} <dependency> <groupId>org.apache.spark</groupId> <artifactId>spark-core_2.12</artifactId> <version>${spark.version}</version> </dependency> <dependency> <groupId>org.apache.spark</groupId> <artifactId>spark-streaming_2.12</artifactId> <version>${spark.version}</version> </dependency> {{< /highlight >}} {{< paragraph class=\"language-java\" >}} And shading the application jar using the maven shade plugin: {{< /paragraph >}} {{< highlight java >}} <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-shade-plugin</artifactId> <configuration> <createDependencyReducedPom>false</createDependencyReducedPom> <filters> <filter> <artifact>:</artifact> <excludes> <exclude>META-INF/*.SF</exclude> <exclude>META-INF/*.DSA</exclude> <exclude>META-INF/*.RSA</exclude> </excludes> </filter> </filters> </configuration> <executions> <execution> <phase>package</phase> <goals> <goal>shade</goal> </goals> <configuration> <shadedArtifactAttached>true</shadedArtifactAttached> <shadedClassifierName>shaded</shadedClassifierName> <transformers> <transformer"
},
{
"data": "</transformers> </configuration> </execution> </executions> </plugin> {{< /highlight >}} {{< paragraph class=\"language-java\" >}} After running <code>mvn package</code>, run <code>ls target</code> and you should see (assuming your artifactId is `beam-examples` and the version is `1.0.0`): {{< /paragraph >}} {{< highlight java >}} beam-examples-1.0.0-shaded.jar {{< /highlight >}} {{< paragraph class=\"language-java\" >}} To run against a Standalone cluster simply run: {{< /paragraph >}} {{< paragraph class=\"language-java\" >}} <br><b>For RDD/DStream based runner:</b><br> {{< /paragraph >}} {{< highlight java >}} spark-submit --class com.beam.examples.BeamPipeline --master spark://HOST:PORT target/beam-examples-1.0.0-shaded.jar --runner=SparkRunner {{< /highlight >}} {{< paragraph class=\"language-java\" >}} <br><b>For Structured Streaming based runner:</b><br> {{< /paragraph >}} {{< highlight java >}} spark-submit --class com.beam.examples.BeamPipeline --master spark://HOST:PORT target/beam-examples-1.0.0-shaded.jar --runner=SparkStructuredStreamingRunner {{< /highlight >}} {{< paragraph class=\"language-py\" >}} You will need Docker to be installed in your execution environment. To develop Apache Beam with Python you have to install the Apache Beam Python SDK: `pip install apache_beam`. Please refer to the on how to create a Python pipeline. {{< /paragraph >}} {{< highlight py >}} pip install apache_beam {{< /highlight >}} {{< paragraph class=\"language-py\" >}} Starting from Beam 2.20.0, pre-built Spark Job Service Docker images are available at . {{< /paragraph >}} {{< paragraph class=\"language-py\" >}} For older Beam versions, you will need a copy of Apache Beam's source code. You can download it on the . {{< /paragraph >}} {{< paragraph class=\"language-py\" >}} Start the JobService endpoint: with Docker (preferred): `docker run --net=host apache/beamsparkjob_server:latest` or from Beam source code: `./gradlew :runners:spark:3:job-server:runShadow` {{< /paragraph >}} {{< paragraph class=\"language-py\" >}} The JobService is the central instance where you submit your Beam pipeline. The JobService will create a Spark job for the pipeline and execute the job. To execute the job on a Spark cluster, the Beam JobService needs to be provided with the Spark master address. {{< /paragraph >}} {{< paragraph class=\"language-py\" >}}2. Submit the Python pipeline to the above endpoint by using the `PortableRunner`, `jobendpoint` set to `localhost:8099` (this is the default address of the JobService), and `environmenttype` set to `LOOPBACK`. For example:{{< /paragraph >}} {{< highlight py >}} import apache_beam as beam from apachebeam.options.pipelineoptions import PipelineOptions options = PipelineOptions([ \"--runner=PortableRunner\", \"--job_endpoint=localhost:8099\", \"--environment_type=LOOPBACK\" ]) with beam.Pipeline(options) as p: ... {{< /highlight >}} Deploying your Beam pipeline on a cluster that already has a Spark deployment (Spark classes are available in container classpath) does not require any additional dependencies. For more details on the different deployment modes see: , , or . {{< paragraph class=\"language-py\" >}}1. 
Start a Spark cluster which exposes the master on port 7077 by default.{{< /paragraph >}} {{< paragraph class=\"language-py\" >}} Start JobService that will connect with the Spark master: with Docker (preferred): `docker run --net=host apache/beamsparkjob_server:latest --spark-master-url=spark://localhost:7077` or from Beam source code: `./gradlew :runners:spark:3:job-server:runShadow -PsparkMasterUrl=spark://localhost:7077` {{< /paragraph >}} {{< paragraph class=\"language-py\" >}}3. Submit the pipeline as above. Note however that `environment_type=LOOPBACK` is only intended for local testing. See for details.{{< /paragraph >}} {{< paragraph class=\"language-py\" >}} (Note that, depending on your cluster setup, you may need to change the `environment_type` option. See for details.) {{< /paragraph >}} To run Beam jobs written in Python, Go, and other supported languages, you can use the `SparkRunner` and `PortableRunner` as described on the Beam's page (also see ). The following example runs a portable Beam job in Python from the Dataproc cluster's master node with Yarn backed. Note: This example executes successfully with Dataproc 2.0, Spark 3.1.2 and Beam 2.37.0. Create a Dataproc cluster with component enabled. <pre> gcloud dataproc clusters create <b><i>CLUSTER_NAME</i></b> \\ --optional-components=DOCKER \\ --image-version=<b><i>DATAPROCIMAGEVERSION</i></b> \\ --region=<b><i>REGION</i></b> \\ --enable-component-gateway \\ --scopes=https://www.googleapis.com/auth/cloud-platform \\ --properties spark:spark.master.rest.enabled=true </pre> `--optional-components`: Docker. `--image-version`: the , which determines the Spark version installed on the cluster (for example, see the Apache Spark component versions listed for the latest and previous four"
},
{
"data": "`--region`: a supported Dataproc . `--enable-component-gateway`: enable access to . `--scopes`: enable API access to GCP services in the same project. `--properties`: add specific configuration for some component, here spark.master.rest is enabled to use job submit to the cluster. Create a Cloud Storage bucket. <pre> gsutil mb <b><i>BUCKET_NAME</i></b> </pre> Install the necessary Python libraries for the job in your local environment. <pre> python -m pip install apache-beam[gcp]==<b><i>BEAM_VERSION</i></b> </pre> Bundle the word count example pipeline along with all dependencies, artifacts, etc. required to run the pipeline into a jar that can be executed later. <pre> python -m apache_beam.examples.wordcount \\ --runner=SparkRunner \\ --outputexecutablepath=<b><i>OUTPUTJARPATH</b></i> \\ --output=gs://<b><i>BUCKET_NAME</i></b>/python-wordcount-out \\ --spark_version=3 </pre> `--runner`(required): `SparkRunner`. `--outputexecutablepath`(required): path for the bundle jar to be created. `--output`(required): where output shall be written. `--spark_version`(optional): select spark version 3 (default) or 2 (deprecated!). Submit spark job to Dataproc cluster's master node. <pre> gcloud dataproc jobs submit spark \\ --cluster=<b><i>CLUSTER_NAME</i></b> \\ --region=<b><i>REGION</i></b> \\ --class=org.apache.beam.runners.spark.SparkPipelineRunner \\ --jars=<b><i>OUTPUTJARPATH</b></i> </pre> `--cluster`: name of created Dataproc cluster. `--region`: a supported Dataproc . `--class`: the entry point for your application. `--jars`: path to the bundled jar including your application and all dependencies. Check that the results were written to your bucket. <pre> gsutil cat gs://<b><i>BUCKETNAME</b></i>/python-wordcount-out-<b><i>SHARDID</b></i> </pre> When executing your pipeline with the Spark Runner, you should consider the following pipeline options. {{< paragraph class=\"language-java\" >}} <br><b>For RDD/DStream based runner:</b><br> {{< /paragraph >}} <div class=\"table-container-wrapper\"> <table class=\"language-java table table-bordered\"> <tr> <th>Field</th> <th>Description</th> <th>Default Value</th> </tr> <tr> <td><code>runner</code></td> <td>The pipeline runner to use. This option allows you to determine the pipeline runner at runtime.</td> <td>Set to <code>SparkRunner</code> to run using Spark.</td> </tr> <tr> <td><code>sparkMaster</code></td> <td>The url of the Spark Master. This is the equivalent of setting <code>SparkConf#setMaster(String)</code> and can either be <code>local[x]</code> to run local with x cores, <code>spark://host:port</code> to connect to a Spark Standalone cluster, <code>mesos://host:port</code> to connect to a Mesos cluster, or <code>yarn</code> to connect to a yarn cluster.</td> <td><code>local[4]</code></td> </tr> <tr> <td><code>storageLevel</code></td> <td>The <code>StorageLevel</code> to use when caching RDDs in batch pipelines. The Spark Runner automatically caches RDDs that are evaluated repeatedly. 
This is a batch-only property as streaming pipelines in Beam are stateful, which requires Spark DStream's <code>StorageLevel</code> to be <code>MEMORY_ONLY</code>.</td> <td>MEMORY_ONLY</td> </tr> <tr> <td><code>batchIntervalMillis</code></td> <td>The <code>StreamingContext</code>'s <code>batchDuration</code> - setting Spark's batch interval.</td> <td><code>1000</code></td> </tr> <tr> <td><code>enableSparkMetricSinks</code></td> <td>Enable reporting metrics to Spark's metrics Sinks.</td> <td>true</td> </tr> <tr> <td><code>cacheDisabled</code></td> <td>Disable caching of reused PCollections for whole Pipeline. It's useful when it's faster to recompute RDD rather than save.</td> <td>false</td> </tr> </table> </div> {{< paragraph class=\"language-java\" >}} <br><b>For Structured Streaming based runner:</b><br> {{< /paragraph >}} <div class=\"table-container-wrapper\"> <table class=\"language-java table table-bordered\"> <tr> <th>Field</th> <th>Description</th> <th>Default Value</th> </tr> <tr> <td><code>runner</code></td> <td>The pipeline runner to use. This option allows you to determine the pipeline runner at runtime.</td> <td>Set to <code>SparkStructuredStreamingRunner</code> to run using Spark Structured Streaming.</td> </tr> <tr> <td><code>sparkMaster</code></td> <td>The url of the Spark Master. This is the equivalent of setting <code>SparkConf#setMaster(String)</code> and can either be <code>local[x]</code> to run local with x cores, <code>spark://host:port</code> to connect to a Spark Standalone cluster, <code>mesos://host:port</code> to connect to a Mesos cluster, or <code>yarn</code> to connect to a yarn cluster.</td> <td><code>local[4]</code></td> </tr> <tr> <td><code>testMode</code></td> <td>Enable test mode that gives useful debugging information: catalyst execution plans and Beam DAG printing</td> <td>false</td> </tr> <tr> <td><code>enableSparkMetricSinks</code></td> <td>Enable reporting metrics to Spark's metrics Sinks.</td> <td>true</td> </tr> <tr> <td><code>checkpointDir</code></td> <td>A checkpoint directory for streaming resilience, ignored in batch. For durability, a reliable filesystem such as HDFS/S3/GS is necessary.</td> <td>local dir in /tmp</td> </tr> <tr> <td><code>filesToStage</code></td> <td>Jar-Files to send to all workers and put on the"
},
{
"data": "<td>all files from the classpath</td> </tr> <tr> <td><code>EnableSparkMetricSinks</code></td> <td>Enable/disable sending aggregator values to Spark's metric sinks</td> <td>true</td> </tr> </table> </div> <div class=\"table-container-wrapper\"> <table class=\"language-py table table-bordered\"> <tr> <th>Field</th> <th>Description</th> <th>Value</th> </tr> <tr> <td><code>--runner</code></td> <td>The pipeline runner to use. This option allows you to determine the pipeline runner at runtime.</td> <td>Set to <code>PortableRunner</code> to run using Spark.</td> </tr> <tr> <td><code>--job_endpoint</code></td> <td>Job service endpoint to use. Should be in the form hostname:port, e.g. localhost:3000</td> <td>Set to match your job service endpoint (localhost:8099 by default)</td> </tr> </table> </div> When submitting a Spark application to cluster, it is common (and recommended) to use the <code>spark-submit</code> script that is provided with the spark installation. The <code>PipelineOptions</code> described above are not to replace <code>spark-submit</code>, but to complement it. Passing any of the above mentioned options could be done as one of the <code>application-arguments</code>, and setting <code>--master</code> takes precedence. For more on how to generally use <code>spark-submit</code> checkout Spark . You can monitor a running Spark job using the Spark . By default, this is available at port `4040` on the driver node. If you run Spark on your local machine that would be `http://localhost:4040`. Spark also has a history server to . {{< paragraph class=\"language-java\" >}} Metrics are also available via . Spark provides a that allows reporting Spark metrics to a variety of Sinks. The Spark runner reports user-defined Beam Aggregators using this same metrics system and currently supports and . Providing support for additional Sinks supported by Spark is easy and straight-forward. {{< /paragraph >}} {{< paragraph class=\"language-py\" >}}Spark metrics are not yet supported on the portable runner.{{< /paragraph >}} {{< paragraph class=\"language-java\" >}} <br><b>For RDD/DStream based runner:</b><br> If your pipeline uses an <code>UnboundedSource</code> the Spark Runner will automatically set streaming mode. Forcing streaming mode is mostly used for testing and is not recommended. <br> <br><b>For Structured Streaming based runner:</b><br> Streaming mode is not implemented yet in the Spark Structured Streaming runner. {{< /paragraph >}} {{< paragraph class=\"language-py\" >}} Streaming is not yet supported on the Spark portable runner. {{< /paragraph >}} {{< paragraph class=\"language-java\" >}} <br><b>For RDD/DStream based runner:</b><br> If you would like to execute your Spark job with a provided <code>SparkContext</code>, such as when using the , or use <code>StreamingListeners</code>, you can't use <code>SparkPipelineOptions</code> (the context or a listener cannot be passed as a command-line argument anyway). Instead, you should use <code>SparkContextOptions</code> which can only be used programmatically and is not a common <code>PipelineOptions</code> implementation. <br> <br><b>For Structured Streaming based runner:</b><br> Provided SparkSession and StreamingListeners are not supported on the Spark Structured Streaming runner {{< /paragraph >}} {{< paragraph class=\"language-py\" >}} Provided SparkContext and StreamingListeners are not supported on the Spark portable runner. 
{{< /paragraph >}} To submit a beam job directly on spark kubernetes cluster without spinning up an extra job server, you can do: ``` spark-submit --master MASTER_URL \\ --conf spark.kubernetes.driver.podTemplateFile=driverpodtemplate.yaml \\ --conf spark.kubernetes.executor.podTemplateFile=executorpodtemplate.yaml \\ --class org.apache.beam.runners.spark.SparkPipelineRunner \\ --conf spark.kubernetes.container.image=apache/spark:v3.3.2 \\ ./wc_job.jar ``` Similar to run the beam job on Dataproc, you can bundle the job jar like below. The example use the `PROCESS` type of to execute the job by processes. ``` python -m beamexamplewc \\ --runner=SparkRunner \\ --outputexecutablepath=./wc_job.jar \\ --environment_type=PROCESS \\ --environment_config='{\\\"command\\\": \\\"/opt/apache/beam/boot\\\"}' \\ --spark_version=3 ``` And below is an example of kubernetes executor pod template, the `initContainer` is required to download the beam SDK harness to run the beam pipelines. ``` spec: containers: name: spark-kubernetes-executor volumeMounts: name: beam-data mountPath: /opt/apache/beam/ initContainers: name: init-beam image: apache/beampython3.7sdk command: cp /opt/apache/beam/boot /init-container/data/boot volumeMounts: name: beam-data mountPath: /init-container/data volumes: name: beam-data emptyDir: {} ``` An of configuring Spark to run Apache beam job with a job server."
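For completeness, a minimal self-contained Python pipeline targeting the portable Spark runner might look like the sketch below (the job service address and the `LOOPBACK` environment are assumptions suitable for local testing; use `PROCESS` or `DOCKER` on a real cluster):

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    "--runner=PortableRunner",
    "--job_endpoint=localhost:8099",  # Spark job service started earlier
    "--environment_type=LOOPBACK",    # local testing only
])

# Tiny word-count: the PTransforms below are standard Beam primitives.
with beam.Pipeline(options=options) as p:
    (p
     | "Create" >> beam.Create(["to be or not to be"])
     | "Split" >> beam.FlatMap(str.split)
     | "Pair" >> beam.Map(lambda word: (word, 1))
     | "Count" >> beam.CombinePerKey(sum)
     | "Print" >> beam.Map(print))
```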
}
] |
{
"category": "App Definition and Development",
"file_name": "declare_masking_rules.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "Put on your Masks ! =============================================================================== The main idea of this extension is to offer anonymization by design. The data masking rules should be written by the people who develop the application because they have the best knowledge of how the data model works. Therefore masking rules must be implemented directly inside the database schema. This allows to mask the data directly inside the PostgreSQL instance without using an external tool and thus limiting the exposure and the risks of data leak. The data masking rules are declared simply by using [security labels]: <!-- demo/declaremaskingrules.sql --> ```sql CREATE TABLE player( id SERIAL, name TEXT, points INT); INSERT INTO player VALUES ( 1, 'Kareem Abdul-Jabbar', 38387), ( 5, 'Michael Jordan', 32292 ); SECURITY LABEL FOR anon ON COLUMN player.name IS 'MASKED WITH FUNCTION anon.fakelastname()'; SECURITY LABEL FOR anon ON COLUMN player.id IS 'MASKED WITH VALUE NULL'; ``` Escaping String literals As you may have noticed the masking rule definitions are placed between single quotes. Therefore if you need to use a string inside a masking rule, you need to use [C-Style escapes] like this: ```sql SECURITY LABEL FOR anon ON COLUMN player.name IS E'MASKED WITH VALUE \\'CONFIDENTIAL\\''; ``` Or use [dollar quoting] which is easier to read: ```sql SECURITY LABEL FOR anon ON COLUMN player.name IS 'MASKED WITH VALUE $$CONFIDENTIAL$$'; ``` Listing masking rules To display all the masking rules declared in the current database, check out the `anon.pgmaskingrules`: ```sql SELECT * FROM anon.pgmaskingrules; ``` Debugging masking rules When an error occurs to due a wrong masking rule, you can get more detailed information about the problem by setting `clientminmessages` to `DEBUG` and you will get useful details ``` sql postgres=# SET clientminmessages=DEBUG; SET postgres=# SELECT anon.anonymize_database(); DEBUG: Anonymize table public.bar with firstname = anon.fakefirstname() DEBUG: Anonymize table public.foo with id = NULL ERROR: Cannot mask a \"NOT NULL\" column with a NULL value HINT: If privacybydesign is enabled, add a default value to the column CONTEXT: PL/pgSQL function anon.anonymize_table(regclass) line 47 at RAISE SQL function \"anonymize_database\" statement 1 ``` Removing a masking rule You can simply erase a masking rule like this: ```sql SECURITY LABEL FOR anon ON COLUMN player.name IS NULL; ``` To remove all rules at once, you can use: ```sql SELECT anon.removemasksforallcolumns(); ``` Limitations The maximum length of a masking rule is 1024 characters. If you need more, you should probably [write a dedicated masking function]. The masking rules are NOT INHERITED* ! If you have split a table into multiple partitions, you need to declare the masking rules for each partition. Declaring Rules with COMMENTs is deprecated. Previous version of the extension allowed users to declare masking rules using the `COMMENT` syntax. This is not suppported any more. `SECURITY LABELS` are now the only way to declare rules."
}
] |
{
"category": "App Definition and Development",
"file_name": "tools_dump.md",
"project_name": "YDB",
"subcategory": "Database"
} | [
{
"data": "The `tools dump` command dumps the database data and objects schema to the client file system, in the format described in the : ```bash {{ ydb-cli }} [connection options] tools dump [options] ``` {% include %} `[options]`: Command parameters: `-p PATH` or `--path PATH`: Path to the database directory with objects or a path to the table to be dumped. The root database directory is used by default. The dump includes all subdirectories whose names don't begin with a dot and the tables in them whose names don't begin with a dot. To dump such tables or the contents of such directories, you can specify their names explicitly in this parameter. `-o PATH` or `--output PATH`: Path to the directory in the client file system to dump the data to. If such a directory doesn't exist, it will be created. The entire path to it must already exist, however. If the specified directory exists, it must be empty. If the parameter is omitted, a directory with the name `backup_YYYYDDMMTHHMMSS` will be created in the current directory, with YYYYDDMM being the date and HHMMSS: the time when the dump began. `--exclude STRING`: Template () to exclude paths from export. Specify this parameter multiple times for different templates. `--scheme-only`: Dump only the details about the database schema objects, without dumping their data `--consistency-level VAL`: The consistency level. Possible options: `database`: A fully consistent dump, with one snapshot taken before starting dumping. Applied by default. `table`: Consistency within each dumped table, taking individual independent snapshots for each table dumped. Might run faster and have a smaller effect on the current workload processing in the database. `--avoid-copy`: Do not create a snapshot before dumping. The consistency snapshot taken by default might be inapplicable in some cases (for example, for tables with external blobs). `--save-partial-result`: Don't delete the result of partial dumping. Without this option, the dumps that terminated with an error are deleted. `--ordered`: Rows in the exported tables will be sorted by the primary key. {% include %} With automatic creation of the `backup_...` directory In the current directory: ``` {{ ydb-cli }} --profile quickstart tools dump ``` To a specific directory: ``` {{ ydb-cli }} --profile quickstart tools dump -o ~/backup_quickstart ``` ``` {{ ydb-cli }} --profile quickstart tools dump -p dir1 --scheme-only ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "explicit_valueorerror_converting_constructor.md",
"project_name": "ArangoDB",
"subcategory": "Database"
} | [
{
"data": "+++ title = \"`explicit basicresult(concepts::valueor_error<T, E> &&)`\" description = \"Explicit converting constructor from `concepts::valueorerror<T, E>` concept matching types. Available if `convert::valueorerror<>` permits it. Constexpr, triviality and noexcept propagating.\" categories = [\"constructors\", \"explicit-constructors\", \"converting-constructors\"] weight = 300 +++ Explicit converting constructor from {{% api \"concepts::valueorerror<T, E>\" %}} concept matching types. Delegates to the `basic_result` move constructor. Requires: `convert::`{{% api \"valueorerror<T, U>\" %}} has an available call operator, and if the input is a `basicresult` or `basicoutcome`, then `convert::valueorerror<>` has enabled those inputs for that `convert::valueorerror<>` specialisation. Complexity: Same as for the copy or move constructor from the input's `.value()` or `.error()` respectively. Constexpr, triviality and noexcept of underlying operations is propagated. Guarantees: If an exception is thrown during the operation, the object is left in a partially completed state, as per the normal rules for the same operation on a `struct`."
}
] |
{
"category": "App Definition and Development",
"file_name": "README_crdb.md",
"project_name": "CockroachDB",
"subcategory": "Database"
} | [
{
"data": "These files are a fork of `pkg/json` package. `scanner.go`, `decoder.go`, and `reader.go` were copied from pkg/json Additional utility methods added (e.g. to reset decoder) Ability to query reader position, etc. Package name renamed from `json` to `tokenizer` Removal of unused functions Probably, the largest change from the original is in the `scanner.go`. Changes were made to support un-escaping escaped characters, including unicode characters."
}
] |
{
"category": "App Definition and Development",
"file_name": "kbcli_alert_add-receiver.md",
"project_name": "KubeBlocks by ApeCloud",
"subcategory": "Database"
} | [
{
"data": "title: kbcli alert add-receiver Add alert receiver, such as email, slack, webhook and so on. ``` kbcli alert add-receiver [flags] ``` ``` kbcli alert add-receiver --webhook='url=https://open.feishu.cn/open-apis/bot/v2/hook/foo' kbcli alert add-receiver --webhook='url=https://open.feishu.cn/open-apis/bot/v2/hook/foo,token=XXX' kbcli alert add-receiver --email='[email protected],[email protected]' kbcli alert add-receiver --email='[email protected],[email protected]' --cluster=mycluster kbcli alert add-receiver --email='[email protected],[email protected]' --cluster=mycluster --severity=warning kbcli alert add-receiver --slack api_url=https://hooks.slackConfig.com/services/foo,channel=monitor,username=kubeblocks-alert-bot ``` ``` --cluster stringArray Cluster name, such as mycluster, more than one cluster can be specified, such as mycluster1,mycluster2 --email stringArray Add email address, such as [email protected], more than one emailConfig can be specified separated by comma -h, --help help for add-receiver --severity stringArray Alert severity level, critical, warning or info, more than one severity level can be specified, such as critical,warning --slack stringArray Add slack receiver, such as api_url=https://hooks.slackConfig.com/services/foo,channel=monitor,username=kubeblocks-alert-bot --webhook stringArray Add webhook receiver, such as url=https://open.feishu.cn/open-apis/bot/v2/hook/foo,token=xxxxx ``` ``` --as string Username to impersonate for the operation. User could be a regular user or a service account in a namespace. --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation. --cache-dir string Default cache directory (default \"$HOME/.kube/cache\") --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --context string The name of the kubeconfig context to use --disable-compression If true, opt-out of response compression for all requests to the server --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to the kubeconfig file to use for CLI requests. --match-server-version Require server version to match client version -n, --namespace string If present, the namespace scope for this CLI request --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -s, --server string The address and port of the Kubernetes API server --tls-server-name string Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use ``` - Manage alert receiver, include add, list and delete receiver."
}
] |
{
"category": "App Definition and Development",
"file_name": "string_manipulation.md",
"project_name": "MongoDB",
"subcategory": "Database"
} | [
{
"data": "For string manipulation, use the util/mongoutils/str.h library. `util/mongoutils/str.h` provides string helper functions for each manipulation. `str::stream()` is quite useful for assembling strings inline: ``` uassert(12345, str::stream() << \"bad ns:\" << ns, isOk); ``` ``` / A StringData object wraps a 'const std::string&' or a 'const char*' without copying its contents. The most common usage is as a function argument that takes any of the two forms of strings above. Fundamentally, this class tries to work around the fact that string literals in C++ are char[N]'s. * Important: the object StringData wraps must remain alive while the StringData is. */ class StringData { ``` See also ."
}
] |
{
"category": "App Definition and Development",
"file_name": "performance-test.md",
"project_name": "ClickHouse",
"subcategory": "Database"
} | [
{
"data": "slug: /en/operations/performance-test sidebar_position: 54 sidebar_label: Testing Hardware title: \"How to Test Your Hardware with ClickHouse\" import SelfManaged from '@site/docs/en/snippets/selfmanagedonlynoroadmap.md'; <SelfManaged /> You can run a basic ClickHouse performance test on any server without installation of ClickHouse packages. You can run the benchmark with a single script. Download the script. ``` wget https://raw.githubusercontent.com/ClickHouse/ClickBench/main/hardware/hardware.sh ``` Run the script. ``` chmod a+x ./hardware.sh ./hardware.sh ``` Copy the output and send it to [email protected] All the results are published here: https://clickhouse.com/benchmark/hardware/"
}
] |
{
"category": "App Definition and Development",
"file_name": "tx-control.md",
"project_name": "YDB",
"subcategory": "Database"
} | [
{
"data": "title: \"Overview of the code recipe for setting up the transaction execution mode in {{ ydb-short-name }}\" description: \"In this article, you will learn how to set up the transaction execution mode in different SDKs to execute queries against {{ ydb-short-name }}.\" To run your queries, first you need to specify the in the {{ ydb-short-name }} SDK. Below are code examples showing the {{ ydb-short-name }} SDK built-in tools to create an object for the transaction execution mode. {% include %} {% list tabs %} Go (native) ```go package main import ( \"context\" \"os\" \"github.com/ydb-platform/ydb-go-sdk/v3\" \"github.com/ydb-platform/ydb-go-sdk/v3/table\" ) func main() { ctx, cancel := context.WithCancel(context.Background()) defer cancel() db, err := ydb.Open(ctx, os.Getenv(\"YDBCONNECTIONSTRING\"), ydb.WithAccessTokenCredentials(os.Getenv(\"YDB_TOKEN\")), ) if err != nil { panic(err) } defer db.Close(ctx) txControl := table.TxControl( table.BeginTx(table.WithSerializableReadWrite()), table.CommitTx(), ) err := driver.Table().Do(scope.Ctx, func(ctx context.Context, s table.Session) error { , , err := s.Execute(ctx, txControl, \"SELECT 1\", nil) return err }) if err != nil { fmt.Printf(\"unexpected error: %v\", err) } } ``` PHP ```php <?php use YdbPlatform\\Ydb\\Ydb; $config = [ // Database path 'database' => '/ru-central1/b1glxxxxxxxxxxxxxxxx/etn0xxxxxxxxxxxxxxxx', // Database endpoint 'endpoint' => 'ydb.serverless.yandexcloud.net:2135', // Auto discovery (dedicated server only) 'discovery' => false, // IAM config 'iam_config' => [ // 'rootcertfile' => './CA.pem', Root CA file (uncomment for dedicated server only) ], 'credentials' => new AccessTokenAuthentication('AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA') // use from reference/ydb-sdk/auth ]; $ydb = new Ydb($config); $ydb->table()->retryTransaction(function(Session $session){ $session->query('SELECT 1;'); }) ``` {% endlist %} {% list tabs %} Go (native) ```go package main import ( \"context\" \"os\" \"github.com/ydb-platform/ydb-go-sdk/v3\" \"github.com/ydb-platform/ydb-go-sdk/v3/table\" ) func main() { ctx, cancel := context.WithCancel(context.Background()) defer cancel() db, err := ydb.Open(ctx, os.Getenv(\"YDBCONNECTIONSTRING\"), ydb.WithAccessTokenCredentials(os.Getenv(\"YDB_TOKEN\")), ) if err != nil { panic(err) } defer db.Close(ctx) txControl := table.TxControl( table.BeginTx(table.WithOnlineReadOnly(table.WithInconsistentReads())), table.CommitTx(), ) err := driver.Table().Do(scope.Ctx, func(ctx context.Context, s table.Session) error { , , err := s.Execute(ctx, txControl, \"SELECT 1\", nil) return err }) if err != nil { fmt.Printf(\"unexpected error: %v\", err) } } ``` PHP {% include %} {% endlist %} {% list tabs %} Go (native) ```go package main import ( \"context\" \"os\" \"github.com/ydb-platform/ydb-go-sdk/v3\" \"github.com/ydb-platform/ydb-go-sdk/v3/table\" ) func main() { ctx, cancel := context.WithCancel(context.Background()) defer cancel() db, err := ydb.Open(ctx, os.Getenv(\"YDBCONNECTIONSTRING\"), ydb.WithAccessTokenCredentials(os.Getenv(\"YDB_TOKEN\")), ) if err != nil { panic(err) } defer db.Close(ctx) txControl := table.TxControl( table.BeginTx(table.WithStaleReadOnly()), table.CommitTx(), ) err := driver.Table().Do(scope.Ctx, func(ctx context.Context, s table.Session) error { , , err := s.Execute(ctx, txControl, \"SELECT 1\", nil) return err }) if err != nil { fmt.Printf(\"unexpected error: %v\", err) } } ``` PHP {% include %} {% endlist %} {% list tabs %} Go (native) ```go package main import ( 
\"context\" \"os\" \"github.com/ydb-platform/ydb-go-sdk/v3\" \"github.com/ydb-platform/ydb-go-sdk/v3/table\" ) func main() { ctx, cancel := context.WithCancel(context.Background()) defer cancel() db, err := ydb.Open(ctx, os.Getenv(\"YDBCONNECTIONSTRING\"), ydb.WithAccessTokenCredentials(os.Getenv(\"YDB_TOKEN\")), ) if err != nil { panic(err) } defer db.Close(ctx) txControl := table.TxControl( table.BeginTx(table.WithSnapshotReadOnly()), table.CommitTx(), ) err := driver.Table().Do(scope.Ctx, func(ctx context.Context, s table.Session) error { , , err := s.Execute(ctx, txControl, \"SELECT 1\", nil) return err }) if err != nil { fmt.Printf(\"unexpected error: %v\", err) } } ``` PHP {% include %} {% endlist %}"
}
] |
{
"category": "App Definition and Development",
"file_name": "CODE_OF_CONDUCT.md",
"project_name": "OceanBase",
"subcategory": "Database"
} | [
{
"data": "In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to make participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity, and expression, level of experience, education, socioeconomic status, nationality, personal appearance, race, religion, or sexual identity and orientation. Examples of behavior that contributes to creating a positive environment include: Using welcoming and inclusive language Being respectful of differing viewpoints and experiences Gracefully accepting constructive criticism Focusing on what is best for the community Showing empathy towards other community members Examples of unacceptable behavior by participants include: The use of sexualized language or imagery and unwelcome sexual attention or advances Trolling, insulting/derogatory comments, and personal or political attacks Public or private harassment Publishing others' private information, such as a physical or electronic address, without explicit permission Other conduct which could reasonably be considered inappropriate in a professional setting Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. This Code of Conduct applies within all project spaces, and it also applies when an individual is representing the project or its community in public spaces. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at [email protected]. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality about the reporter of an incident. Further details of specific enforcement policies may be posted separately. Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership. This Code of Conduct is adapted from the , version 1.4. For answers to common questions about this code of conduct, see ."
}
] |
{
"category": "App Definition and Development",
"file_name": "e5.0.4.en.md",
"project_name": "EMQ Technologies",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "Unified the configuration formats for `cluster.core_nodes` and `cluster.statics.seeds`. Now they both support formats in array `[\"[email protected]\", \"[email protected]\"]` and the comma-separated string `\"[email protected],[email protected]\"`. Introduced a new function to convert a formatted date to an integer timestamp: datetounix_ts/3. `datetounix_ts(TimeUnit, FormatString, InputDateTimeString)` Optimized the configuration priority mechanism to fix the issue where the configuration changes made to `etc/emqx.conf` do not take effect after restarting EMQX. More information about the new mechanism: Deprecated the integration with StatsD. Set the level of plugin configuration options to low, users usually manage the plugins through the dashboard, rarely modify them manually, so we lowered the level. Renamed `etcd.ssl` to `etcd.ssl_options` to keep all SSL options consistent in the configuration file. Improved the storage format of Unicode characters in data files, Now we can store Unicode characters. For example: `SELECT * FROM \"t/1\" WHERE clientid = \"--\"`. Added `shutdown_counter` printout to `emqx ctl listeners` command. Increased the time precision of trace logs from second to microsecond. For example, change from `2023-05-02T08:43:50+00:00` to `2023-05-02T08:43:50.237945+00:00`. Renamed `maxmessagequeuelen` to `maxmailboxsize` in the `forceshutdown` configuration. The old name is kept as an alias, so this change is backward compatible. Hide the `resourceoption.requesttimeout` of the webhook and it will use the value of `http` `request_timeout`. Added node rebalance/node evacuation functionality. See also: Implemented Pulsar Producer Bridge and only producer role is supported now. Introduced 3 built-in functions in the rule engine SQL-like language for creating values of the MongoDB date type. Supported and schemas in Schema Registry. Implemented OpenTSDB data bridge. Implemented Oracle Database Bridge. Added enterprise data bridge for Apache IoTDB. Improved get config items performance by eliminating temporary references. Simplified the configuration of the `retainer` feature. Marked `flow_control` as a non-importance field. Improved the security and privacy of some resource logs by masking sensitive information in the log. Reduced resource usage per MQTT packet handling. Reduced memory footprint in hot code path. The hot path includes the code that is frequently executed in core functionalities such as message handling, connection management, authentication, and authorization. Improved the configuration of the limiter. Reduced the complexity of the limiter's configuration. Updated the `configs/limiter` API to suit this refactor. Reduced the memory usage of the limiter configuration. Optimized the instance of limiter for whose rate is `infinity` to reduce memory and CPU usage. Removed the default limit of connect rate which used to be `1000/s`. Added support for QUIC TLS password-protected certificate file. Fixed the issue that could lead to crash logs being printed when stopping EMQX via `systemd`. `2023-03-29T16:43:25.915761+08:00 [error] Generic server memsup terminating. Reason: {portdied,normal}. Last message: {'EXIT',<0.2117.0>,{portdied,normal}}. State: [{data,[{\"Timeout\",60000}]},{items,{\"Memory Usage\",[{\"Allocated\",929959936},{\"Total\",3832242176}]}},{items,{\"Worst Memory User\",[{\"Pid\",<0.2031.0>},{\"Memory\",4720472}]}}]. 
2023-03-29T16:43:25.924764+08:00 [error] crasher: initial call: memsup:init/1, pid: <0.2116.0>, registeredname: memsup, exit: {{portdied,normal},[{genserver,handlecommonreply,8,[{file,\"genserver.erl\"},{line,811}]},{proclib,initpdoapply,3,[{file,\"proclib.erl\"},{line,226}]}]}, ancestors: [osmonsup,<0.2114.0>], messagequeuelen: 0, messages: [], links: [<0.2115.0>], dictionary: [], trapexit: true, status: running, heapsize: 4185, stacksize: 29, reductions: 187637; neighbours: 2023-03-29T16:43:25.924979+08:00 [error] Supervisor: {local,osmonsup}. Context:"
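As a hedged illustration of the date-to-timestamp conversion function introduced earlier in these release notes (written here as `date_to_unix_ts/3`), a rule-engine SQL statement might look like the sketch below; the topic, format string, and payload field are assumptions, not examples from the release notes.

```sql
SELECT
  date_to_unix_ts('second', '%Y-%m-%d %H:%M:%S', payload.created_at) AS created_ts
FROM
  "t/#"
```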
},
{
"data": "Reason: {portdied,normal}. Offender: id=memsup,pid=<0.2116.0>.` Fixed error in `/api/v5/monitor_current` API endpoint that happens when some EMQX nodes are down. Prior to this fix, sometimes the request returned HTTP code 500 and the following message: `{\"code\":\"INTERNAL_ERROR\",\"message\":\"error, badarg, [{erlang,'++',[{error,nodedown},[{node,'[email protected]'}]], ...` Fixed the crash issue of the alarm system. Leverage Mnesia dirty operations and circumvent extraneous calls to enhance 'emqx_alarm' performance. Use 'emqxresourcemanager' for reactivating alarms that have already been triggered. Implement the newly developed, fail-safe 'emqxalarm' API to control the activation and deactivation of alarms, thus preventing 'emqxresource_manager' from crashing due to alarm timeouts. The alarm system is susceptible to crashing under these concurrent conditions: A significant number of resources fail, such as when bridges continuously attempt to trigger alarms due to recurring errors. The system is under an extremely high load. Fixed HTTP path handling when composing the URL for the HTTP requests in authentication and authorization modules. Avoid unnecessary URL normalization since we cannot assume that external servers treat original and normalized URLs equally. This led to bugs like . Fixed the issue that path segments could be HTTP encoded twice. Fixed a bug where external plugins could not be configured via environment variables in a lone-node cluster. Fixed a compatibility issue of limiter configuration introduced by e5.0.3 which broke the upgrade from previous versions if the `capacity` is `infinity`. In e5.0.3 we have replaced `capacity` with `burst`. After this fix, a `capacity = infinity` config will be automatically converted to equivalent `burst = 0`. Deprecated config `broker.shareddispatchack_enabled`. This was designed to avoid dispatching messages to a shared-subscription session that has the client disconnected. However, since e5.0.0, this feature is no longer helpful because the shared-subscription messages in an expired session will be redispatched to other sessions in the group. See also: <https://github.com/emqx/emqx/pull/9104> . Improved bridges API error handling. If Webhook bridge URL is not valid, the bridges API will return '400' error instead of '500'. Fixed the issue that the priority of the configuration cannot be set during the rolling upgrade. For example, when authorization is modified in e5.0.2 and then upgraded e5.0.3 through the rolling upgrade, the authorization will be restored to the default. Added the limiter API `/configs/limiter` which was deleted by mistake back. Added several fixes, enhancements, and features in Mria: Protect `mria:join/1,2` with a global lock to prevent conflicts between two nodes trying to join each other simultaneously Implement new function `mria:sync_transaction/4,3,2`, which blocks the caller until a transaction is imported to the local node (if the local node is a replicant, otherwise, it behaves exactly the same as `mria:transaction/3,2`) Optimize `mria:running_nodes/0` Optimize `mria:ro_transaction/2` when called on a replicant node"
},
{
"data": "Added the following fixes and features in Mria: Call `mriarlog:role/1` safely in mriamembership to ensure that mriamembership genserver won't crash if RPC to another node fails Add an extra field to `?rlog_sync` table to facilitate extending this functionality in future . Wrapped potentially sensitive data in `emqxconnectorhttp` if `Authorization` headers are being passed at initialization. Stopped emitting useless crash report when EMQX stops. Fixed the issue where EMQX cannot start when `sysmon.os.memcheckinterval` is disabled. Fixed an issue where the buffering layer processes could use a lot of CPU when inflight window is full. A summary has been added for all endpoints in the HTTP API documentation (accessible at \"http://<emqxhostname>:18083/api-docs\"). Health Check Interval and Auto Restart Interval now support the range from 1ms to 1 hour. Fixed an issue where the rule engine was unable to access variables exported by `FOREACH` - `DO` clause. Given a payload: `{\"date\": \"2023-05-06\", \"array\": [\"a\"]}`, as well as the following SQL statement: `FOREACH payload.date as date, payload.array as elem DO date, elem FROM \"t/#\" -- {\"date\": \"2023-05-06\", \"array\": [\"a\"]}` Prior to the fix, the `date` variable exported by `FOREACH` could not be accessed in the `DO` clause of the above SQL, resulting in the following output for the SQL statement: `[{\"elem\": \"a\",\"date\": \"undefined\"}]`. Correctness check of the rules is enforced before saving the authorization file source. Previously, Saving wrong rules could lead to EMQX restart failure. Fixed an issue where trying to get bridge info or metrics could result in a crash when a node is joining a cluster. Fixed data bridge resource update race condition. In the 'delete + create' process for EMQX resource updates, long bridge creation times could cause dashboard request timeouts. If a bridge resource update was initiated before completion of its creation, it led to an erroneous deletion from the runtime, despite being present in the config file. This fix addresses the race condition in bridge resource updates, ensuring the accurate identification and addition of new resources, and maintaining consistency between runtime and configuration file statuses. Fixed the issue where the default value of SSL certificate for Dashboard Listener was not correctly interpolated, which caused HTTPS to be inaccessible when `verify_peer` and `cacertfile` were using the default configuration. Fixed the issue where the lack of a default value for `ssloptions` in listeners results in startup failure. For example, such command(`EMQXLISTENERSWSSDEFAULTBIND='0.0.0.0:8089' ./bin/emqx console`) would have caused a crash before. TDEngine data bridge now supports \"Supertable\" and \"Create Tables Automatically\". Before this fix, an insert with a supertable in the template will fail, like this: `insert into ${clientid} using msg TAGS (${clientid}) values (${ts},${msg})`. Add missing support of the event `$events/deliverydropped` into the rule engine test API `ruletest`. Ported some time formating fixes in Rule-Engine functions from version 4.4. Fix \"internal error 500\" when getting bridge statistics page while a node is joining the cluster. Avoid double percent-decode for topic name in API `/topics/{topic}` and `/topics`. Fix a config value handling for bridge resource option `autorestartinterval`, now it can be set to `infinity`."
}
] |
{
"category": "App Definition and Development",
"file_name": "hive_catalog.md",
"project_name": "Flink",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: \"Hive Catalog\" weight: 2 type: docs aliases: /dev/table/connectors/hive/hive_catalog.html <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> Hive Metastore has evolved into the de facto metadata hub over the years in Hadoop ecosystem. Many companies have a single Hive Metastore service instance in their production to manage all of their metadata, either Hive metadata or non-Hive metadata, as the source of truth. For users who have both Hive and Flink deployments, `HiveCatalog` enables them to use Hive Metastore to manage Flink's metadata. For users who have just Flink deployment, `HiveCatalog` is the only persistent catalog provided out-of-box by Flink. Without a persistent catalog, users using have to repeatedly create meta-objects like a Kafka table in each session, which wastes a lot of time. `HiveCatalog` fills this gap by empowering users to create tables and other meta-objects only once, and reference and manage them with convenience later on across sessions. Setting up a `HiveCatalog` in Flink requires the same as those of an overall Flink-Hive integration. Setting up a `HiveCatalog` in Flink requires the same as those of an overall Flink-Hive integration. Once configured properly, `HiveCatalog` should just work out of box. Users can create Flink meta-objects with DDL, and should see them immediately afterwards. `HiveCatalog` can be used to handle two kinds of tables: Hive-compatible tables and generic tables. Hive-compatible tables are those stored in a Hive-compatible way, in terms of both metadata and data in the storage layer. Therefore, Hive-compatible tables created via Flink can be queried from Hive side. Generic tables, on the other hand, are specific to Flink. When creating generic tables with `HiveCatalog`, we're just using HMS to persist the metadata. While these tables are visible to Hive, it's unlikely Hive is able to understand the metadata. And therefore using such tables in Hive leads to undefined behavior. It's recommended to switch to to create Hive-compatible tables. If you want to create Hive-compatible tables with default dialect, make sure to set `'connector'='hive'` in your table properties, otherwise a table is considered generic by default in `HiveCatalog`. Note that the `connector` property is not required if you use Hive dialect. We will walk through a simple example here. Have a Hive Metastore running. Here, we set up a local Hive Metastore and our `hive-site.xml` file in local path `/opt/hive-conf/hive-site.xml`. 
We have some configs like the following: ```xml <configuration> <property> <name>javax.jdo.option.ConnectionURL</name> <value>jdbc:mysql://localhost/metastore?createDatabaseIfNotExist=true</value> <description>metadata is stored in a MySQL server</description> </property> <property> <name>javax.jdo.option.ConnectionDriverName</name> <value>com.mysql.jdbc.Driver</value> <description>MySQL JDBC driver class</description> </property> <property> <name>javax.jdo.option.ConnectionUserName</name> <value>...</value> <description>user name for connecting to mysql server</description> </property> <property> <name>javax.jdo.option.ConnectionPassword</name> <value>...</value> <description>password for connecting to mysql server</description> </property> <property> <name>hive.metastore.uris</name> <value>thrift://localhost:9083</value> <description>IP address (or fully-qualified domain name) and port of the metastore host</description> </property> <property>"
},
{
"data": "<value>true</value> </property> </configuration> ``` Test connection to the HMS with Hive Cli. Running some commands, we can see we have a database named `default` and there's no table in it. ```bash hive> show databases; OK default Time taken: 0.032 seconds, Fetched: 1 row(s) hive> show tables; OK Time taken: 0.028 seconds, Fetched: 0 row(s) ``` Add all Hive dependencies to `/lib` dir in Flink distribution, and create a Hive catalog in Flink SQL CLI as following: ```bash Flink SQL> CREATE CATALOG myhive WITH ( 'type' = 'hive', 'hive-conf-dir' = '/opt/hive-conf' ); ``` Bootstrap a local Kafka cluster with a topic named \"test\", and produce some simple data to the topic as tuple of name and age. ```bash localhost$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test tom,15 john,21 ``` These message can be seen by starting a Kafka console consumer. ```bash localhost$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning tom,15 john,21 ``` Create a simple Kafka table with Flink SQL DDL, and verify its schema. ```bash Flink SQL> USE CATALOG myhive; Flink SQL> CREATE TABLE mykafka (name String, age Int) WITH ( 'connector.type' = 'kafka', 'connector.version' = 'universal', 'connector.topic' = 'test', 'connector.properties.bootstrap.servers' = 'localhost:9092', 'format.type' = 'csv', 'update-mode' = 'append' ); [INFO] Table has been created. Flink SQL> DESCRIBE mykafka; root |-- name: STRING |-- age: INT ``` Verify the table is also visible to Hive via Hive Cli: ```bash hive> show tables; OK mykafka Time taken: 0.038 seconds, Fetched: 1 row(s) ``` Run a simple select query from Flink SQL Client in a Flink cluster, either standalone or yarn-session. ```bash Flink SQL> select * from mykafka; ``` Produce some more messages in the Kafka topic ```bash localhost$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning tom,15 john,21 kitty,30 amy,24 kaiky,18 ``` You should see results produced by Flink in SQL Client now, as: ```bash SQL Query Result (Table) Refresh: 1 s Page: Last of 1 name age tom 15 john 21 kitty 30 amy 24 kaiky 18 ``` `HiveCatalog` supports all Flink types for generic tables. 
For Hive-compatible tables, `HiveCatalog` needs to map Flink data types to corresponding Hive types as described in the following table: <table class=\"table table-bordered\"> <thead> <tr> <th class=\"text-center\" style=\"width: 25%\">Flink Data Type</th> <th class=\"text-center\">Hive Data Type</th> </tr> </thead> <tbody> <tr> <td class=\"text-center\">CHAR(p)</td> <td class=\"text-center\">CHAR(p)</td> </tr> <tr> <td class=\"text-center\">VARCHAR(p)</td> <td class=\"text-center\">VARCHAR(p)</td> </tr> <tr> <td class=\"text-center\">STRING</td> <td class=\"text-center\">STRING</td> </tr> <tr> <td class=\"text-center\">BOOLEAN</td> <td class=\"text-center\">BOOLEAN</td> </tr> <tr> <td class=\"text-center\">TINYINT</td> <td class=\"text-center\">TINYINT</td> </tr> <tr> <td class=\"text-center\">SMALLINT</td> <td class=\"text-center\">SMALLINT</td> </tr> <tr> <td class=\"text-center\">INT</td> <td class=\"text-center\">INT</td> </tr> <tr> <td class=\"text-center\">BIGINT</td> <td class=\"text-center\">LONG</td> </tr> <tr> <td class=\"text-center\">FLOAT</td> <td class=\"text-center\">FLOAT</td> </tr> <tr> <td class=\"text-center\">DOUBLE</td> <td class=\"text-center\">DOUBLE</td> </tr> <tr> <td class=\"text-center\">DECIMAL(p, s)</td> <td class=\"text-center\">DECIMAL(p, s)</td> </tr> <tr> <td class=\"text-center\">DATE</td> <td class=\"text-center\">DATE</td> </tr> <tr> <td class=\"text-center\">TIMESTAMP(9)</td> <td class=\"text-center\">TIMESTAMP</td> </tr> <tr> <td class=\"text-center\">BYTES</td> <td class=\"text-center\">BINARY</td> </tr> <tr> <td class=\"text-center\">ARRAY<T></td> <td class=\"text-center\">LIST<T></td> </tr> <tr> <td class=\"text-center\">MAP<K, V></td> <td class=\"text-center\">MAP<K, V></td> </tr> <tr> <td class=\"text-center\">ROW</td> <td class=\"text-center\">STRUCT</td> </tr> </tbody> </table> Something to note about the type mapping: Hive's `CHAR(p)` has a maximum length of 255 Hive's `VARCHAR(p)` has a maximum length of 65535 Hive's `MAP` only supports primitive key types while Flink's `MAP` can be any data type Hive's `UNION` type is not supported Hive's `TIMESTAMP` always has precision 9 and doesn't support other precisions. Hive UDFs, on the other hand, can process `TIMESTAMP` values with a precision <= 9. Hive doesn't support Flink's `TIMESTAMPWITHTIMEZONE`, `TIMESTAMPWITHLOCALTIME_ZONE`, and `MULTISET` Flink's `INTERVAL` type cannot be mapped to Hive `INTERVAL` type yet"
}
] |
{
"category": "App Definition and Development",
"file_name": "3.7.9.md",
"project_name": "RabbitMQ",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "RabbitMQ `3.7.9` is a maintenance release. It focuses on bug fixes and minor usability improvements. CLI tools in this release will often produce an extra line of output, as they did in `3.6.x` releases, when `-q` is not provided. Tools that parse CLI command output should use `-q --no-table-headers` to suppress all additional output meant for interactive use or avoid parsing output entirely (e.g. use the HTTP API). When upgrading to this release and upgrading Erlang to 21.0 at the same time, extra care has to be taken. Since CLI tools from RabbitMQ releases older than 3.7.7 will fail on Erlang 21, RabbitMQ must be upgraded before Erlang. See upgrade and compatibility notes if upgrading from an earlier release. See the for general documentation on upgrades and for release notes of other releases. Queue deletion loaded bindings in an inefficient way. GitHub issue: Heartbeat monitor now correctly sends heartbeats at half the negotiated timeout interval. It previously could fail to do so because it considered its own traffic to be on-the-wire activity from the peer. GitHub issue: Nodes were using a was not enabled. GitHub issue: `ERLEPMDPORT` was ignored when configured in `rabbitmq-env.conf`. GitHub issue: Proxy Protocol dependency is now compatible with Erlang/OTP 21. GitHub issue: It is now possible to configure using new style config format. GitHub issue: When a listener fails to start (bind to a server socket), error messages involve less context and are easier to read. GitHub issue: Improved error reporting for when `erl` or `erl.exe` are no in node's `PATH`. GitHub issue: 10 TLS connection acceptors are now used by default. GitHub issue: `rabbitmqctl list_` commands did not include table column headers. GitHub issue: If `RABBITMQ_NODENAME` is configured, CLI tools will use its hostname part when generating its own Erlang node name. GitHub issue: On Windows CLI tool batch scripts exited with a 0 exit code when it failed to connect to the node. Contributed by Artem Zinenko. GitHub issue: . `rabbitmqctl stop` now supports `--idempotent` that makes the command exit with a success when target node is not running. GitHub issue: `rabbitmqctl add_vhost` is now idempotent (no longer returns an error when vhost already exists) GitHub issue: Logo link now works better with a non-blank API endpoint prefix. GitHub issue: Decimal headers and argument values are now serialised to JSON"
},
{
"data": "GitHub issue: It is now possible to configure both HTTPS and HTTP listeners using new syntax that's consistent with Web STOMP and Web MQTT plugins: ``` ini management.tcp.port = 15672 management.ssl.port = 15671 management.ssl.cacertfile = /path/to/ca_certificate.pem management.ssl.certfile = /path/to/server_certificate.pem management.ssl.keyfile = /path/to/server_key.pem ``` GitHub issue: It is now possible to configure `Content-Security-Policy` (CSP) header set by the API. GitHub issue: It is now possible to configure `Strict-Transport-Policy` (HSTS) header set by the API. GitHub issue: `GET /api/nodes/{node}` endpoint aggregated data for other cluster nodes only to discard it later. GitHub issue: When `Handle.exe` is used and returns no file handle information in its output, a warning will be logged. GitHub issue: String matching queries now support multi-value results. GitHub issue: `addomain` and `aduser` are new variables available in LDAP plugin queries. They are extracted from the username when it's in `Domain\\User` format, which is typically specific to ActiveDirectory. GitHub issue: Search queries that return referrals will result in an error instead of an exception. GitHub issue: Advanced WebSocket options now can be configured. Compression is enabled by default. Compression won't be used with clients that do not support it. GitHub issues: , WebSocket `PING` frames are now ignored instead of being propagated to MQTT frame handler. GitHub issue: Advanced WebSocket options now can be configured. Compression is enabled by default. Compression won't be used with clients that do not support it. GitHub issues: EC2 API endpoint requests used an unreasonably low timeout (100 ms). The new value is 10 seconds. GitHub issue: It wasn't possible to specify Consul service tags via new style config format. GitHub issue: It wasn't possible to configure lock key prefix via new style config format. GitHub issues: Lock acquisition timeout now can be configured using `clusterformation.consul.locktimeout` as well as `clusterformation.consul.lockwait_time` (an alias), to be consistent with the Etcd implementation. GitHub issue: Lock acquisition timeout now can be configured using `clusterformation.etcd.locktimeout` as well as `clusterformation.etcd.lockwait_time` (an alias), to be consistent with the Consul implementation. GitHub issue: Throughput optimizations reduce probability of high memory consumption by `rabbit_event` processes due to event backlog accumulation. GitHub issue: Post-installation script renamed `rabbitmq.conf` to `rabbitmq-env.conf`. A long time ago `rabbitmq.conf` was used to configure environment variables (like `rabbitmq-env.conf` today) and old post-installation steps were not removed when `rabbitmq.conf` was re-adopted for new style config files. GitHub issue:"
}
] |
{
"category": "App Definition and Development",
"file_name": "markdown.en.md",
"project_name": "ShardingSphere",
"subcategory": "Database"
} | [
{
"data": "date: 2016-04-09T16:50:16+02:00 title: Markdown syntax weight: 15 {{% notice note %}} This page is a shameful copy of the great . Only difference is information about image customization (, ...) {{% /notice%}} Let's face it: Writing content for the Web is tiresome. WYSIWYG editors help alleviate this task, but they generally result in horrible code, or worse yet, ugly web pages. Markdown is a better way to write HTML, without all the complexities and ugliness that usually accompanies it. Some of the key benefits are: Markdown is simple to learn, with minimal extra characters so it's also quicker to write content. Less chance of errors when writing in markdown. Produces valid XHTML output. Keeps the content and the visual display separate, so you cannot mess up the look of your site. Write in any text editor or Markdown application you like. Markdown is a joy to use! John Gruber, the author of Markdown, puts it like this: The overriding design goal for Markdowns formatting syntax is to make it as readable as possible. The idea is that a Markdown-formatted document should be publishable as-is, as plain text, without looking like its been marked up with tags or formatting instructions. While Markdowns syntax has been influenced by several existing text-to-HTML filters, the single biggest source of inspiration for Markdowns syntax is the format of plain text email. -- John Gruber Grav ships with built-in support for and . You must enable Markdown Extra in your `system.yaml` configuration file Without further delay, let us go over the main elements of Markdown and what the resulting HTML looks like: {{% notice info %}} <i class=\"fas fa-bookmark\"></i> Bookmark this page for easy future reference! {{% /notice %}} Headings from `h1` through `h6` are constructed with a `#` for each level: ```markdown ``` Renders to: <!-- markdownlint-disable MD025 --> <!-- markdownlint-enable MD025 --> HTML: ```html <h1>h1 Heading</h1> <h2>h2 Heading</h2> <h3>h3 Heading</h3> <h4>h4 Heading</h4> <h5>h5 Heading</h5> <h6>h6 Heading</h6> ``` Comments should be HTML compatible ```html <!-- This is a comment --> ``` Comment below should NOT be seen: <!-- This is a comment --> The HTML `<hr>` element is for creating a \"thematic break\" between paragraph-level elements. In markdown, you can create a `<hr>` with any of the following: `_`: three consecutive underscores ``: three consecutive dashes ``: three consecutive asterisks renders to: _ Body copy written as normal, plain text will be wrapped with `<p></p>` tags in the rendered HTML. So this body copy: ```markdown Lorem ipsum dolor sit amet, graecis denique ei vel, at duo primis mandamus. Et legere ocurreret pri, animal tacimates complectitur ad cum. Cu eum inermis inimicus efficiendi. Labore officiis his ex, soluta officiis concludaturque ei qui, vide sensibus vim ad. ``` renders to this HTML: ```html <p>Lorem ipsum dolor sit amet, graecis denique ei vel, at duo primis mandamus. Et legere ocurreret pri, animal tacimates complectitur ad cum. Cu eum inermis inimicus efficiendi. Labore officiis his ex, soluta officiis concludaturque ei qui, vide sensibus vim ad.</p> ``` For emphasizing a snippet of text with a heavier font-weight. The following snippet of text is rendered as bold"
},
{
"data": "```markdown rendered as bold text ``` renders to: <!-- markdownlint-disable MD036 --> rendered as bold text <!-- markdownlint-enable MD036 --> and this HTML ```html <strong>rendered as bold text</strong> ``` For emphasizing a snippet of text with italics. The following snippet of text is rendered as italicized text. ```markdown rendered as italicized text ``` renders to: <!-- markdownlint-disable MD036 --> rendered as italicized text <!-- markdownlint-enable MD036 --> and this HTML: ```html <em>rendered as italicized text</em> ``` In GFM (GitHub flavored Markdown) you can do strikethroughs. ```markdown Strike through this text. ``` Which renders to: Strike through this text. HTML: ```html <del>Strike through this text.</del> ``` For quoting blocks of content from another source within your document. Add `>` before any text you want to quote. ```markdown Fusion Drive combines a hard drive with a flash storage (solid-state drive) and presents it as a single logical volume with the space of both drives combined. ``` Renders to: Fusion Drive combines a hard drive with a flash storage (solid-state drive) and presents it as a single logical volume with the space of both drives combined. and this HTML: ```html <blockquote> <p><strong>Fusion Drive</strong> combines a hard drive with a flash storage (solid-state drive) and presents it as a single logical volume with the space of both drives combined.</p> </blockquote> ``` Blockquotes can also be nested: ```markdown Donec massa lacus, ultricies a ullamcorper in, fermentum sed augue. Nunc augue augue, aliquam non hendrerit ac, commodo vel nisi. > Sed adipiscing elit vitae augue consectetur a gravida nunc vehicula. Donec auctor odio non est accumsan facilisis. Aliquam id turpis in dolor tincidunt mollis ac eu diam. Mauris sit amet ligula egestas, feugiat metus tincidunt, luctus libero. Donec congue finibus tempor. Vestibulum aliquet sollicitudin erat, ut aliquet purus posuere luctus. ``` Renders to: Donec massa lacus, ultricies a ullamcorper in, fermentum sed augue. Nunc augue augue, aliquam non hendrerit ac, commodo vel nisi. > Sed adipiscing elit vitae augue consectetur a gravida nunc vehicula. Donec auctor odio non est accumsan facilisis. Aliquam id turpis in dolor tincidunt mollis ac eu diam. Mauris sit amet ligula egestas, feugiat metus tincidunt, luctus libero. Donec congue finibus tempor. Vestibulum aliquet sollicitudin erat, ut aliquet purus posuere luctus. {{% notice note %}} The old mechanism for notices overriding the block quote syntax (`>>>`) has been deprecated. Notices are now handled via a dedicated plugin called {{% /notice %}} A list of items in which the order of the items does not explicitly"
},
{
"data": "You may use any of the following symbols to denote bullets for each list item: ```markdown valid bullet valid bullet valid bullet ``` For example ```markdown Lorem ipsum dolor sit amet Consectetur adipiscing elit Integer molestie lorem at massa Facilisis in pretium nisl aliquet Nulla volutpat aliquam velit Phasellus iaculis neque Purus sodales ultricies Vestibulum laoreet porttitor sem Ac tristique libero volutpat at Faucibus porta lacus fringilla vel Aenean sit amet erat nunc Eget porttitor lorem ``` Renders to: <!-- markdownlint-disable MD004 --> Lorem ipsum dolor sit amet Consectetur adipiscing elit Integer molestie lorem at massa Facilisis in pretium nisl aliquet Nulla volutpat aliquam velit Phasellus iaculis neque Purus sodales ultricies Vestibulum laoreet porttitor sem Ac tristique libero volutpat at Faucibus porta lacus fringilla vel Aenean sit amet erat nunc Eget porttitor lorem <!-- markdownlint-enable MD004 --> And this HTML ```html <ul> <li>Lorem ipsum dolor sit amet</li> <li>Consectetur adipiscing elit</li> <li>Integer molestie lorem at massa</li> <li>Facilisis in pretium nisl aliquet</li> <li>Nulla volutpat aliquam velit <ul> <li>Phasellus iaculis neque</li> <li>Purus sodales ultricies</li> <li>Vestibulum laoreet porttitor sem</li> <li>Ac tristique libero volutpat at</li> </ul> </li> <li>Faucibus porta lacus fringilla vel</li> <li>Aenean sit amet erat nunc</li> <li>Eget porttitor lorem</li> </ul> ``` A list of items in which the order of items does explicitly matter. ```markdown Lorem ipsum dolor sit amet Consectetur adipiscing elit Integer molestie lorem at massa Facilisis in pretium nisl aliquet Nulla volutpat aliquam velit Faucibus porta lacus fringilla vel Aenean sit amet erat nunc Eget porttitor lorem ``` Renders to: Lorem ipsum dolor sit amet Consectetur adipiscing elit Integer molestie lorem at massa Facilisis in pretium nisl aliquet Nulla volutpat aliquam velit Faucibus porta lacus fringilla vel Aenean sit amet erat nunc Eget porttitor lorem And this HTML: ```html <ol> <li>Lorem ipsum dolor sit amet</li> <li>Consectetur adipiscing elit</li> <li>Integer molestie lorem at massa</li> <li>Facilisis in pretium nisl aliquet</li> <li>Nulla volutpat aliquam velit</li> <li>Faucibus porta lacus fringilla vel</li> <li>Aenean sit amet erat nunc</li> <li>Eget porttitor lorem</li> </ol> ``` TIP: If you just use `1.` for each number, Markdown will automatically number each item. For example: ```markdown Lorem ipsum dolor sit amet Consectetur adipiscing elit Integer molestie lorem at massa Facilisis in pretium nisl aliquet Nulla volutpat aliquam velit Faucibus porta lacus fringilla vel Aenean sit amet erat nunc Eget porttitor lorem ``` Renders to: Lorem ipsum dolor sit amet Consectetur adipiscing elit Integer molestie lorem at massa Facilisis in pretium nisl aliquet Nulla volutpat aliquam velit Faucibus porta lacus fringilla vel Aenean sit amet erat nunc Eget porttitor lorem Wrap inline snippets of code with `` ` ``. ```markdown In this example, `<section></section>` should be wrapped as code. ``` Renders to: In this example, `<section></section>` should be wrapped as code. 
HTML: ```html <p>In this example, <code><section></section></code> should be wrapped as <strong>code</strong>.</p> ``` Or indent several lines of code by at least two spaces, as in: ```markdown // Some comments line 1 of code line 2 of code line 3 of code ``` Renders to: <!-- markdownlint-disable MD046 --> // Some comments line 1 of code line 2 of code line 3 of code <!-- markdownlint-enable MD046 --> HTML: ```html <pre> <code> // Some comments line 1 of code line 2 of code line 3 of code </code> </pre> ``` Use \"fences\" ```` ``` ```` to block in multiple lines of code. ```markdown Sample text here... ``` HTML: ```html <pre> <code>Sample text here...</code> </pre> ``` GFM, or \"GitHub Flavored Markdown\" also supports syntax highlighting. To activate it, simply add the file extension of the language you want to use directly after the first code \"fence\", ` ```js `, and syntax highlighting will automatically be applied in the rendered HTML. See for additional documentation. For example, to apply syntax highlighting to JavaScript code: ```plaintext ```js grunt.initConfig({ assemble: { options: { assets: 'docs/assets', data: 'src/data/*.{json,yml}', helpers: 'src/custom-helpers.js', partials: ['src/partials//*.{hbs,md}'] }, pages: { options: { layout: 'default.hbs' }, files: { './': ['src/templates/pages/index.hbs'] } } } }; ``` ``` Renders to: ```js grunt.initConfig({ assemble: { options: { assets: 'docs/assets', data: 'src/data/*.{json,yml}', helpers: 'src/custom-helpers.js', partials: ['src/partials//*.{hbs,md}'] }, pages: { options: { layout: 'default.hbs' }, files: { './':"
},
{
"data": "} } } }; ``` Tables are created by adding pipes as dividers between each cell, and by adding a line of dashes (also separated by bars) beneath the header. Note that the pipes do not need to be vertically aligned. ```markdown | Option | Description | | | -- | | data | path to data files to supply the data that will be passed into templates. | | engine | engine to be used for processing templates. Handlebars is the default. | | ext | extension to be used for dest files. | ``` Renders to: | Option | Description | | | -- | | data | path to data files to supply the data that will be passed into templates. | | engine | engine to be used for processing templates. Handlebars is the default. | | ext | extension to be used for dest files. | And this HTML: ```html <table> <tr> <th>Option</th> <th>Description</th> </tr> <tr> <td>data</td> <td>path to data files to supply the data that will be passed into templates.</td> </tr> <tr> <td>engine</td> <td>engine to be used for processing templates. Handlebars is the default.</td> </tr> <tr> <td>ext</td> <td>extension to be used for dest files.</td> </tr> </table> ``` Adding a colon on the right side of the dashes below any heading will right align text for that column. ```markdown | Option | Description | | :| --:| | data | path to data files to supply the data that will be passed into templates. | | engine | engine to be used for processing templates. Handlebars is the default. | | ext | extension to be used for dest files. | ``` | Option | Description | | :| --:| | data | path to data files to supply the data that will be passed into templates. | | engine | engine to be used for processing templates. Handlebars is the default. | | ext | extension to be used for dest files. | ```markdown ``` Renders to (hover over the link, there is no tooltip): HTML: ```html <a href=\"http://assemble.io\">Assemble</a> ``` ```markdown ``` Renders to (hover over the link, there should be a tooltip): HTML: ```html <a href=\"https://github.com/upstage/\" title=\"Visit Upstage!\">Upstage</a> ``` Named anchors enable you to jump to the specified anchor point on the same page. For example, each of these chapters: ```markdown * ``` will jump to these sections: ```markdown Content for chapter one. Content for chapter one. Content for chapter one. ``` NOTE that specific placement of the anchor tag seems to be arbitrary. They are placed inline here since it seems to be unobtrusive, and it works. Images have a similar syntax to links but include a preceding exclamation point. ```markdown ``` or ```markdown ``` Like links, Images also have a footnote style syntax ```markdown ! ``` ! With a reference later in the document defining the URL location: Add HTTP parameters `width` and/or `height` to the link image to resize the image. Values are CSS values (default is `auto`). ```markdown ``` ```markdown ``` ```markdown ``` Add a HTTP `classes` parameter to the link image to add CSS classes. `shadow`and `border` are available but you could define other ones. ```markdown ``` ```markdown ``` ```markdown ``` Add a HTTP `featherlight` parameter to the link image to disable lightbox. By default lightbox is enabled using the featherlight.js plugin. You can disable this by defining `featherlight` to `false`. ```markdown ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "CHANGELOG.2.7.2.md",
"project_name": "Apache Hadoop",
"subcategory": "Database"
} | [
{
"data": "<! --> | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Followup fixes after YARN-2019 regarding RM behavior when state-store error occurs | Major | . | Jian He | Jian He | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Rolling upgrade is restoring blocks from trash multiple times | Major | datanode | Nathan Roberts | Keisuke Ogiwara | | | Display count of nodes blacklisted by apps in the web UI | Major | capacityscheduler, resourcemanager | Varun Vasudev | Varun Vasudev | | | Implement listLocatedStatus for ViewFileSystem to speed up split calculation | Blocker | fs | Gera Shegalov | Gera Shegalov | | | Allow appending to existing SequenceFiles | Major | io | Stephen Rose | Kanaka Kumar Avvaru | | | Block scanner INFO message is spamming logs | Major | datanode | Yongjun Zhang | Yongjun Zhang | | | Optimize datanode writes for small writes and flushes | Critical | . | Kihwal Lee | Kihwal Lee | | | Upgrade Tomcat dependency to 6.0.44. | Major | build | Chris Nauroth | Chris Nauroth | | | YARN architecture document needs updating | Major | documentation | Allen Wittenauer | Brahma Reddy Battula | | | When the DFSClient lease cannot be renewed, abort open-for-write files rather than the entire DFSClient | Major | . | Ming Ma | Ming Ma | | | Configurably turn off the saving of container info in Generic AHS | Major | timelineserver, yarn | Eric Payne | Eric Payne | | | Skip unit tests based on maven profile rather than NativeCodeLoader.isNativeCodeLoaded | Minor | test | Masatake Iwasaki | Masatake Iwasaki | | | Trash documentation should describe its directory structure and configurations | Minor | documentation | Suman Sehgal | Weiwei Yang | | | Allow NN to startup if there are files having a lease but are not under construction | Minor | namenode | Tsz Wo Nicholas Sze | Jing Zhao | | | AccessControlList should avoid calling getGroupNames in isUserInList with empty groups. | Major | security | zhihai xu | zhihai xu | | | Remove duplicate close for LogWriter in AppLogAggregatorImpl#uploadLogsForContainers | Minor | nodemanager | zhihai xu | zhihai xu | | | For better error recovery, check if the directory exists before using it for localization. | Major | nodemanager | zhihai xu | zhihai xu | | | HdfsServerConstants#ReplicaState#getState should avoid calling values() since it creates a temporary array | Major | performance | Staffan Friberg | Staffan Friberg | | | Recommission a datanode with 500k blocks may pause NN for 30 seconds | Major | namenode | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | | Log slow name resolutions | Major |"
},
{
"data": "| Sidharta Seethana | Sidharta Seethana | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Duplicate surefire plugin config in hadoop-common | Major | test | Andrey Klochkov | Andrey Klochkov | | | BlockManager should remove a block from excessReplicateMap and decrement ExcessBlocks metric when the block is removed | Critical | namenode | Akira Ajisaka | Akira Ajisaka | | | Allow better control of getContentSummary | Major | . | Kihwal Lee | Kihwal Lee | | | MiniYarnCluster should turn on timeline service if configured to do so | Major | . | Mit Desai | Mit Desai | | | Incorrect \"nodes in service\" metrics caused all writes to fail | Major | . | Ming Ma | Ming Ma | | | Change \"DFSInputStream has been closed already\" message to debug log level | Minor | hdfs-client | Charles Lamb | Charles Lamb | | | HarFs incorrectly declared as requiring an authority | Critical | fs | Gera Shegalov | Brahma Reddy Battula | | | Reduce cannot use more than 2G memory for the final merge | Major | mrv2 | stanley shi | Gera Shegalov | | | setStoragePolicy with folder behavior is different after cluster restart | Major | . | Peter Shi | Surendra Singh Lilhore | | | HistoryFileManager should check whether summaryFile exists to avoid FileNotFoundException causing HistoryFileInfo into MOVE\\_FAILED state | Minor | jobhistoryserver | zhihai xu | zhihai xu | | | hdfs crypto class not found in Windows | Critical | scripts | Sumana Sathish | Anu Engineer | | | Avoid retry cache collision when Standby NameNode loading edits | Critical | namenode | Carrey Zhan | Ming Ma | | | JHS sorting on state column not working in webUi | Minor | jobhistoryserver | Bibin A Chundatt | zhihai xu | | | Should use equals when compare Resource in RMNodeImpl#ReconnectNodeTransition | Minor | resourcemanager | zhihai xu | zhihai xu | | | Correct HTTP method in WebHDFS document | Major | documentation | Akira Ajisaka | Brahma Reddy Battula | | | Two RMNodes for the same NodeId are used in RM sometimes after NM is reconnected. | Major | resourcemanager | zhihai xu | zhihai xu | | | org.apache.hadoop.mapred.LineRecordReader does not handle multibyte record delimiters well | Critical | . | Kris Geusebroek | Akira Ajisaka | | | Error handling in snappy decompressor throws invalid exceptions | Major | io, native | Todd Lipcon | Matt Foley | | | Preserve compatibility of ClientProtocol#rollingUpgrade after finalization | Critical | rolling upgrades | Andrew Wang | Andrew Wang | | | Several NPEs when deleting local files on NM recovery | Major | nodemanager | Karthik Kambatla | Varun Saxena | | | Prevent processing preemption events on the main RM dispatcher | Major | resourcemanager, scheduler | Jason Lowe | Varun Saxena | | | ShuffleHandler passes wrong \"base\" parameter to getMapOutputInfo if mapId is not in the"
},
{
"data": "| Major | mrv2, nodemanager | zhihai xu | zhihai xu | | | ActiveStandbyElector shouldn't call monitorLockNodeAsync multiple times | Major | ha | zhihai xu | zhihai xu | | | [JDK8] 'mvn site' fails | Major | api, site | Akira Ajisaka | Brahma Reddy Battula | | | TestShuffleHandler#testGetMapOutputInfo is failing | Major | test | Devaraj K | zhihai xu | | | Bzip2Factory is not thread safe | Major | io | Jason Lowe | Brahma Reddy Battula | | | RawLocalFileSystem.listStatus() returns null for UNIX pipefile | Critical | . | Haohui Mai | Kanaka Kumar Avvaru | | | Scheduler must re-request container resources when RMContainer transitions from ALLOCATED to KILLED | Critical | capacityscheduler, fairscheduler, resourcemanager | Peng Zhang | Peng Zhang | | | Application History Server UI NPEs when accessing apps run after RM restart | Major | timelineserver | Eric Payne | Eric Payne | | | AsyncDispatcher can hang while stopping if it is configured for draining events on stop | Critical | . | Varun Saxena | Varun Saxena | | | Retrospect on decision of making RM crashed if any exception throw in ZKRMStateStore | Critical | . | Junping Du | Jian He | | | Inconsistent metrics: number of missing blocks with replication factor 1 not properly cleared | Major | . | Zhe Zhang | Zhe Zhang | | | Fetch the application report from the AHS if the RM does not know about it | Major | . | Mit Desai | Mit Desai | | | ContainerLogsUtils#getContainerLogFile fails to read container log files from full disks. | Critical | nodemanager | zhihai xu | zhihai xu | | | AsyncDispatcher may overloaded with RMAppNodeUpdateEvent when Node is connected/disconnected | Critical | resourcemanager | Rohith Sharma K S | Bibin A Chundatt | | | VolumeScanner thread exits with exception if there is no block pool to be scanned but there are suspicious blocks | Major | datanode | Colin P. McCabe | Colin P. McCabe | | | Applications using FileContext fail with the default file system configured to be wasb/s3/etc. | Blocker | fs | Chris Nauroth | Chris Nauroth | | | MetricsSinkAdapter hangs when being stopped | Critical | . | Jian He | Brahma Reddy Battula | | | RM hangs on draining events | Major | . | Jian He | Jian He | | | Quota by storage type usage incorrectly initialized upon namenode restart | Major | namenode | Kihwal Lee | Xiaoyu Yao | | | Completed container whose app is finished is not removed from NMStateStore | Major | . | Jun Gong | Jun Gong | | | ClientRMService getApplications has high scheduler lock contention | Major | resourcemanager | Jason Lowe | Jason Lowe | | | HDFS concat should keep srcs order | Blocker | . | Yong Zhang | Yong Zhang | | | AM may fail instead of retrying if RM shuts down during the allocate call | Critical | . | Anubhav Dhoot | Anubhav Dhoot | | | HDFS architecture documentation of version"
},
{
"data": "is outdated about append write support | Major | documentation | Hong Dai Thanh | Ajith S | | | Memory leak in ResourceManager with SIMPLE mode | Critical | resourcemanager | mujunchao | mujunchao | | | Enable optimized block reports | Major | . | Rushabh S Shah | Daryn Sharp | | | The remaining space check in BlockPlacementPolicyDefault is flawed | Critical | . | Kihwal Lee | Kihwal Lee | | | MapReduce doesn't set the HADOOP\\_CLASSPATH for jar lib in distributed cache. | Critical | . | Junping Du | Junping Du | | | RMNode transitioned from RUNNING to REBOOTED because its response id had not been reset synchronously | Major | resourcemanager | Jun Gong | Jun Gong | | | Add a unit test for INotify functionality across a layout version upgrade | Major | namenode | Zhe Zhang | Zhe Zhang | | | NameNode refresh doesn't remove DataNodes that are no longer in the allowed list | Major | datanode, namenode | Daniel Templeton | Daniel Templeton | | | hadoop fs -getmerge doc is wrong | Major | documentation | Daniel Templeton | Jagadesh Kiran N | | | BufferedOutputStream in FileUtil#unpackEntries() should be closed in finally block | Minor | util | Ted Yu | Kiran Kumar M R | | | Flaw in registration bookeeping can make DN die on reconnect | Critical | . | Kihwal Lee | Kihwal Lee | | | Remove unimplemented option for \\`hadoop fs -ls\\` from document in branch-2.7 | Major | documentation, fs | Akira Ajisaka | Akira Ajisaka | | | Interrupted exception can occur when Client#stop is called | Minor | . | Oleg Zhurakousky | Kuhu Shukla | | | RM WebServices missing scheme for appattempts logLinks | Major | . | Jonathan Eagles | Jonathan Eagles | | | Capacity Scheduler headroom for DRF is wrong | Major | capacityscheduler | Chang Li | Chang Li | | | Stack trace is missing when error occurs in client protocol provider's constructor | Major | client | Chang Li | Chang Li | | | App local logs are leaked if log aggregation fails to initialize for the app | Major | log-aggregation, nodemanager | Jason Lowe | Jason Lowe | | | dfsadmin -metasave prints \"NaN\" for cache used% | Major | . | Archana T | Brahma Reddy Battula | | | ShuffleHandler can possibly exhaust nodemanager file descriptors | Major | mrv2, nodemanager | Nathan Roberts | Kuhu Shukla | | | Update document for the Storage policy name | Minor | documentation | J.Andreina | J.Andreina | | | MapReduce AM should have java.io.tmpdir=./tmp to be consistent with tasks | Major | mr-am | Jason Lowe | Naganarasimha G R | | | LineRecordReader may give incomplete record and wrong position/key information for uncompressed input"
},
{
"data": "| Critical | mrv2 | zhihai xu | zhihai xu | | | Task attempts that fail from the ASSIGNED state can disappear | Major | mr-am | Jason Lowe | Chang Li | | | FairScheduler: ContinuousSchedulingThread can fail to shutdown | Critical | fairscheduler | zhihai xu | zhihai xu | | | Doc updation for commands in HDFS Federation | Minor | documentation | J.Andreina | J.Andreina | | | WebAppProxyServlet should not redirect to RM page if AHS is enabled | Major | . | Mit Desai | Mit Desai | | | ApplicationHistoryServer reverses the order of the filters it gets | Major | timelineserver | Mit Desai | Mit Desai | | | Transfer failure during pipeline recovery causes permanent write failures | Critical | . | Kihwal Lee | Kihwal Lee | | | AsyncDispatcher exit with NPE on TaskAttemptImpl#sendJHStartEventForAssignedFailTask | Critical | . | Bibin A Chundatt | Bibin A Chundatt | | | AMLauncher does not retry on failures when talking to NM | Critical | resourcemanager | Anubhav Dhoot | Anubhav Dhoot | | | Fix wrong value of JOB\\_FINISHED event in JobHistoryEventHandler | Major | . | Shinichi Yamashita | Shinichi Yamashita | | | hadoop-project declares duplicate, conflicting curator dependencies | Minor | build | Steve Loughran | Rakesh R | | | ContainerMetrics unregisters during getMetrics and leads to ConcurrentModificationException | Major | nodemanager | Jason Lowe | zhihai xu | | | RMStateStore FENCED state doesn't work due to updateFencedState called by stateMachine.doTransition | Critical | resourcemanager | zhihai xu | zhihai xu | | | Slow datanode I/O can cause a wrong node to be marked bad | Critical | . | Kihwal Lee | Kihwal Lee | | | Incorrect javadoc in WritableUtils.java | Minor | documentation | Martin Petricek | Jagadesh Kiran N | | | Delayed rolling upgrade finalization can cause heartbeat expiration and write failures | Critical | . | Kihwal Lee | Walter Su | | | Reading small file (\\< 512 bytes) that is open for append fails due to incorrect checksum | Blocker | . | Bogdan Raducanu | Jing Zhao | | | Interrupted client may try to fail-over and retry | Major | ipc | Kihwal Lee | Kihwal Lee | | | 2.7 RM app page is broken | Blocker | . | Chang Li | Chang Li | | | ZKRMStateStore shouldn't create new session without occurrance of SESSIONEXPIED | Blocker | resourcemanager | Bibin A Chundatt | Varun Saxena | | | Set SO\\_KEEPALIVE on shuffle connections | Major | mrv2, nodemanager | Nathan Roberts | Chang Li | | | ACLs on root directory may be lost after NN restart | Critical | namenode | Xiao Chen | Xiao Chen | | | RM crashes with NPE if leaf queue becomes parent queue during restart | Major | capacityscheduler, resourcemanager | Jason Lowe | Varun Saxena | | | CORS support for ResourceManager REST API | Major | . | Prakash Ramachandran | Varun Vasudev | | | Slow delegation token renewal can severely prolong RM recovery | Major | resourcemanager | Jason Lowe | Sunil Govindan | | | DFSClient#callAppend() is not backward compatible for slightly older NameNodes | Blocker |"
},
{
"data": "| Tony Wu | Tony Wu | | | Delayed heartbeat processing causes storm of subsequent heartbeats | Major | datanode | Chris Nauroth | Arpit Agarwal | | | Document fsck -blockId and -storagepolicy options in branch-2.7 | Major | documentation | Akira Ajisaka | Akira Ajisaka | | | Replication violates block placement policy. | Blocker | namenode | Rushabh S Shah | Rushabh S Shah | | | Race condition in MiniMRYarnCluster when getting history server address | Major | . | Jian He | Jian He | | | TestSubmitApplicationWithRMHA fails on branch-2.7 and branch-2.6 as some of the test cases time out | Major | . | Varun Saxena | Varun Saxena | | | TestJobHistoryEventHandler fails as AHS in MiniYarnCluster no longer binds to default port 8188 | Major | . | Varun Saxena | Varun Saxena | | | Memory leak for HistoryFileManager.getJobSummary() | Critical | jobhistoryserver | Junping Du | Junping Du | | | DistCp has incorrect chunkFilePath for multiple jobs when strategy is dynamic | Major | distcp | Kuhu Shukla | Kuhu Shukla | | | Incessant retries if NoAuthException is thrown by Zookeeper in non HA mode | Major | resourcemanager | Varun Saxena | Varun Saxena | | | Fix TestDistributedShell timeout as AHS in MiniYarnCluster no longer binds to default port 8188 | Major | . | MENG DING | MENG DING | | | Make DataStreamer#block thread safe and verify genStamp in commitBlock | Critical | . | Chang Li | Chang Li | | | RM fail with noAuth error if switched from failover mode to non-failover mode | Major | resourcemanager | Jian He | Varun Saxena | | | [Branch-2] there are duplicate dependency definitions in pom's | Major | build | Sangjin Lee | Sangjin Lee | | | [Branch-2] Setting HADOOP\\_HOME explicitly should be allowed | Blocker | scripts | Karthik Kambatla | Karthik Kambatla | | | Fix typo of property name in yarn-default.xml | Major | documentation | Anthony Rojas | Anthony Rojas | | | TestMRTimelineEventHandling fails | Major | test | Sangjin Lee | Sangjin Lee | | | Public resource localization fails with NPE | Blocker | nodemanager | Jason Lowe | Jason Lowe | | | getContentSummary() on standby should throw StandbyException | Critical | . | Brahma Reddy Battula | Brahma Reddy Battula | | | DistributedFileSystem#concat fails if the target path is relative. | Major | hdfs-client | Kazuho Fujii | Kazuho Fujii | | | ApplicationHistoryServer binds to default port 8188 in MiniYARNCluster | Critical | timelineserver | Hitesh Shah | Vinod Kumar Vavilapalli | | | Bump up commons-collections version to 3.2.2 to address a security flaw | Blocker | build, security | Wei-Chiu Chuang | Wei-Chiu Chuang | | | NMs reconnecting with changed capabilities can lead to wrong cluster resource calculations | Critical | resourcemanager | Varun Vasudev | Varun Vasudev | | | \"Total megabyte-seconds\" in job counters is slightly misleading | Minor |"
},
{
"data": "| Nathan Roberts | Nathan Roberts | | | FileSystemNodeLabelStore should check for root dir existence on startup | Major | resourcemanager | Jason Lowe | Kuhu Shukla | | | hdfs and nfs builds broken on -missing compile-time dependency on netty | Major | nfs | Konstantin Boudnik | Tom Zeng | | | multibyte delimiters with LineRecordReader cause duplicate records | Major | mrv1, mrv2 | Dustin Cote | Wilfred Spiegelenburg | | | Rollingupgrade finalization is not backward compatible | Blocker | . | Kihwal Lee | Kihwal Lee | | | Encryption zone on root not loaded from fsimage after NN restart | Critical | . | Xiao Chen | Xiao Chen | | | DFSClient deadlock when close file and failed to renew lease | Blocker | hdfs-client | DENG FEI | Brahma Reddy Battula | | | ZKRMStateStore.syncInternal shouldn't wait for sync completion for avoiding blocking ZK's event thread | Blocker | . | Tsuyoshi Ozawa | Tsuyoshi Ozawa | | | Fix deadlock in RMAppImpl | Blocker | . | Yesha Vora | Jian He | | | NodeManager Disk Checker parameter documentation is not correct | Minor | documentation, nodemanager | Takashi Ohnishi | Weiwei Yang | | | Datanode may deadlock while handling a bad volume | Blocker | . | Kihwal Lee | Walter Su | | | Reduce client failures during datanode restart | Major | . | Kihwal Lee | Kihwal Lee | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | [JDK 8] TestClientRMService.testGetLabelsToNodes fails | Major | test | Robert Kanter | Robert Kanter | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | getTransferredContainers can be a bottleneck during AM registration | Major | scheduler | Jason Lowe | Sunil Govindan | | | ATS Web Performance issue at load time when large number of jobs | Major | resourcemanager, webapp, yarn | Xuan Gong | Xuan Gong | | | Fixed the typo with the configuration name: APPLICATION\\HISTORY\\PREFIX\\MAX\\APPS | Major | resourcemanager, webapp, yarn | Xuan Gong | Xuan Gong | | | Allow jobs to be submitted to reservation that is active but does not have any allocations | Major | capacityscheduler, fairscheduler, resourcemanager | Subru Krishnan | Subru Krishnan | | | RM HA UI redirection needs to be fixed when both RMs are in standby mode | Major | resourcemanager | Xuan Gong | Xuan Gong | | | Add documentation for node labels feature | Major | documentation | Gururaj Shetty | Wangda Tan | | | Both RM in active state when Admin#transitionToActive failure from refeshAll() | Critical | resourcemanager | Bibin A Chundatt | Bibin A Chundatt | | | RM should print alert messages if Zookeeper and Resourcemanager gets connection issue | Critical | yarn | Yesha Vora | Xuan Gong | | | Host framework UIs in YARN for use with the ATS | Major | timelineserver | Jonathan Eagles | Jonathan Eagles | | | Killing a container that is localizing can orphan resources in the DOWNLOADING state | Major | nodemanager | Jason Lowe | Varun Saxena |"
}
] |
{
"category": "App Definition and Development",
"file_name": "29-changes.md",
"project_name": "TDengine",
"subcategory": "Database"
} | [
{
"data": "title: Changes in TDengine 3.0 sidebar_label: Changes in TDengine 3.0 description: This document describes how TDengine SQL has changed in version 3.0 compared with previous versions. | # | Element | <div style={{width: 60}}>Change</div> | Description | | - | :- | :-- | :- | | 1 | VARCHAR | Added | Alias of BINARY. | 2 | TIMESTAMP literal | Added | TIMESTAMP 'timestamp format' syntax now supported. | 3 | ROWTS pseudocolumn | Added | Indicates the primary key. Alias of C0. | 4 | _IROWTS pseudocolumn | Added | Used to retrieve timestamps with INTERP function. | 5 | INFORMATION_SCHEMA | Added | Database for system metadata containing all schema definitions | 6 | PERFORMANCE_SCHEMA | Added | Database for system performance information. | 7 | Connection queries | Deprecated | Connection queries are no longer supported. The syntax and interfaces are deprecated. | 8 | Mixed operations | Enhanced | Mixing scalar and vector operations in queries has been enhanced and is supported in all SELECT clauses. | 9 | Tag operations | Added | Tag columns can be used in queries and clauses like data columns. | 10 | Timeline clauses and time functions in supertables | Enhanced | When PARTITION BY is not used, data in supertables is merged into a single timeline. | 11 | GEOMETRY | Added | Geometry The following data types can be used in the schema for standard tables. | # | Statement | <div style={{width: 60}}>Change</div> | Description | | - | :- | :-- | :- | | 1 | ALTER ACCOUNT | Deprecated| This Enterprise Edition-only statement has been removed. It returns the error \"This statement is no longer supported.\" | 2 | ALTER ALL DNODES | Added | Modifies the configuration of all dnodes. | 3 | ALTER DATABASE | Modified | Deprecated<ul><li>QUORUM: Specified the required number of confirmations. TDengine 3.0 provides strict consistency by default and doesn't allow to change to weak consistency. </li><li>BLOCKS: Specified the memory blocks used by each vnode. BUFFER is now used to specify the size of the write cache pool for each vnode. </li><li>UPDATE: Specified whether update operations were supported. All databases now support updating data in certain columns. </li><li>CACHELAST: Specified how to cache the newest row of data. CACHEMODEL now replaces CACHELAST. </li><li>COMP: Cannot be modified. <br/>Added</li><li>CACHEMODEL: Specifies whether to cache the latest subtable data. </li><li>CACHESIZE: Specifies the size of the cache for the newest subtable data. </li><li>WALFSYNCPERIOD: Replaces the FSYNC parameter. </li><li>WALLEVEL: Replaces the WAL parameter. </li><li>WALRETENTIONPERIOD: specifies the time after which WAL files are deleted. This parameter is used for data subscription. </li><li>WALRETENTION_SIZE: specifies the size at which WAL files are deleted. This parameter is used for data subscription. <br/>Modified</li><li>REPLICA: Cannot be modified. </li><li>KEEP: Now supports units. </li></ul> | 4 | ALTER STABLE | Modified | Deprecated<ul><li>CHANGE TAG: Modified the name of a tag. Replaced by RENAME TAG. <br/>Added</li><li>RENAME TAG: Replaces CHANGE TAG. </li><li>COMMENT: Specifies comments for a supertable. </li></ul> | 5 | ALTER TABLE | Modified | Deprecated<ul><li>CHANGE TAG: Modified the name of a tag. Replaced by RENAME TAG. <br/>Added</li><li>RENAME TAG: Replaces CHANGE TAG. </li><li>COMMENT: Specifies comments for a standard table. </li><li>TTL: Specifies the time-to-live for a standard"
},
{
"data": "</li></ul> | 6 | ALTER USER | Modified | Deprecated<ul><li>PRIVILEGE: Specified user permissions. Replaced by GRANT and REVOKE. <br/>Added</li><li>ENABLE: Enables or disables a user. </li><li>SYSINFO: Specifies whether a user can query system information. </li></ul> | 7 | COMPACT VNODES | Not supported | Compacted the data on a vnode. Not supported. | 8 | CREATE ACCOUNT | Deprecated| This Enterprise Edition-only statement has been removed. It returns the error \"This statement is no longer supported.\" | 9 | CREATE DATABASE | Modified | Deprecated<ul><li>BLOCKS: Specified the number of blocks for each vnode. BUFFER is now used to specify the size of the write cache pool for each vnode. </li><li>CACHE: Specified the size of the memory blocks used by each vnode. BUFFER is now used to specify the size of the write cache pool for each vnode. </li><li>CACHELAST: Specified how to cache the newest row of data. CACHEMODEL now replaces CACHELAST. </li><li>DAYS: The length of time to store in a single file. Replaced by DURATION. </li><li>FSYNC: Specified the fsync interval when WAL was set to 2. Replaced by WALFSYNCPERIOD. </li><li>QUORUM: Specified the number of confirmations required. STRICT is now used to specify strong or weak consistency. </li><li>UPDATE: Specified whether update operations were supported. All databases now support updating data in certain columns. </li><li>WAL: Specified the WAL level. Replaced by WALLEVEL. <br/>Added</li><li>BUFFER: Specifies the size of the write cache pool for each vnode. </li><li>CACHEMODEL: Specifies whether to cache the latest subtable data. </li><li>CACHESIZE: Specifies the size of the cache for the newest subtable data. </li><li>DURATION: Replaces DAYS. Now supports units. </li><li>PAGES: Specifies the number of pages in the metadata storage engine cache on each vnode. </li><li>PAGESIZE: specifies the size (in KB) of each page in the metadata storage engine cache on each vnode. </li><li>RETENTIONS: Specifies the aggregation interval and retention period </li><li>STRICT: Specifies whether strong data consistency is enabled. </li><li>SINGLESTABLE: Specifies whether a database can contain multiple supertables. </li><li>VGROUPS: Specifies the initial number of vgroups when a database is created. </li><li>WALFSYNCPERIOD: Replaces the FSYNC parameter. </li><li>WALLEVEL: Replaces the WAL parameter. </li><li>WALRETENTIONPERIOD: specifies the time after which WAL files are deleted. This parameter is used for data subscription. </li><li>WALRETENTION_SIZE: specifies the size at which WAL files are deleted. This parameter is used for data subscription. <br/>Modified</li><li>KEEP: Now supports units. </li></ul> | 10 | CREATE DNODE | Modified | Now supports specifying hostname and port separately<ul><li>CREATE DNODE dnodehostname PORT port_val</li></ul> | 11 | CREATE INDEX | Added | Creates an SMA index. | 12 | CREATE MNODE | Added | Creates an mnode. | 13 | CREATE QNODE | Added | Creates a qnode. | 14 | CREATE STABLE | Modified | New parameter added<li>COMMENT: Specifies comments for the supertable. </li> | 15 | CREATE STREAM | Added | Creates a stream. | 16 | CREATE TABLE | Modified | New parameters added<ul><li>COMMENT: Specifies comments for the table </li><li>WATERMARK: Specifies the window closing time. </li><li>MAX_DELAY: Specifies the maximum delay for pushing stream processing results. </li><li>ROLLUP: Specifies aggregate functions to roll up. Rolling up a function provides downsampled results based on multiple axes. 
</li><li>SMA: Provides user-defined precomputation of aggregates based on data blocks. </li><li>TTL: Specifies the time-to-live for a standard table. </li></ul> | 17 | CREATE TOPIC | Added | Creates a"
},
{
"data": "| 18 | DROP ACCOUNT | Deprecated| This Enterprise Edition-only statement has been removed. It returns the error \"This statement is no longer supported.\" | 19 | DROP CONSUMER GROUP | Added | Deletes a consumer group. | 20 | DROP INDEX | Added | Deletes an index. | 21 | DROP MNODE | Added | Creates an mnode. | 22 | DROP QNODE | Added | Creates a qnode. | 23 | DROP STREAM | Added | Deletes a stream. | 24 | DROP TABLE | Modified | Added batch deletion syntax. | 25 | DROP TOPIC | Added | Deletes a topic. | 26 | EXPLAIN | Added | Query the execution plan of a query statement. | 27 | GRANT | Added | Grants permissions to a user. | 28 | KILL TRANSACTION | Added | Terminates an mnode transaction. | 29 | KILL STREAM | Deprecated | Terminated a continuous query. The continuous query feature has been replaced with the stream processing feature. | 31 | REVOKE | Added | Revokes permissions from a user. | 32 | SELECT | Modified | <ul><li>SELECT does not use the implicit results column. Output columns must be specified in the SELECT clause. </li><li>DISTINCT support is enhanced. In previous versions, DISTINCT only worked on the tag column and could not be used with JOIN or GROUP BY. </li><li>JOIN support is enhanced. The following are now supported after JOIN: a WHERE clause with OR, operations on multiple tables, and GROUP BY on multiple tables. </li><li>Subqueries after FROM are enhanced. Levels of nesting are no longer restricted. Subqueries can be used with UNION ALL. Other syntax restrictions are eliminated. </li><li>All scalar functions can be used after WHERE. </li><li>GROUP BY is enhanced. You can group by any scalar expression or combination thereof. </li><li>SESSION can be used on supertables. When PARTITION BY is not used, data in supertables is merged into a single timeline. </li><li>STATE_WINDOW can be used on supertables. When PARTITION BY is not used, data in supertables is merged into a single timeline. </li><li>ORDER BY is enhanced. It is no longer required to use ORDER BY and GROUP BY together. There is no longer a restriction on the number of order expressions. NULLS FIRST and NULLS LAST syntax has been added. Any expression that conforms to the ORDER BY semantics can be used. </li><li>Added PARTITION BY syntax. PARTITION BY replaces GROUP BY tags. </li></ul> | 33 | SHOW ACCOUNTS | Deprecated | This Enterprise Edition-only statement has been removed. It returns the error \"This statement is no longer supported.\" | 34 | SHOW APPS | Added | Shows all clients (such as applications) that connect to the cluster. | 35 | SHOW CONSUMERS | Added | Shows information about all active consumers in the system. | 36 | SHOW DATABASES | Modified | Only shows database names. | 37 | SHOW FUNCTIONS | Modified | Only shows UDF names. | 38 | SHOW LICENCE | Added | Alias of SHOW GRANTS. | 39 | SHOW INDEXES | Added | Shows indices that have been created. | 40 | SHOW LOCAL VARIABLES | Added | Shows the working configuration of the client. | 41 | SHOW MODULES | Deprecated | Shows information about modules installed in the"
},
{
"data": "| 42 | SHOW QNODES | Added | Shows information about qnodes in the system. | 43 | SHOW STABLES | Modified | Only shows supertable names. | 44 | SHOW STREAMS | Modified | This statement previously showed continuous queries. The continuous query feature has been replaced with the stream processing feature. This statement now shows streams that have been created. | 45 | SHOW SUBSCRIPTIONS | Added | Shows all subscriptions in the current database. | 46 | SHOW TABLES | Modified | Only shows table names. | 47 | SHOW TABLE DISTRIBUTED | Added | Shows how table data is distributed. This replaces the `SELECT blockdist() FROM { tbname | stbname }` command. | 48 | SHOW TOPICS | Added | Shows all subscribed topics in the current database. | 49 | SHOW TRANSACTIONS | Added | Shows all running transactions in the system. | 50 | SHOW DNODE VARIABLES | Added | Shows the configuration of the specified dnode. | 51 | SHOW VNODES | Not supported | Shows information about vnodes in the system. Not supported. | 52 | TRIM DATABASE | Added | Deletes data that has expired and orders the remaining data in accordance with the storage configuration. | 53 | REDISTRIBUTE VGROUP | Added | Adjust the distribution of VNODES in VGROUP. | 54 | BALANCE VGROUP | Added | Auto adjust the distribution of VNODES in VGROUP. | # | Function | <div style={{width: 60}}>Change</div> | Description | | - | :- | :-- | :- | | 1 | TWA | Added | Can be used on supertables. When PARTITION BY is not used, data in supertables is merged into a single timeline. | 2 | IRATE | Enhanced | Can be used on supertables. When PARTITION BY is not used, data in supertables is merged into a single timeline. | 3 | LEASTSQUARES | Enhanced | Can be used on supertables. | 4 | ELAPSED | Enhanced | Can be used on supertables. When PARTITION BY is not used, data in supertables is merged into a single timeline. | 5 | DIFF | Enhanced | Can be used on supertables. When PARTITION BY is not used, data in supertables is merged into a single timeline. | 6 | DERIVATIVE | Enhanced | Can be used on supertables. When PARTITION BY is not used, data in supertables is merged into a single timeline. | 7 | CSUM | Enhanced | Can be used on supertables. When PARTITION BY is not used, data in supertables is merged into a single timeline. | 8 | MAVG | Enhanced | Can be used on supertables. When PARTITION BY is not used, data in supertables is merged into a single timeline. | 9 | SAMPLE | Enhanced | Can be used on supertables. When PARTITION BY is not used, data in supertables is merged into a single timeline. | 10 | STATECOUNT | Enhanced | Can be used on supertables. When PARTITION BY is not used, data in supertables is merged into a single timeline. | 11 | STATEDURATION | Enhanced | Can be used on supertables. When PARTITION BY is not used, data in supertables is merged into a single timeline."
}
] |
{
"category": "App Definition and Development",
"file_name": "vclock_struct.md",
"project_name": "Tarantool",
"subcategory": "Database"
} | [
{
"data": "Status*: in progress Start date*: 16-05-2018 Authors*: Ilya Markov @IlyaMarkovMipt \\<[email protected]\\> Issues*: Overview of possible implementations of vector clocks in large scale replicasets. Vector clocks are used for following states(more exactly LSN) of nodes in replicasets. Currently, the clocks are implemented with static arrays with size limited by constant `VCLOCK_MAX` Indices of the array represent replica identifier in replicaset, value is LSN. In a large scale environment array is far from the best implementation in terms of time and memory consumption. The main problem here is that within large scale nodes may be added and deleted and the array may contain large gaps. So most of memory space might turn out to be useless. For example, in star topology, one replica has fully filled vclock, others have large arrays with only two valuable for them cells. The new design must address the following requirements: Minimize memory consumption within constantly changing replicaset. Fast vector clock comparison, following taking into account frequent updated nodes. As a possible solution to address the gap problem is to use a tree. The tree allocates nodes only for non-empty values. So memory usage in this case is minimized. Comparison and vclock following would take O(N), N -size of replicaset. This time complexity is the same as in implementation with static array but with worse constant. Though operations like set and get take O(logN) instead constant time in array. As we can notice vclock_get is highly used with replica ids, which are written in logs. Under assumption that number of writing replicas is less than the size of replicaset, the problem with vclock_get may be solved with some fixed size cache in front of tree, which will contain frequently replicas lsns. Another approach addressing gap problem is shifting replica id to the start of vclock array, getting rid of gaps. This idea helps avoiding gaps and simplifies comparison, setting vector clocks. On the other hand, it requires dedicated calls which follow the state of vclock and shift it, when gaps are found. Also the shift requires remapping of replica identifiers which also costs something in terms of memory and time consumption. Allocate fixed size arrays for ranges of ids and store references to them in hash/tree index. For example, we have several ranges of ids: 1-10, 65-100. Let's assume size of each array is 32. For this set, there would 3 ranges: 1-32, 65-96, 97-128. The index would contain 3 records, which could be get by 1, 65, 97 respectively. In this approach gaps are limited to the certain size, there is no need in shifting. Copying and comparison are almost the same as in approach with static size array. One more possible solution to gap problem may be lists. But, as we need to index sometimes, we can use skip-lists, which in terms of time complexity of indexing are almost the same as trees. Moreover, traversing lists is faster than trees. Bad side of the idea is that it consumes memory excessively. The most easiest to implement solution is a tree. Nevertheless, it needs an optimizations for vclock_get. The paging looks like an approach which solves the current problem with gaps and doesn't create new problems or complexities. The shifting with remapping looks the worst one to my mind, mostly because of its difficulty and generating new maintaining processes(e.g remapping) and, therefore, new possible problems. Skip-lists are just one of variations of trees, but with extra memory consumption."
}
] |
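The paging variant weighed in the vclock record above is easiest to see with a small sketch. The Python snippet below is purely illustrative and is not Tarantool's actual C implementation; it assumes a page size of 32 ids, zero-based page boundaries, and an LSN of 0 meaning "no rows seen from that replica".

```python
# Illustrative sketch of a "paged" vclock: LSNs live in fixed-size pages
# allocated on demand, so sparse replica ids do not waste memory on gaps.

PAGE_SIZE = 32  # number of replica ids covered by one page (assumption)


class PagedVclock:
    def __init__(self):
        # page index -> list of PAGE_SIZE LSNs (0 means "no rows from this replica")
        self.pages = {}

    def _locate(self, replica_id):
        return replica_id // PAGE_SIZE, replica_id % PAGE_SIZE

    def get(self, replica_id):
        page_no, slot = self._locate(replica_id)
        page = self.pages.get(page_no)
        return page[slot] if page is not None else 0

    def follow(self, replica_id, lsn):
        """Advance the LSN of one replica, allocating its page on demand."""
        page_no, slot = self._locate(replica_id)
        page = self.pages.setdefault(page_no, [0] * PAGE_SIZE)
        if lsn > page[slot]:
            page[slot] = lsn

    def compare(self, other):
        """Partial order on vclocks: -1, 0, 1, or None if incomparable."""
        le = ge = True
        for page_no in self.pages.keys() | other.pages.keys():
            a = self.pages.get(page_no, [0] * PAGE_SIZE)
            b = other.pages.get(page_no, [0] * PAGE_SIZE)
            for x, y in zip(a, b):
                if x < y:
                    ge = False
                elif x > y:
                    le = False
        if le and ge:
            return 0
        if le:
            return -1
        if ge:
            return 1
        return None


if __name__ == "__main__":
    vc = PagedVclock()
    vc.follow(1, 100)     # ids 1, 65 and 100 touch only three pages
    vc.follow(65, 7)
    vc.follow(100, 42)
    print(vc.get(65), len(vc.pages))  # -> 7 3
```

With ids 1-10 and 65-100 in use, only three pages are allocated, matching the three index records in the example above; comparison still walks every allocated page, so it remains O(N) just like the static array.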
{
"category": "App Definition and Development",
"file_name": "CHANGELOG.2.8.1.md",
"project_name": "Apache Hadoop",
"subcategory": "Database"
} | [
{
"data": "<! --> | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Root privilege escalation in experimental Docker support | Blocker | nodemanager, security | Allen Wittenauer | Varun Vasudev |"
}
] |
{
"category": "App Definition and Development",
"file_name": "5.1.1.md",
"project_name": "Siddhi",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "<p style=\"word-wrap: break-word\">Returns the results of AND operation for all the events.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` <BOOL> and(<BOOL> arg) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">arg</td> <td style=\"vertical-align: top; word-wrap: break-word\">The value that needs to be AND operation.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">BOOL</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` from cscStream#window.lengthBatch(10) select and(isFraud) as isFraudTransaction insert into alertStream; ``` <p style=\"word-wrap: break-word\">This will returns the result for AND operation of isFraud values as a boolean value for event chunk expiry by window length batch.</p> <p style=\"word-wrap: break-word\">Calculates the average for all the events.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` <DOUBLE> avg(<INT|LONG|DOUBLE|FLOAT> arg) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">arg</td> <td style=\"vertical-align: top; word-wrap: break-word\">The value that need to be averaged.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>DOUBLE<br>FLOAT</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` from fooStream#window.timeBatch select avg(temp) as avgTemp insert into barStream; ``` <p style=\"word-wrap: break-word\">avg(temp) returns the average temp value for all the events based on their arrival and expiry.</p> <p style=\"word-wrap: break-word\">Returns the count of all the events.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` <LONG> count() <LONG> count(<INT|LONG|DOUBLE|FLOAT|STRING|BOOL|OBJECT> arg) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">arg</td> <td style=\"vertical-align: top; word-wrap: break-word\">This 
function accepts one parameter. It can belong to any one of the available types.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>DOUBLE<br>FLOAT<br>STRING<br>BOOL<br>OBJECT</td> <td style=\"vertical-align: top\">Yes</td> <td style=\"vertical-align: top\">Yes</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` from fooStream#window.timeBatch(10 sec) select count() as count insert into barStream; ``` <p style=\"word-wrap: break-word\">This will return the count of all the events for time batch in 10 seconds.</p> <p style=\"word-wrap: break-word\">This returns the count of distinct occurrences for a given arg.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` <LONG> distinctCount(<INT|LONG|DOUBLE|FLOAT|STRING> arg) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">arg</td> <td style=\"vertical-align: top; word-wrap: break-word\">The object for which the number of distinct occurences needs to be counted.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>DOUBLE<br>FLOAT<br>STRING</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` from fooStream select distinctcount(pageID) as count insert into barStream; ``` <p style=\"word-wrap: break-word\">distinctcount(pageID) for the following output returns '3' when the available values are as follows.<br> \"WEBPAGE1\"<br> \"WEBPAGE1\"<br> \"WEBPAGE2\"<br> \"WEBPAGE3\"<br> \"WEBPAGE1\"<br> \"WEBPAGE2\"<br> The three distinct occurences identified are 'WEBPAGE1', 'WEBPAGE2', and 'WEBPAGE3'.</p> <p style=\"word-wrap: break-word\">Returns the maximum value for all the"
},
{
"data": "<span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` <INT|LONG|DOUBLE|FLOAT> max(<INT|LONG|DOUBLE|FLOAT> arg) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">arg</td> <td style=\"vertical-align: top; word-wrap: break-word\">The value that needs to be compared to find the maximum value.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>DOUBLE<br>FLOAT</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` from fooStream#window.timeBatch(10 sec) select max(temp) as maxTemp insert into barStream; ``` <p style=\"word-wrap: break-word\">max(temp) returns the maximum temp value recorded for all the events based on their arrival and expiry.</p> <p style=\"word-wrap: break-word\">This is the attribute aggregator to store the maximum value for a given attribute throughout the lifetime of the query regardless of any windows in-front.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` <INT|LONG|DOUBLE|FLOAT> maxForever(<INT|LONG|DOUBLE|FLOAT> arg) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">arg</td> <td style=\"vertical-align: top; word-wrap: break-word\">The value that needs to be compared to find the maximum value.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>DOUBLE<br>FLOAT</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` from inputStream select maxForever(temp) as max insert into outputStream; ``` <p style=\"word-wrap: break-word\">maxForever(temp) returns the maximum temp value recorded for all the events throughout the lifetime of the query.</p> <p style=\"word-wrap: break-word\">Returns the minimum value for all the events.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` <INT|LONG|DOUBLE|FLOAT> min(<INT|LONG|DOUBLE|FLOAT> arg) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> 
<td style=\"vertical-align: top\">arg</td> <td style=\"vertical-align: top; word-wrap: break-word\">The value that needs to be compared to find the minimum value.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>DOUBLE<br>FLOAT</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` from inputStream select min(temp) as minTemp insert into outputStream; ``` <p style=\"word-wrap: break-word\">min(temp) returns the minimum temp value recorded for all the events based on their arrival and expiry.</p> <p style=\"word-wrap: break-word\">This is the attribute aggregator to store the minimum value for a given attribute throughout the lifetime of the query regardless of any windows in-front.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` <INT|LONG|DOUBLE|FLOAT> minForever(<INT|LONG|DOUBLE|FLOAT> arg) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">arg</td> <td style=\"vertical-align: top; word-wrap: break-word\">The value that needs to be compared to find the minimum value.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>DOUBLE<br>FLOAT</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` from inputStream select minForever(temp) as max insert into outputStream; ``` <p style=\"word-wrap: break-word\">minForever(temp) returns the minimum temp value recorded for all the events throughoutthe lifetime of the query.</p> <p style=\"word-wrap: break-word\">Returns the results of OR operation for all the"
},
{
"data": "<span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` <BOOL> or(<BOOL> arg) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">arg</td> <td style=\"vertical-align: top; word-wrap: break-word\">The value that needs to be OR operation.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">BOOL</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` from cscStream#window.lengthBatch(10) select or(isFraud) as isFraudTransaction insert into alertStream; ``` <p style=\"word-wrap: break-word\">This will returns the result for OR operation of isFraud values as a boolean value for event chunk expiry by window length batch.</p> <p style=\"word-wrap: break-word\">Returns the calculated standard deviation for all the events.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` <DOUBLE> stdDev(<INT|LONG|DOUBLE|FLOAT> arg) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">arg</td> <td style=\"vertical-align: top; word-wrap: break-word\">The value that should be used to calculate the standard deviation.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>DOUBLE<br>FLOAT</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` from inputStream select stddev(temp) as stdTemp insert into outputStream; ``` <p style=\"word-wrap: break-word\">stddev(temp) returns the calculated standard deviation of temp for all the events based on their arrival and expiry.</p> <p style=\"word-wrap: break-word\">Returns the sum for all the events.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` <LONG|DOUBLE> sum(<INT|LONG|DOUBLE|FLOAT> arg) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">arg</td> <td style=\"vertical-align: top; word-wrap: break-word\">The value that needs to be summed.</td> <td style=\"vertical-align: 
top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>DOUBLE<br>FLOAT</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` from inputStream select sum(volume) as sumOfVolume insert into outputStream; ``` <p style=\"word-wrap: break-word\">This will returns the sum of volume values as a long value for each event arrival and expiry.</p> <p style=\"word-wrap: break-word\">Union multiple sets. <br> This attribute aggregator maintains a union of sets. The given input set is put into the union set and the union set is returned.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` <OBJECT> unionSet(<OBJECT> set) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">set</td> <td style=\"vertical-align: top; word-wrap: break-word\">The java.util.Set object that needs to be added into the union set.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">OBJECT</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` from stockStream select createSet(symbol) as initialSet insert into initStream from initStream#window.timeBatch(10 sec) select unionSet(initialSet) as distinctSymbols insert into distinctStockStream; ``` <p style=\"word-wrap: break-word\">distinctStockStream will return the set object which contains the distinct set of stock symbols received during a sliding window of 10 seconds.</p> <p style=\"word-wrap: break-word\">Generates a UUID (Universally Unique"
},
{
"data": "<span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` <STRING> UUID() ``` <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` from TempStream select convert(roomNo, 'string') as roomNo, temp, UUID() as messageID insert into RoomTempStream; ``` <p style=\"word-wrap: break-word\">This will converts a room number to string, introducing a message ID to each event asUUID() returns a34eec40-32c2-44fe-8075-7f4fde2e2dd8<br><br>from TempStream<br>select convert(roomNo, 'string') as roomNo, temp, UUID() as messageID<br>insert into RoomTempStream;</p> <p style=\"word-wrap: break-word\">Converts the first parameter according to the cast.to parameter. Incompatible arguments cause Class Cast exceptions if further processed. This function is used with map extension that returns attributes of the object type. You can use this function to cast the object to an accurate and concrete type.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` <INT|LONG|DOUBLE|FLOAT|STRING|BOOL|OBJECT> cast(<INT|LONG|DOUBLE|FLOAT|STRING|BOOL|OBJECT> to.be.caster, <STRING> cast.to) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">to.be.caster</td> <td style=\"vertical-align: top; word-wrap: break-word\">This specifies the attribute to be casted.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>DOUBLE<br>FLOAT<br>STRING<br>BOOL<br>OBJECT</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> <tr> <td style=\"vertical-align: top\">cast.to</td> <td style=\"vertical-align: top; word-wrap: break-word\">A string constant parameter expressing the cast to type using one of the following strings values: int, long, float, double, string, bool.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">STRING</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` from fooStream select symbol as name, cast(temp, 'double') as temp insert into barStream; ``` <p style=\"word-wrap: break-word\">This will cast the fooStream temp field value into 'double' format.</p> <p style=\"word-wrap: break-word\">Returns the value of the first input parameter that is not null, and all input parameters have to be on the same type.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` <INT|LONG|DOUBLE|FLOAT|STRING|BOOL|OBJECT> coalesce(<INT|LONG|DOUBLE|FLOAT|STRING|BOOL|OBJECT> arg, <INT|LONG|DOUBLE|FLOAT|STRING|BOOL|OBJECT> ...) 
``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">arg</td> <td style=\"vertical-align: top; word-wrap: break-word\">This function accepts one or more parameters. They can belong to any one of the available types. All the specified parameters should be of the same type.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>DOUBLE<br>FLOAT<br>STRING<br>BOOL<br>OBJECT</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` from fooStream select coalesce('123', null, '789') as value insert into barStream; ``` <p style=\"word-wrap: break-word\">This will returns first null value 123.</p> <span id=\"example-2\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 2</span> ``` from fooStream select coalesce(null, 76, 567) as value insert into barStream; ``` <p style=\"word-wrap: break-word\">This will returns first null value 76.</p> <span id=\"example-3\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 3</span> ``` from fooStream select coalesce(null, null, null) as value insert into barStream; ``` <p style=\"word-wrap: break-word\">This will returns null as there are no notnull values.</p> <p style=\"word-wrap: break-word\">Converts the first input parameter according to the convertedTo parameter.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` <INT|LONG|DOUBLE|FLOAT|STRING|BOOL> convert(<INT|LONG|DOUBLE|FLOAT|STRING|BOOL> to.be.converted, <STRING> converted.to) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align:"
},
{
"data": "<td style=\"vertical-align: top; word-wrap: break-word\">This specifies the value to be converted.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>DOUBLE<br>FLOAT<br>STRING<br>BOOL</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> <tr> <td style=\"vertical-align: top\">converted.to</td> <td style=\"vertical-align: top; word-wrap: break-word\">A string constant parameter to which type the attribute need to be converted using one of the following strings values: 'int', 'long', 'float', 'double', 'string', 'bool'.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">STRING</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` from fooStream select convert(temp, 'double') as temp insert into barStream; ``` <p style=\"word-wrap: break-word\">This will convert fooStream temp value into 'double'.</p> <span id=\"example-2\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 2</span> ``` from fooStream select convert(temp, 'int') as temp insert into barStream; ``` <p style=\"word-wrap: break-word\">This will convert fooStream temp value into 'int' (value = \"convert(45.9, 'int') returns 46\").</p> <p style=\"word-wrap: break-word\">Includes the given input parameter in a java.util.HashSet and returns the set. </p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` <OBJECT> createSet(<INT|LONG|DOUBLE|FLOAT|STRING|BOOL> input) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">input</td> <td style=\"vertical-align: top; word-wrap: break-word\">The input that needs to be added into the set.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>DOUBLE<br>FLOAT<br>STRING<br>BOOL</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` from stockStream select createSet(symbol) as initialSet insert into initStream; ``` <p style=\"word-wrap: break-word\">For every incoming stockStream event, the initStream stream will produce a set object having only one element: the symbol in the incoming stockStream.</p> <p style=\"word-wrap: break-word\">Returns the current timestamp of siddhi application in milliseconds.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` <LONG> currentTimeMillis() ``` <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" 
style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` from fooStream select symbol as name, currentTimeMillis() as eventTimestamp insert into barStream; ``` <p style=\"word-wrap: break-word\">This will extract current siddhi application timestamp.</p> <p style=\"word-wrap: break-word\">Checks if the 'attribute' parameter is null and if so returns the value of the 'default' parameter</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` <INT|LONG|DOUBLE|FLOAT|STRING|BOOL|OBJECT> default(<INT|LONG|DOUBLE|FLOAT|STRING|BOOL|OBJECT> attribute, <INT|LONG|DOUBLE|FLOAT|STRING|BOOL|OBJECT> default) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">attribute</td> <td style=\"vertical-align: top; word-wrap: break-word\">The attribute that could be null.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>DOUBLE<br>FLOAT<br>STRING<br>BOOL<br>OBJECT</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> <tr> <td style=\"vertical-align: top\">default</td> <td style=\"vertical-align: top; word-wrap: break-word\">The default value that will be used when 'attribute' parameter is null</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>DOUBLE<br>FLOAT<br>STRING<br>BOOL<br>OBJECT</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` from TempStream select default(temp, 0.0) as temp, roomNum insert into StandardTempStream; ``` <p style=\"word-wrap: break-word\">This will replace TempStream's temp attribute with default value if the temp is null.</p> <p style=\"word-wrap: break-word\">Returns the timestamp of the processed"
},
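To connect the conversion and null-handling functions documented in the preceding entry (convert(), default()), here is a minimal illustrative SiddhiQL sketch that guards a possibly-null attribute before converting it. The stream name SensorStream and its attributes are assumed for the example and do not come from the reference text.

```
-- Illustrative sketch only: SensorStream, deviceId and temp are assumed names.
define stream SensorStream (deviceId string, temp double);

@info(name = 'safeConvertQuery')
from SensorStream
-- Replace a null temp with 0.0, then convert the result to a string attribute.
select deviceId, convert(default(temp, 0.0), 'string') as tempText
insert into ReadingTextStream;
```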
{
"data": "<span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` <LONG> eventTimestamp() ``` <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` from fooStream select symbol as name, eventTimestamp() as eventTimestamp insert into barStream; ``` <p style=\"word-wrap: break-word\">This will extract current events timestamp.</p> <p style=\"word-wrap: break-word\">Evaluates the 'condition' parameter and returns value of the 'if.expression' parameter if the condition is true, or returns value of the 'else.expression' parameter if the condition is false. Here both 'if.expression' and 'else.expression' should be of the same type.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` <INT|LONG|DOUBLE|FLOAT|STRING|BOOL|OBJECT> ifThenElse(<BOOL> condition, <INT|LONG|DOUBLE|FLOAT|STRING|BOOL|OBJECT> if.expression, <INT|LONG|DOUBLE|FLOAT|STRING|BOOL|OBJECT> else.expression) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">condition</td> <td style=\"vertical-align: top; word-wrap: break-word\">This specifies the if then else condition value.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">BOOL</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> <tr> <td style=\"vertical-align: top\">if.expression</td> <td style=\"vertical-align: top; word-wrap: break-word\">This specifies the value to be returned if the value of the condition parameter is true.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>DOUBLE<br>FLOAT<br>STRING<br>BOOL<br>OBJECT</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> <tr> <td style=\"vertical-align: top\">else.expression</td> <td style=\"vertical-align: top; word-wrap: break-word\">This specifies the value to be returned if the value of the condition parameter is false.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>DOUBLE<br>FLOAT<br>STRING<br>BOOL<br>OBJECT</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` @info(name = 'query1') from sensorEventStream select sensorValue, ifThenElse(sensorValue>35,'High','Low') as status insert into outputStream; ``` <p style=\"word-wrap: break-word\">This will returns High if sensorValue = 50.</p> <span id=\"example-2\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 2</span> ``` @info(name = 'query1') from sensorEventStream select sensorValue, ifThenElse(voltage < 5, 0, 1) as status insert into outputStream; ``` <p 
style=\"word-wrap: break-word\">This will return 1 if voltage = 12.</p> <span id=\"example-3\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 3</span> ``` @info(name = 'query1') from userEventStream select userName, ifThenElse(password == 'admin', true, false) as passwordState insert into outputStream; ``` <p style=\"word-wrap: break-word\">This will return passwordState as true if password = 'admin'.</p> <p style=\"word-wrap: break-word\">Checks whether the parameter is an instance of Boolean or not.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` <BOOL> instanceOfBoolean(<INT|LONG|DOUBLE|FLOAT|STRING|BOOL|OBJECT> arg) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">arg</td> <td style=\"vertical-align: top; word-wrap: break-word\">The parameter to be checked.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>DOUBLE<br>FLOAT<br>STRING<br>BOOL<br>OBJECT</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` from fooStream select instanceOfBoolean(switchState) as state insert into barStream; ``` <p style=\"word-wrap: break-word\">This will return true if switchState holds a boolean value such as true.</p> <span id=\"example-2\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 2</span> ``` from fooStream select instanceOfBoolean(value) as state insert into barStream; ``` <p style=\"word-wrap: break-word\">If the value = 32, this will return false as the value is not an instance of Boolean.</p> <p style=\"word-wrap: break-word\">Checks whether the parameter is an instance of Double or not.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` <BOOL> instanceOfDouble(<INT|LONG|DOUBLE|FLOAT|STRING|BOOL|OBJECT> arg) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size:
},
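Since the ifThenElse() examples in the preceding entry each show a single two-way split, the following hedged sketch shows that calls can be nested for a three-way classification. The stream definition and thresholds are assumed for illustration only.

```
-- Illustrative sketch only: stream name and thresholds are assumed.
define stream SensorEventStream (sensorValue double);

@info(name = 'classifyQuery')
from SensorEventStream
-- Nested ifThenElse gives a three-way High/Medium/Low classification.
select sensorValue,
    ifThenElse(sensorValue > 50.0, 'High',
        ifThenElse(sensorValue > 35.0, 'Medium', 'Low')) as status
insert into ClassifiedStream;
```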
{
"data": "font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">arg</td> <td style=\"vertical-align: top; word-wrap: break-word\">The parameter to be checked.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>DOUBLE<br>FLOAT<br>STRING<br>BOOL<br>OBJECT</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` from fooStream select instanceOfDouble(value) as state insert into barStream; ``` <p style=\"word-wrap: break-word\">This will return true if the value field format is double ex : 56.45.</p> <span id=\"example-2\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 2</span> ``` from fooStream select instanceOfDouble(switchState) as state insert into barStream; ``` <p style=\"word-wrap: break-word\">if the switchState = true then this will returns false as the value is not an instance of the double.</p> <p style=\"word-wrap: break-word\">Checks whether the parameter is an instance of Float or not.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` <BOOL> instanceOfFloat(<INT|LONG|DOUBLE|FLOAT|STRING|BOOL|OBJECT> arg) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">arg</td> <td style=\"vertical-align: top; word-wrap: break-word\">The parameter to be checked.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>DOUBLE<br>FLOAT<br>STRING<br>BOOL<br>OBJECT</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` from fooStream select instanceOfFloat(value) as state insert into barStream; ``` <p style=\"word-wrap: break-word\">This will return true if the value field format is float ex : 56.45f.</p> <span id=\"example-2\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 2</span> ``` from fooStream select instanceOfFloat(switchState) as state insert into barStream; ``` <p style=\"word-wrap: break-word\">if the switchState = true then this will returns false as the value is an instance of the boolean not a float.</p> <p style=\"word-wrap: break-word\">Checks whether the parameter is an instance of Integer or not.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` <BOOL> instanceOfInteger(<INT|LONG|DOUBLE|FLOAT|STRING|BOOL|OBJECT> arg) ``` 
<span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">arg</td> <td style=\"vertical-align: top; word-wrap: break-word\">The parameter to be checked.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>DOUBLE<br>FLOAT<br>STRING<br>BOOL<br>OBJECT</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` from fooStream select instanceOfInteger(value) as state insert into barStream; ``` <p style=\"word-wrap: break-word\">This will return true if the value field format is integer.</p> <span id=\"example-2\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 2</span> ``` from fooStream select instanceOfInteger(switchState) as state insert into barStream; ``` <p style=\"word-wrap: break-word\">If the switchState = true, this will return false as the value is an instance of Boolean, not an integer.</p> <p style=\"word-wrap: break-word\">Checks whether the parameter is an instance of Long or not.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` <BOOL> instanceOfLong(<INT|LONG|DOUBLE|FLOAT|STRING|BOOL|OBJECT> arg) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">arg</td> <td style=\"vertical-align: top; word-wrap: break-word\">The parameter to be
},
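The instanceOf checks documented above are typically most useful inside a filter. The following is a minimal sketch, assuming a loosely typed RawReadingStream with an object attribute, that keeps only events whose reading actually holds a double.

```
-- Illustrative sketch only: RawReadingStream and reading are assumed names.
define stream RawReadingStream (deviceId string, reading object);

@info(name = 'filterDoublesQuery')
-- The filter keeps only events whose 'reading' is an instance of Double.
from RawReadingStream[instanceOfDouble(reading)]
select deviceId, reading
insert into DoubleReadingStream;
```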
{
"data": "<td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>DOUBLE<br>FLOAT<br>STRING<br>BOOL<br>OBJECT</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` from fooStream select instanceOfLong(value) as state insert into barStream; ``` <p style=\"word-wrap: break-word\">This will return true if the value field format is long ex : 56456l.</p> <span id=\"example-2\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 2</span> ``` from fooStream select instanceOfLong(switchState) as state insert into barStream; ``` <p style=\"word-wrap: break-word\">if the switchState = true then this will returns false as the value is an instance of the boolean not a long.</p> <p style=\"word-wrap: break-word\">Checks whether the parameter is an instance of String or not.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` <BOOL> instanceOfString(<INT|LONG|DOUBLE|FLOAT|STRING|BOOL|OBJECT> arg) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">arg</td> <td style=\"vertical-align: top; word-wrap: break-word\">The parameter to be checked.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>DOUBLE<br>FLOAT<br>STRING<br>BOOL<br>OBJECT</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` from fooStream select instanceOfString(value) as state insert into barStream; ``` <p style=\"word-wrap: break-word\">This will return true if the value field format is string ex : 'test'.</p> <span id=\"example-2\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 2</span> ``` from fooStream select instanceOfString(switchState) as state insert into barStream; ``` <p style=\"word-wrap: break-word\">if the switchState = true then this will returns false as the value is an instance of the boolean not a string.</p> <p style=\"word-wrap: break-word\">Returns the maximum value of the input parameters.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` <INT|LONG|DOUBLE|FLOAT> maximum(<INT|LONG|DOUBLE|FLOAT> arg, <INT|LONG|DOUBLE|FLOAT> ...) 
``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">arg</td> <td style=\"vertical-align: top; word-wrap: break-word\">This function accepts one or more parameters. They can belong to any one of the available types. All the specified parameters should be of the same type.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>DOUBLE<br>FLOAT</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` @info(name = 'query1') from inputStream select maximum(price1, price2, price3) as max insert into outputStream; ``` <p style=\"word-wrap: break-word\">This will return the maximum value of the input parameters price1, price2, price3.</p> <p style=\"word-wrap: break-word\">Returns the minimum value of the input parameters.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` <INT|LONG|DOUBLE|FLOAT> minimum(<INT|LONG|DOUBLE|FLOAT> arg, <INT|LONG|DOUBLE|FLOAT> ...) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">arg</td> <td style=\"vertical-align: top; word-wrap: break-word\">This function accepts one or more parameters. They can belong to any one of the available types. All the specified parameters should be of the same type.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>DOUBLE<br>FLOAT</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` @info(name = 'query1') from inputStream select minimum(price1, price2, price3) as min insert into outputStream; ``` <p style=\"word-wrap: break-word\">This will return the minimum value of the input parameters price1, price2,
},
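To show maximum() and minimum() side by side, here is a brief illustrative sketch that derives a spread from three assumed bid attributes; the QuoteStream definition is hypothetical rather than taken from the reference text.

```
-- Illustrative sketch only: QuoteStream and its bid attributes are assumed.
define stream QuoteStream (symbol string, bid1 double, bid2 double, bid3 double);

@info(name = 'spreadQuery')
from QuoteStream
select symbol,
    maximum(bid1, bid2, bid3) as highestBid,
    minimum(bid1, bid2, bid3) as lowestBid,
    maximum(bid1, bid2, bid3) - minimum(bid1, bid2, bid3) as spread
insert into SpreadStream;
```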
{
"data": "<p style=\"word-wrap: break-word\">Returns the size of an object of type java.util.Set.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` <INT> sizeOfSet(<OBJECT> set) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">set</td> <td style=\"vertical-align: top; word-wrap: break-word\">The set object. This parameter should be of type java.util.Set. A set object may be created by the 'set' attribute aggregator in Siddhi. </td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">OBJECT</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` from stockStream select initSet(symbol) as initialSet insert into initStream; ;from initStream#window.timeBatch(10 sec) select union(initialSet) as distinctSymbols insert into distinctStockStream; from distinctStockStream select sizeOfSet(distinctSymbols) sizeOfSymbolSet insert into sizeStream; ``` <p style=\"word-wrap: break-word\">The sizeStream stream will output the number of distinct stock symbols received during a sliding window of 10 seconds.</p> <p style=\"word-wrap: break-word\">The pol2Cart function calculating the cartesian coordinates x & y for the given theta, rho coordinates and adding them as new attributes to the existing events.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` pol2Cart(<DOUBLE> theta, <DOUBLE> rho) pol2Cart(<DOUBLE> theta, <DOUBLE> rho, <DOUBLE> z) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">theta</td> <td style=\"vertical-align: top; word-wrap: break-word\">The theta value of the coordinates.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">DOUBLE</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> <tr> <td style=\"vertical-align: top\">rho</td> <td style=\"vertical-align: top; word-wrap: break-word\">The rho value of the coordinates.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">DOUBLE</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> <tr> <td style=\"vertical-align: top\">z</td> <td style=\"vertical-align: top; word-wrap: break-word\">z value of the cartesian coordinates.</td> <td style=\"vertical-align: top\">If z value is not given, drop the third parameter of the output.</td> <td style=\"vertical-align: top\">DOUBLE</td> <td style=\"vertical-align: top\">Yes</td> <td style=\"vertical-align: top\">Yes</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" 
style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` from PolarStream#pol2Cart(theta, rho) select x, y insert into outputStream ; ``` <p style=\"word-wrap: break-word\">This will return cartesian coordinates (4.99953024681082, 0.06853693328228748) for theta: 0.7854 and rho: 5.</p> <span id=\"example-2\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 2</span> ``` from PolarStream#pol2Cart(theta, rho, 3.4) select x, y, z insert into outputStream ; ``` <p style=\"word-wrap: break-word\">This will return cartesian coordinates (4.99953024681082, 0.06853693328228748, 3.4)for theta: 0.7854 and rho: 5 and z: 3.4.</p> <p style=\"word-wrap: break-word\">The logger logs the message on the given priority with or without processed event.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` log() log(<STRING> log.message) log(<BOOL> is.event.logged) log(<STRING> log.message, <BOOL> is.event.logged) log(<STRING> priority, <STRING> log.message) log(<STRING> priority, <STRING> log.message, <BOOL> is.event.logged) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">priority</td> <td style=\"vertical-align: top; word-wrap: break-word\">The priority/type of this log message (INFO, DEBUG, WARN, FATAL, ERROR, OFF, TRACE).</td> <td style=\"vertical-align: top\">INFO</td> <td style=\"vertical-align: top\">STRING</td> <td style=\"vertical-align: top\">Yes</td> <td style=\"vertical-align: top\">No</td> </tr> <tr> <td style=\"vertical-align: top\">log.message</td> <td style=\"vertical-align: top; word-wrap: break-word\">This message will be logged.</td> <td style=\"vertical-align: top\"><siddhi app name> :</td> <td style=\"vertical-align: top\">STRING</td> <td style=\"vertical-align: top\">Yes</td> <td style=\"vertical-align: top\">Yes</td> </tr> <tr> <td style=\"vertical-align:"
},
{
"data": "<td style=\"vertical-align: top; word-wrap: break-word\">To log the processed event.</td> <td style=\"vertical-align: top\">true</td> <td style=\"vertical-align: top\">BOOL</td> <td style=\"vertical-align: top\">Yes</td> <td style=\"vertical-align: top\">No</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` from fooStream#log(\"INFO\", \"Sample Event :\", true) select * insert into barStream; ``` <p style=\"word-wrap: break-word\">This will log as INFO with the message \"Sample Event :\" + fooStream:events.</p> <span id=\"example-2\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 2</span> ``` from fooStream#log(\"Sample Event :\", true) select * insert into barStream; ``` <p style=\"word-wrap: break-word\">This will log with the default log level, INFO.</p> <span id=\"example-3\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 3</span> ``` from fooStream#log(\"Sample Event :\", false) select * insert into barStream; ``` <p style=\"word-wrap: break-word\">This will only log the message.</p> <span id=\"example-4\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 4</span> ``` from fooStream#log(true) select * insert into barStream; ``` <p style=\"word-wrap: break-word\">This will only log fooStream:events.</p> <span id=\"example-5\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 5</span> ``` from fooStream#log() select * insert into barStream; ``` <p style=\"word-wrap: break-word\">This will only log fooStream:events.</p> <span id=\"example-6\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 6</span> ``` from fooStream#log(\"Sample Event :\") select * insert into barStream; ``` <p style=\"word-wrap: break-word\">This will log the message and fooStream:events.</p> <p style=\"word-wrap: break-word\">A window that holds an incoming batch of events. When a new set of events arrives, the previously arrived old events will be expired. The batch window can be used to aggregate events that come in batches. 
If it has the parameter length specified, then batch window process the batch as several chunks.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` batch() batch(<INT> window.length) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">window.length</td> <td style=\"vertical-align: top; word-wrap: break-word\">The length of a chunk</td> <td style=\"vertical-align: top\">If length value was not given it assign 0 as length and process the whole batch as once</td> <td style=\"vertical-align: top\">INT</td> <td style=\"vertical-align: top\">Yes</td> <td style=\"vertical-align: top\">No</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` define stream consumerItemStream (itemId string, price float) from consumerItemStream#window.batch() select price, str:groupConcat(itemId) as itemIds group by price insert into outputStream; ``` <p style=\"word-wrap: break-word\">This will output comma separated items IDs that have the same price for each incoming batch of events.</p> <p style=\"word-wrap: break-word\">This window outputs the arriving events as and when they arrive, and resets (expires) the window periodically based on the given cron expression.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` cron(<STRING> cron.expression) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">cron.expression</td> <td style=\"vertical-align: top; word-wrap: break-word\">The cron expression that resets the window.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">STRING</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">No</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` define stream InputEventStream (symbol string, price float, volume int); @info(name = 'query1')"
},
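As a rough sketch of the chunked behaviour described for batch() in the preceding entry, the query below processes each arriving batch in chunks of five events; the ConsumerItemStream definition mirrors the style of the reference example, but the chunk size and output attribute names are assumed.

```
-- Illustrative sketch only: chunk size 5 and the output attributes are assumed.
define stream ConsumerItemStream (itemId string, price float);

@info(name = 'chunkedBatchQuery')
from ConsumerItemStream#window.batch(5)
-- Each incoming batch is processed as chunks of at most 5 events.
select price, count() as itemsAtPrice
group by price
insert into ChunkSummaryStream;
```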
{
"data": "InputEventStream#cron('*/5 * * * * ?') select symbol, sum(price) as totalPrice insert into OutputStream; ``` <p style=\"word-wrap: break-word\">This lets the totalPrice gradually increase and resets it to zero as a batch every 5 seconds.</p> <span id=\"example-2\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 2</span> ``` define stream StockEventStream (symbol string, price float, volume int); define window StockEventWindow (symbol string, price float, volume int) cron('*/5 * * * * ?'); @info(name = 'query0') from StockEventStream insert into StockEventWindow; @info(name = 'query1') from StockEventWindow select symbol, sum(price) as totalPrice insert into OutputStream ; ``` <p style=\"word-wrap: break-word\">The defined window lets the totalPrice gradually increase and resets it to zero as a batch every 5 seconds.</p> <p style=\"word-wrap: break-word\">A delay window holds events for a specific time period that is regarded as a delay period before processing them.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` delay(<INT|LONG|TIME> window.delay) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">window.delay</td> <td style=\"vertical-align: top; word-wrap: break-word\">The time period (specified in sec, min, ms) for which the window should delay the events.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>TIME</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">No</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` define window delayWindow(symbol string, volume int) delay(1 hour); define stream PurchaseStream(symbol string, volume int); define stream DeliveryStream(symbol string); define stream OutputStream(symbol string); @info(name='query1') from PurchaseStream select symbol, volume insert into delayWindow; @info(name='query2') from delayWindow join DeliveryStream on delayWindow.symbol == DeliveryStream.symbol select delayWindow.symbol insert into OutputStream; ``` <p style=\"word-wrap: break-word\">In this example, purchase events that arrive in the 'PurchaseStream' stream are directed to a delay window. At any given time, this delay window holds purchase events that have arrived within the last hour. These purchase events in the window are matched by the 'symbol' attribute, with delivery events that arrive in the 'DeliveryStream' stream. This monitors whether the delivery of products is done with a minimum delay of one hour after the purchase.</p> <p style=\"word-wrap: break-word\">A sliding time window based on external time. 
It holds events that arrived during the last windowTime period from the external timestamp, and gets updated on every monotonically increasing timestamp.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` externalTime(<LONG> timestamp, <INT|LONG|TIME> window.time) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">timestamp</td> <td style=\"vertical-align: top; word-wrap: break-word\">The time which the window determines as current time and will act upon. The value of this parameter should be monotonically increasing.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">LONG</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> <tr> <td style=\"vertical-align: top\">window.time</td> <td style=\"vertical-align: top; word-wrap: break-word\">The sliding time period for which the window should hold events.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>TIME</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">No</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` define window cseEventWindow (symbol string, price float, volume int) externalTime(eventTime, 20 sec) output expired events; @info(name = 'query0') from cseEventStream insert into cseEventWindow; @info(name = 'query1') from cseEventWindow select symbol, sum(price) as price insert expired events into outputStream ; ``` <p style=\"word-wrap: break-word\">processing events arrived within the last 20 seconds from the eventTime and output expired"
},
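Because externalTime(), described in the preceding entry, slides on the event's own timestamp rather than the wall clock, a per-key aggregation is a common use. The sketch below assumes a SensorStream with a long ts attribute and is illustrative only.

```
-- Illustrative sketch only: SensorStream, ts and reading are assumed names.
define stream SensorStream (deviceId string, ts long, reading double);

@info(name = 'externalAvgQuery')
-- Sliding 1-minute window driven by the ts attribute, not system time.
from SensorStream#window.externalTime(ts, 1 min)
select deviceId, avg(reading) as avgReading
group by deviceId
insert into AvgReadingStream;
```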
{
"data": "<p style=\"word-wrap: break-word\">A batch (tumbling) time window based on external time, that holds events arrived during windowTime periods, and gets updated for every windowTime.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` externalTimeBatch(<LONG> timestamp, <INT|LONG|TIME> window.time) externalTimeBatch(<LONG> timestamp, <INT|LONG|TIME> window.time, <INT|LONG|TIME> start.time) externalTimeBatch(<LONG> timestamp, <INT|LONG|TIME> window.time, <INT|LONG|TIME> start.time, <INT|LONG|TIME> timeout) externalTimeBatch(<LONG> timestamp, <INT|LONG|TIME> window.time, <INT|LONG|TIME> start.time, <INT|LONG|TIME> timeout, <BOOL> replace.with.batchtime) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">timestamp</td> <td style=\"vertical-align: top; word-wrap: break-word\">The time which the window determines as current time and will act upon. The value of this parameter should be monotonically increasing.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">LONG</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> <tr> <td style=\"vertical-align: top\">window.time</td> <td style=\"vertical-align: top; word-wrap: break-word\">The batch time period for which the window should hold events.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>TIME</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">No</td> </tr> <tr> <td style=\"vertical-align: top\">start.time</td> <td style=\"vertical-align: top; word-wrap: break-word\">User defined start time. This could either be a constant (of type int, long or time) or an attribute of the corresponding stream (of type long). 
If an attribute is provided, the initial value of the attribute is considered as the startTime.</td> <td style=\"vertical-align: top\">Timestamp of first event</td> <td style=\"vertical-align: top\">INT<br>LONG<br>TIME</td> <td style=\"vertical-align: top\">Yes</td> <td style=\"vertical-align: top\">Yes</td> </tr> <tr> <td style=\"vertical-align: top\">timeout</td> <td style=\"vertical-align: top; word-wrap: break-word\">The time to wait for the arrival of a new event before flushing and giving output for the events belonging to a specific batch.</td> <td style=\"vertical-align: top\">System waits till an event from next batch arrives to flush current batch</td> <td style=\"vertical-align: top\">INT<br>LONG<br>TIME</td> <td style=\"vertical-align: top\">Yes</td> <td style=\"vertical-align: top\">No</td> </tr> <tr> <td style=\"vertical-align: top\">replace.with.batchtime</td> <td style=\"vertical-align: top; word-wrap: break-word\">This indicates whether to replace the expired event's timestamp with the batch end timestamp.</td> <td style=\"vertical-align: top\">false</td> <td style=\"vertical-align: top\">BOOL</td> <td style=\"vertical-align: top\">Yes</td> <td style=\"vertical-align: top\">No</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` define window cseEventWindow (symbol string, price float, volume int) externalTimeBatch(eventTime, 1 sec) output expired events; @info(name = 'query0') from cseEventStream insert into cseEventWindow; @info(name = 'query1') from cseEventWindow select symbol, sum(price) as price insert expired events into outputStream ; ``` <p style=\"word-wrap: break-word\">This processes events that arrive every 1 second based on the eventTime.</p> <span id=\"example-2\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 2</span> ``` define window cseEventWindow (symbol string, price float, volume int) externalTimeBatch(eventTime, 20 sec, 0) output expired events; ``` <p style=\"word-wrap: break-word\">This processes events that arrive every 20 seconds based on the eventTime, with batches starting from the 0th millisecond.</p> <span id=\"example-3\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 3</span> ``` define window cseEventWindow (symbol string, price float, volume int) externalTimeBatch(eventTime, 2 sec, eventTimestamp, 100) output expired events; ``` <p style=\"word-wrap: break-word\">This processes events that arrive every 2 seconds based on the eventTime. Considers the first event's eventTimestamp value as startTime. Waits 100 milliseconds for the arrival of a new event before flushing the current batch.</p> <p><i>Deprecated</i></p> <p style=\"word-wrap: break-word\">This window returns the latest events with the most frequently occurring value for the given attribute(s). Frequency calculation for this window processor is based on the Misra-Gries counting algorithm.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` frequent(<INT> event.count) frequent(<INT> event.count, <STRING> attribute) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0,
},
{
"data": "font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">event.count</td> <td style=\"vertical-align: top; word-wrap: break-word\">The number of most frequent events to be emitted to the stream.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">No</td> </tr> <tr> <td style=\"vertical-align: top\">attribute</td> <td style=\"vertical-align: top; word-wrap: break-word\">The attributes to group the events. If no attributes are given, the concatenation of all the attributes of the event is considered.</td> <td style=\"vertical-align: top\">The concatenation of all the attributes of the event is considered.</td> <td style=\"vertical-align: top\">STRING</td> <td style=\"vertical-align: top\">Yes</td> <td style=\"vertical-align: top\">Yes</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` @info(name = 'query1') from purchase[price >= 30]#window.frequent(2) select cardNo, price insert all events into PotentialFraud; ``` <p style=\"word-wrap: break-word\">This will returns the 2 most frequent events.</p> <span id=\"example-2\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 2</span> ``` @info(name = 'query1') from purchase[price >= 30]#window.frequent(2, cardNo) select cardNo, price insert all events into PotentialFraud; ``` <p style=\"word-wrap: break-word\">This will returns the 2 latest events with the most frequently appeared card numbers.</p> <p style=\"word-wrap: break-word\">A sliding length window that holds the last 'window.length' events at a given time, and gets updated for each arrival and expiry.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` length(<INT> window.length) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">window.length</td> <td style=\"vertical-align: top; word-wrap: break-word\">The number of events that should be included in a sliding length window.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">No</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` define window StockEventWindow (symbol string, price float, volume int) length(10) output all events; @info(name = 'query0') from StockEventStream insert into StockEventWindow; @info(name = 'query1') from StockEventWindow select symbol, sum(price) as price insert all events into 
outputStream ; ``` <p style=\"word-wrap: break-word\">This will process last 10 events in a sliding manner.</p> <p style=\"word-wrap: break-word\">A batch (tumbling) length window that holds and process a number of events as specified in the window.length.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` lengthBatch(<INT> window.length) lengthBatch(<INT> window.length, <BOOL> stream.current.event) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">window.length</td> <td style=\"vertical-align: top; word-wrap: break-word\">The number of events the window should tumble.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">No</td> </tr> <tr> <td style=\"vertical-align: top\">stream.current.event</td> <td style=\"vertical-align: top; word-wrap: break-word\">Let the window stream the current events out as and when they arrive to the window while expiring them in batches.</td> <td style=\"vertical-align: top\">false</td> <td style=\"vertical-align: top\">BOOL</td> <td style=\"vertical-align: top\">Yes</td> <td style=\"vertical-align: top\">No</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` define stream InputEventStream (symbol string, price float, volume int); @info(name = 'query1') from InputEventStream#lengthBatch(10) select symbol, sum(price) as price insert into OutputStream; ``` <p style=\"word-wrap: break-word\">This collect and process 10 events as a batch and output"
},
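As a small usage sketch for the sliding length window described above, the query below keeps a per-symbol moving average over the last 50 trades; the TradeStream definition and window size are assumed for illustration.

```
-- Illustrative sketch only: TradeStream and the window size of 50 are assumed.
define stream TradeStream (symbol string, price double);

@info(name = 'movingAvgQuery')
from TradeStream#window.length(50)
-- Sliding window of the last 50 events; updated on every arrival and expiry.
select symbol, avg(price) as movingAvg
group by symbol
insert into MovingAvgStream;
```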
{
"data": "<span id=\"example-2\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 2</span> ``` define stream InputEventStream (symbol string, price float, volume int); @info(name = 'query1') from InputEventStream#lengthBatch(10, true) select symbol, sum(price) as sumPrice insert into OutputStream; ``` <p style=\"word-wrap: break-word\">This window sends the arriving events directly to the output letting the <code>sumPrice</code> to increase gradually, after every 10 events it clears the window as a batch and resets the <code>sumPrice</code> to zero.</p> <span id=\"example-3\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 3</span> ``` define stream InputEventStream (symbol string, price float, volume int); define window StockEventWindow (symbol string, price float, volume int) lengthBatch(10) output all events; @info(name = 'query0') from InputEventStream insert into StockEventWindow; @info(name = 'query1') from StockEventWindow select symbol, sum(price) as price insert all events into OutputStream ; ``` <p style=\"word-wrap: break-word\">This uses an defined window to process 10 events as a batch and output all events.</p> <p><i>Deprecated</i></p> <p style=\"word-wrap: break-word\">This window identifies and returns all the events of which the current frequency exceeds the value specified for the supportThreshold parameter.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` lossyFrequent(<DOUBLE> support.threshold) lossyFrequent(<DOUBLE> support.threshold, <DOUBLE> error.bound) lossyFrequent(<DOUBLE> support.threshold, <DOUBLE> error.bound, <STRING> attribute) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">support.threshold</td> <td style=\"vertical-align: top; word-wrap: break-word\">The support threshold value.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">DOUBLE</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">No</td> </tr> <tr> <td style=\"vertical-align: top\">error.bound</td> <td style=\"vertical-align: top; word-wrap: break-word\">The error bound value.</td> <td style=\"vertical-align: top\">`support.threshold`/10</td> <td style=\"vertical-align: top\">DOUBLE</td> <td style=\"vertical-align: top\">Yes</td> <td style=\"vertical-align: top\">No</td> </tr> <tr> <td style=\"vertical-align: top\">attribute</td> <td style=\"vertical-align: top; word-wrap: break-word\">The attributes to group the events. 
If no attributes are given, the concatenation of all the attributes of the event is considered.</td> <td style=\"vertical-align: top\">The concatenation of all the attributes of the event is considered.</td> <td style=\"vertical-align: top\">STRING</td> <td style=\"vertical-align: top\">Yes</td> <td style=\"vertical-align: top\">Yes</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` define stream purchase (cardNo string, price float); define window purchaseWindow (cardNo string, price float) lossyFrequent(0.1, 0.01); @info(name = 'query0') from purchase[price >= 30] insert into purchaseWindow; @info(name = 'query1') from purchaseWindow select cardNo, price insert all events into PotentialFraud; ``` <p style=\"word-wrap: break-word\">lossyFrequent(0.1, 0.01) returns all the events of which the current frequency exceeds 0.1, with an error bound of 0.01.</p> <span id=\"example-2\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 2</span> ``` define stream purchase (cardNo string, price float); define window purchaseWindow (cardNo string, price float) lossyFrequent(0.3, 0.05, cardNo); @info(name = 'query0') from purchase[price >= 30] insert into purchaseWindow; @info(name = 'query1') from purchaseWindow select cardNo, price insert all events into PotentialFraud; ``` <p style=\"word-wrap: break-word\">lossyFrequent(0.3, 0.05, cardNo) returns all the events of which the cardNo attributes frequency exceeds 0.3, with an error bound of 0.05.</p> <p style=\"word-wrap: break-word\">This is a session window that holds events that belong to a specific session. The events that belong to a specific session are identified by a grouping attribute (i.e., a session key). A session gap period is specified to determine the time period after which the session is considered to be expired. A new event that arrives with a specific value for the session key is matched with the session window with the same session"
},
{
"data": "can be out of order and late arrival of events, these events can arrive after the session is expired, to include those events to the matching session key specify a latency time period that is less than the session gap period.To have aggregate functions with session windows, the events need to be grouped by the session key via a 'group by' clause.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` session(<INT|LONG|TIME> window.session) session(<INT|LONG|TIME> window.session, <STRING> window.key) session(<INT|LONG|TIME> window.session, <STRING> window.key, <INT|LONG|TIME> window.allowed.latency) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">window.session</td> <td style=\"vertical-align: top; word-wrap: break-word\">The time period for which the session considered is valid. This is specified in seconds, minutes, or milliseconds (i.e., 'min', 'sec', or 'ms'.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>TIME</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">No</td> </tr> <tr> <td style=\"vertical-align: top\">window.key</td> <td style=\"vertical-align: top; word-wrap: break-word\">The grouping attribute for events.</td> <td style=\"vertical-align: top\">default-key</td> <td style=\"vertical-align: top\">STRING</td> <td style=\"vertical-align: top\">Yes</td> <td style=\"vertical-align: top\">Yes</td> </tr> <tr> <td style=\"vertical-align: top\">window.allowed.latency</td> <td style=\"vertical-align: top; word-wrap: break-word\">This specifies the time period for which the session window is valid after the expiration of the session. The time period specified here should be less than the session time gap (which is specified via the 'window.session' parameter).</td> <td style=\"vertical-align: top\">0</td> <td style=\"vertical-align: top\">INT<br>LONG<br>TIME</td> <td style=\"vertical-align: top\">Yes</td> <td style=\"vertical-align: top\">No</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` define stream PurchaseEventStream (user string, item_number int, price float, quantity int); @info(name='query0) from PurchaseEventStream#window.session(5 sec, user, 2 sec) select * insert all events into OutputStream; ``` <p style=\"word-wrap: break-word\">This query processes events that arrive at the PurchaseEvent input stream. The 'user' attribute is the session key, and the session gap is 5 seconds. '2 sec' is specified as the allowed latency. 
Therefore, events with the matching user name that arrive 2 seconds after the expiration of the session are also considered when performing aggregations for the session identified by the given user name.</p> <p style=\"word-wrap: break-word\">This window holds a batch of events that equal the number specified as the windowLength and sorts them in the given order.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` sort(<INT> window.length, <STRING|DOUBLE|INT|LONG|FLOAT|LONG> attribute) sort(<INT> window.length, <STRING|DOUBLE|INT|LONG|FLOAT|LONG> attribute, <STRING> order, <STRING> ...) sort(<INT> window.length, <STRING|DOUBLE|INT|LONG|FLOAT|LONG> attribute, <STRING> order, <STRING|DOUBLE|INT|LONG|FLOAT|LONG> attribute, <STRING|DOUBLE|INT|LONG|FLOAT|LONG> ...) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">window.length</td> <td style=\"vertical-align: top; word-wrap: break-word\">The size of the window length.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">No</td> </tr> <tr> <td style=\"vertical-align: top\">attribute</td> <td style=\"vertical-align: top; word-wrap: break-word\">The attribute that should be checked for the order.</td> <td style=\"vertical-align: top\">The concatenation of all the attributes of the event is considered.</td> <td style=\"vertical-align: top\">STRING<br>DOUBLE<br>INT<br>LONG<br>FLOAT<br>LONG</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">Yes</td> </tr> <tr> <td style=\"vertical-align: top\">order</td> <td style=\"vertical-align: top; word-wrap: break-word\">The order define as \"asc\" or \"desc\".</td> <td style=\"vertical-align: top\">asc</td> <td style=\"vertical-align: top\">STRING</td> <td style=\"vertical-align: top\">Yes</td> <td style=\"vertical-align: top\">No</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size:"
},
{
"data": "font-weight: bold;\">EXAMPLE 1</span> ``` define stream cseEventStream (symbol string, price float, volume long); define window cseEventWindow (symbol string, price float, volume long) sort(2,volume, 'asc'); @info(name = 'query0') from cseEventStream insert into cseEventWindow; @info(name = 'query1') from cseEventWindow select volume insert all events into outputStream ; ``` <p style=\"word-wrap: break-word\">sort(5, price, 'asc') keeps the events sorted by price in the ascending order. Therefore, at any given time, the window contains the 5 lowest prices.</p> <p style=\"word-wrap: break-word\">A sliding time window that holds events that arrived during the last windowTime period at a given time, and gets updated for each event arrival and expiry.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` time(<INT|LONG|TIME> window.time) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">window.time</td> <td style=\"vertical-align: top; word-wrap: break-word\">The sliding time period for which the window should hold events.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>TIME</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">No</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` define window cseEventWindow (symbol string, price float, volume int) time(20) output all events; @info(name = 'query0') from cseEventStream insert into cseEventWindow; @info(name = 'query1') from cseEventWindow select symbol, sum(price) as price insert all events into outputStream ; ``` <p style=\"word-wrap: break-word\">This will processing events that arrived within the last 20 milliseconds.</p> <p style=\"word-wrap: break-word\">A batch (tumbling) time window that holds and process events that arrive during 'window.time' period as a batch.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` timeBatch(<INT|LONG|TIME> window.time) timeBatch(<INT|LONG|TIME> window.time, <INT|LONG> start.time) timeBatch(<INT|LONG|TIME> window.time, <BOOL> stream.current.event) timeBatch(<INT|LONG|TIME> window.time, <INT|LONG> start.time, <BOOL> stream.current.event) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">window.time</td> <td style=\"vertical-align: top; word-wrap: break-word\">The batch time period in which the window process the events.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>TIME</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">No</td> </tr> <tr> <td style=\"vertical-align: 
top\">start.time</td> <td style=\"vertical-align: top; word-wrap: break-word\">This specifies an offset in milliseconds in order to start the window at a time different to the standard time.</td> <td style=\"vertical-align: top\">Timestamp of first event</td> <td style=\"vertical-align: top\">INT<br>LONG</td> <td style=\"vertical-align: top\">Yes</td> <td style=\"vertical-align: top\">No</td> </tr> <tr> <td style=\"vertical-align: top\">stream.current.event</td> <td style=\"vertical-align: top; word-wrap: break-word\">Let the window stream the current events out as and when they arrive to the window while expiring them in batches.</td> <td style=\"vertical-align: top\">false</td> <td style=\"vertical-align: top\">BOOL</td> <td style=\"vertical-align: top\">Yes</td> <td style=\"vertical-align: top\">No</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` define stream InputEventStream (symbol string, price float, volume int); @info(name = 'query1') from InputEventStream#timeBatch(20 sec) select symbol, sum(price) as price insert into OutputStream; ``` <p style=\"word-wrap: break-word\">This collect and process incoming events as a batch every 20 seconds and output them.</p> <span id=\"example-2\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 2</span> ``` define stream InputEventStream (symbol string, price float, volume int); @info(name = 'query1') from InputEventStream#timeBatch(20 sec, true) select symbol, sum(price) as sumPrice insert into OutputStream; ``` <p style=\"word-wrap: break-word\">This window sends the arriving events directly to the output letting the <code>sumPrice</code> to increase gradually and on every 20 second interval it clears the window as a batch resetting the <code>sumPrice</code> to zero.</p> <span id=\"example-3\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size:"
},
{
"data": "font-weight: bold;\">EXAMPLE 3</span> ``` define stream InputEventStream (symbol string, price float, volume int); define window StockEventWindow (symbol string, price float, volume int) timeBatch(20 sec) output all events; @info(name = 'query0') from InputEventStream insert into StockEventWindow; @info(name = 'query1') from StockEventWindow select symbol, sum(price) as price insert all events into OutputStream ; ``` <p style=\"word-wrap: break-word\">This uses an defined window to process events arrived every 20 seconds as a batch and output all events.</p> <p style=\"word-wrap: break-word\">A sliding time window that, at a given time holds the last window.length events that arrived during last window.time period, and gets updated for every event arrival and expiry.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` timeLength(<INT|LONG|TIME> window.time, <INT> window.length) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">window.time</td> <td style=\"vertical-align: top; word-wrap: break-word\">The sliding time period for which the window should hold events.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT<br>LONG<br>TIME</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">No</td> </tr> <tr> <td style=\"vertical-align: top\">window.length</td> <td style=\"vertical-align: top; word-wrap: break-word\">The number of events that should be be included in a sliding length window..</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">INT</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">No</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` define stream cseEventStream (symbol string, price float, volume int); define window cseEventWindow (symbol string, price float, volume int) timeLength(2 sec, 10); @info(name = 'query0') from cseEventStream insert into cseEventWindow; @info(name = 'query1') from cseEventWindow select symbol, price, volume insert all events into outputStream; ``` <p style=\"word-wrap: break-word\">window.timeLength(2 sec, 10) holds the last 10 events that arrived during last 2 seconds and gets updated for every event arrival and expiry.</p> <p style=\"word-wrap: break-word\">In-memory transport that can communicate with other in-memory transports within the same JVM, itis assumed that the publisher and subscriber of a topic uses same event schema (stream definition).</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` @sink(type=\"inMemory\", topic=\"<STRING>\", @map(...))) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> 
<th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">topic</td> <td style=\"vertical-align: top; word-wrap: break-word\">Event will be delivered to allthe subscribers of the same topic</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">STRING</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">No</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` @sink(type='inMemory', @map(type='passThrough')) define stream BarStream (symbol string, price float, volume long) ``` <p style=\"word-wrap: break-word\">In this example BarStream uses inMemory transport which emit the Siddhi events internally without using external transport and transformation.</p> <p style=\"word-wrap: break-word\">This is a sink that can be used as a logger. This will log the output events in the output stream with user specified priority and a prefix</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` @sink(type=\"log\", priority=\"<STRING>\", prefix=\"<STRING>\", @map(...))) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">priority</td> <td style=\"vertical-align: top; word-wrap: break-word\">This will set the logger priority i.e log level. Accepted values are INFO, DEBUG, WARN, FATAL, ERROR, OFF, TRACE</td> <td style=\"vertical-align: top\">INFO</td> <td style=\"vertical-align: top\">STRING</td> <td style=\"vertical-align: top\">Yes</td> <td style=\"vertical-align: top\">No</td> </tr> <tr> <td style=\"vertical-align: top\">prefix</td> <td style=\"vertical-align: top; word-wrap: break-word\">This will be the prefix to the output"
},
{
"data": "If the output stream has event [2,4] and the prefix is given as \"Hello\" then the log will show \"Hello : [2,4]\"</td> <td style=\"vertical-align: top\">default prefix will be <Siddhi App Name> : <Stream Name></td> <td style=\"vertical-align: top\">STRING</td> <td style=\"vertical-align: top\">Yes</td> <td style=\"vertical-align: top\">No</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` @sink(type='log', prefix='My Log', priority='DEBUG') define stream BarStream (symbol string, price float, volume long) ``` <p style=\"word-wrap: break-word\">In this example BarStream uses log sink and the prefix is given as My Log. Also the priority is set to DEBUG.</p> <span id=\"example-2\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 2</span> ``` @sink(type='log', priority='DEBUG') define stream BarStream (symbol string, price float, volume long) ``` <p style=\"word-wrap: break-word\">In this example BarStream uses log sink and the priority is set to DEBUG. User has not specified prefix so the default prefix will be in the form <Siddhi App Name> : <Stream Name></p> <span id=\"example-3\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 3</span> ``` @sink(type='log', prefix='My Log') define stream BarStream (symbol string, price float, volume long) ``` <p style=\"word-wrap: break-word\">In this example BarStream uses log sink and the prefix is given as My Log. User has not given a priority so it will be set to default INFO.</p> <span id=\"example-4\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 4</span> ``` @sink(type='log') define stream BarStream (symbol string, price float, volume long) ``` <p style=\"word-wrap: break-word\">In this example BarStream uses log sink. 
The user has not given prefix or priority so they will be set to their default values.</p> <p style=\"word-wrap: break-word\">Pass-through mapper passed events (Event[]) through without any mapping or modifications.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` @sink(..., @map(type=\"passThrough\") ``` <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` @sink(type='inMemory', @map(type='passThrough')) define stream BarStream (symbol string, price float, volume long); ``` <p style=\"word-wrap: break-word\">In the following example BarStream uses passThrough outputmapper which emit Siddhi event directly without any transformation into sink.</p> <p style=\"word-wrap: break-word\">In-memory source that can communicate with other in-memory sinks within the same JVM, it is assumed that the publisher and subscriber of a topic uses same event schema (stream definition).</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` @source(type=\"inMemory\", topic=\"<STRING>\", @map(...))) ``` <span id=\"query-parameters\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">QUERY PARAMETERS</span> <table> <tr> <th>Name</th> <th style=\"min-width: 20em\">Description</th> <th>Default Value</th> <th>Possible Data Types</th> <th>Optional</th> <th>Dynamic</th> </tr> <tr> <td style=\"vertical-align: top\">topic</td> <td style=\"vertical-align: top; word-wrap: break-word\">Subscribes to sent on the given topic.</td> <td style=\"vertical-align: top\"></td> <td style=\"vertical-align: top\">STRING</td> <td style=\"vertical-align: top\">No</td> <td style=\"vertical-align: top\">No</td> </tr> </table> <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` @source(type='inMemory', @map(type='passThrough')) define stream BarStream (symbol string, price float, volume long) ``` <p style=\"word-wrap: break-word\">In this example BarStream uses inMemory transport which passes the received event internally without using external transport.</p> <p style=\"word-wrap: break-word\">Pass-through mapper passed events (Event[]) through without any mapping or modifications.</p> <span id=\"syntax\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Syntax</span> ``` @source(..., @map(type=\"passThrough\") ``` <span id=\"examples\" class=\"md-typeset\" style=\"display: block; font-weight: bold;\">Examples</span> <span id=\"example-1\" class=\"md-typeset\" style=\"display: block; color: rgba(0, 0, 0, 0.54); font-size: 12.8px; font-weight: bold;\">EXAMPLE 1</span> ``` @source(type='tcp', @map(type='passThrough')) define stream BarStream (symbol string, price float, volume long); ``` <p style=\"word-wrap: break-word\">In this example BarStream uses passThrough inputmapper which passes the received Siddhi event directly without any transformation into source.</p>"
}
] |
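The session-window behaviour documented in the Siddhi record above (events grouped by a session key, a session that closes after a gap of inactivity, and an optional allowed latency for late events) can be illustrated outside Siddhi. The sketch below is plain Python, not Siddhi code; the field names, the 5-second gap, and the 2-second allowed latency simply mirror the `PurchaseEventStream` example and are assumptions made for illustration.

```python
from collections import defaultdict

SESSION_GAP = 5.0        # seconds; mirrors the "5 sec" session gap in the example query
ALLOWED_LATENCY = 2.0    # seconds; mirrors the "2 sec" allowed latency (see docstring)

def sessionize(events):
    """Group (timestamp, key, payload) events into per-key sessions.

    A new session starts whenever the gap between consecutive events of the
    same key exceeds SESSION_GAP. Sorting by event time first stands in for
    Siddhi's allowed-latency handling: events that arrive up to
    ALLOWED_LATENCY after a session has expired are still merged into it.
    """
    per_key = defaultdict(list)
    for ts, key, payload in sorted(events, key=lambda e: (e[1], e[0])):
        per_key[key].append((ts, payload))

    sessions = []
    for key, items in per_key.items():
        current = [items[0]]
        for prev, cur in zip(items, items[1:]):
            if cur[0] - prev[0] > SESSION_GAP:
                sessions.append((key, current))
                current = []
            current.append(cur)
        sessions.append((key, current))
    return sessions

if __name__ == "__main__":
    purchases = [
        (0.0, "alice", {"item_number": 1, "price": 40.0}),
        (3.0, "alice", {"item_number": 2, "price": 35.0}),
        (12.0, "alice", {"item_number": 3, "price": 50.0}),  # > 5 s gap: new session
        (1.0, "bob",   {"item_number": 4, "price": 60.0}),
    ]
    for key, items in sessionize(purchases):
        print(key, [p["item_number"] for _, p in items])
```

In Siddhi itself the same grouping is expressed declaratively with `#window.session(5 sec, user, 2 sec)`, combined with a `group by` on the session key when aggregate functions are needed, as the record notes.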
{
"category": "App Definition and Development",
"file_name": "fix-12305.en.md",
"project_name": "EMQ Technologies",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "Eliminated passing incomplete client/connection information into `emqx_cm`. This could lead to internal inconsistency and affect memory consumption and some processes like node evacuation."
}
] |
{
"category": "App Definition and Development",
"file_name": "SHOW_ROUTINE_LOAD_TASK.md",
"project_name": "StarRocks",
"subcategory": "Database"
} | [
{
"data": "displayed_sidebar: \"English\" import RoutineLoadPrivNote from '../../../assets/commonMarkdown/RoutineLoadPrivNote.md' Shows the execution information of load tasks within a Routine Load job. :::note <RoutineLoadPrivNote /> For the relationship between a Routine Load job and the load tasks in it, see ::: ```SQL SHOW ROUTINE LOAD TASK [ FROM <db_name>] [ WHERE JobName = <job_name> ] ``` :::note You can add the `\\G` option to the statement (such as `SHOW ROUTINE LOAD TASK WHERE JobName = <job_name>\\G`) to vertically display the return result rather than in the usual horizontal table format. ::: | Parameter | Required | Description | | - | | -- | | db_name | No | The name of the database to which the Routine Load job belongs. | | JobName | No | The name of the Routine Load job. | | Parameter | Description | | -- | | | TaskId | The globally unique ID of the load task, automatically generated by StarRocks. | | TxnId | The ID of the transaction to which the load task belongs. | | TxnStatus | The status of the transaction to which the load task belongs. `UNKNOWN` indicates that the transaction is not started yet, possibly because the load task is not dispatched or executed. | | JobId | The ID of the load job. | | CreateTime | The date and time when the load task was created. | | LastScheduledTime | The date and time when the load task was last scheduled. | | ExecuteStartTime | The date and time when the load task was executed. | | Timeout | The timeout period for the load task, controlled by the FE parameter and the `tasktimeoutsecond` parameter in for the Routine Load job. | | BeId | The ID of the BE that executes the load task. | | DataSourceProperties | The load task's progress (measured in the offset) of consuming messages in partitions of the topic. | | Message | The information returned for the load task, including task error information. | View all load tasks in the Routine Load job `exampletblordertest`. ```SQL MySQL [exampledb]> SHOW ROUTINE LOAD TASK WHERE JobName = \"exampletbl_ordertest\"; +--+-+--+-+++++++--+ | TaskId | TxnId | TxnStatus | JobId | CreateTime | LastScheduledTime | ExecuteStartTime | Timeout | BeId | DataSourceProperties | Message | +--+-+--+-+++++++--+ | abde6998-c19a-43d6-b48c-6ca7e14144a3 | -1 | UNKNOWN | 10208 | 2023-12-22 12:46:10 | 2023-12-22 12:47:00 | NULL | 60 | -1 | Progress:{\"0\":6},LatestOffset:null | there is no new data in kafka/pulsar, wait for 10 seconds to schedule again | +--+-+--+-+++++++--+ 1 row in set (0.00 sec) ```"
}
] |
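Because StarRocks accepts MySQL-protocol connections on the FE, the `SHOW ROUTINE LOAD TASK` statement documented above can also be issued from a small driver script, for example to poll task status. The Python sketch below uses `pymysql`; the host, port, credentials, database, and job name are placeholders (the FE query port is commonly 9030, but check your deployment), and the column names simply follow the output description in the record.

```python
import pymysql  # MySQL-protocol client; StarRocks FE nodes speak the MySQL protocol

JOB_NAME = "exampletbl_ordertest"  # placeholder job name, as in the example above

conn = pymysql.connect(
    host="127.0.0.1", port=9030,                 # FE host / query port (placeholders)
    user="root", password="", database="example_db",
    cursorclass=pymysql.cursors.DictCursor,      # rows come back keyed by column name
)
try:
    with conn.cursor() as cur:
        # pymysql interpolates %s client-side, so the server receives the plain
        # SHOW ROUTINE LOAD TASK WHERE JobName = '<name>' statement shown above.
        cur.execute("SHOW ROUTINE LOAD TASK WHERE JobName = %s", (JOB_NAME,))
        for task in cur.fetchall():
            # TxnStatus stays UNKNOWN until the task is dispatched or executed.
            print(task["TaskId"], task["TxnStatus"], task["Message"])
finally:
    conn.close()
```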
{
"category": "App Definition and Development",
"file_name": "v22.5.3.21-stable.md",
"project_name": "ClickHouse",
"subcategory": "Database"
} | [
{
"data": "sidebar_position: 1 sidebar_label: 2022 Backported in : Fix possible crash in `Distributed` async insert in case of removing a replica from config. (). Backported in : Any allocations inside OvercommitTracker may lead to deadlock. Logging was not very informative so it's easier just to remove logging. Fixes . (). Backported in : Fix bug in filesystem cache that could happen in some corner case which coincided with cache capacity hitting the limit. Closes . (). Backported in : Fix error `Block structure mismatch` which could happen for INSERT into table with attached MATERIALIZED VIEW and enabled setting `extremes = 1`. Closes and . (). Backported in : Fixed error `Not found column Type in block` in selects with `PREWHERE` and read-in-order optimizations. (). Backported in : Declare RabbitMQ queue without default arguments `x-max-length` and `x-overflow`. (). Backported in : Fix incorrect fetch postgresql tables query fro PostgreSQL database engine. Closes . (). Retry docker buildx commands with progressive sleep in between (). Add docker_server.py running to backport and release CIs ()."
}
] |
{
"category": "App Definition and Development",
"file_name": "windows_.md",
"project_name": "Tremor",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "A comma delimited set of window references. Windows can be local or modular ```tremor win, mymodule::onesec, mymodule::fivesec ``` The identifers refer to a window definition and can be used in operators to define a temporally bound set of events based on the semantics of the window definition. In a tilt frame - or set of windows, the output of a window can is the input the next window in a sequence. This is a form of temporal window-driven event compaction that allows memory be conserved. At 1000 events per second, a 1 minute window needs to store 60,000 events per group per second. But 60 1 second windows can be merged with aggregate functions like `dds` and `hdr` histograms. Say, each histogram is 1k of memory per group per frame - that is a cost of 2k bytes per group. In a streaming system - indefinite aggregation of in memory events is always a tradeoff against available reosurces, and the relative business value. Often multiple windows in a tilt frame can be more effective than a single very long lived window."
}
] |
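The memory argument in the Tremor record above — merge small per-second aggregates into a larger frame instead of buffering a full minute of raw events — holds for any mergeable summary. The Python sketch below uses a trivial count/sum/min/max summary as a stand-in for the `dds`/`hdr` histograms mentioned in the text; it illustrates the tilt-frame idea only and is not Tremor code.

```python
from dataclasses import dataclass

@dataclass
class Summary:
    """A mergeable aggregate: fixed size no matter how many events it has seen."""
    count: int = 0
    total: float = 0.0
    lo: float = float("inf")
    hi: float = float("-inf")

    def add(self, value: float) -> None:
        self.count += 1
        self.total += value
        self.lo = min(self.lo, value)
        self.hi = max(self.hi, value)

    def merge(self, other: "Summary") -> None:
        self.count += other.count
        self.total += other.total
        self.lo = min(self.lo, other.lo)
        self.hi = max(self.hi, other.hi)

# Tilt frame: a one-second frame feeds a one-minute frame.
second_frame = Summary()
minute_frame = Summary()

def on_event(value: float) -> None:
    second_frame.add(value)           # raw events only ever touch the smallest frame

def on_second_boundary() -> None:
    global second_frame
    minute_frame.merge(second_frame)  # roll the 1 s summary into the 60 s frame
    second_frame = Summary()          # memory stays O(frames), not O(events)
```

At 1000 events per second this keeps two small summaries in memory instead of 60,000 buffered events per group, which is exactly the trade-off the record describes.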
{
"category": "App Definition and Development",
"file_name": "Email.md",
"project_name": "SeaTunnel",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "Email sink connector Send the data as a file to email. The tested email version is 1.5.6. | name | type | required | default value | |--|--|-|| | emailfromaddress | string | yes | - | | emailtoaddress | string | yes | - | | email_host | string | yes | - | | emailtransportprotocol | string | yes | - | | emailsmtpauth | string | yes | - | | emailauthorizationcode | string | yes | - | | emailmessageheadline | string | yes | - | | emailmessagecontent | string | yes | - | | common-options | | no | - | Sender Email Address . Address to receive mail. SMTP server to connect to. The protocol to load the session . Whether to authenticate the customer. authorization code,You can obtain the authorization code from the mailbox Settings. The subject line of the entire message. The body of the entire message. Sink plugin common parameters, please refer to for details. ```bash EmailSink { emailfromaddress = \"[email protected]\" emailtoaddress = \"[email protected]\" email_host=\"smtp.qq.com\" emailtransportprotocol=\"smtp\" emailsmtpauth=\"true\" emailauthorizationcode=\"\" emailmessageheadline=\"\" emailmessagecontent=\"\" } ``` Add Email Sink Connector"
}
] |
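Conceptually, the SeaTunnel Email sink above maps onto a plain SMTP interaction: authenticate against `email_host` using the authorization code, then send a message whose subject and body come from `emailmessageheadline` / `emailmessagecontent`, with the sink's data attached as a file. The Python standard-library sketch below shows that interaction only; it is not the connector's implementation, and the addresses, host, authorization code, port, and TLS mode are placeholder assumptions.

```python
import smtplib
from email.message import EmailMessage

# Placeholder values standing in for the connector options in the example config.
FROM_ADDR = "sender@example.com"
TO_ADDR = "receiver@example.com"
HOST = "smtp.qq.com"
AUTH_CODE = "your-authorization-code"

msg = EmailMessage()
msg["From"] = FROM_ADDR
msg["To"] = TO_ADDR
msg["Subject"] = "SeaTunnel export"             # emailmessageheadline
msg.set_content("Rows exported as attachment")  # emailmessagecontent
msg.add_attachment(b"id,name\n1,foo\n",         # the sink sends the data as a file
                   maintype="text", subtype="csv", filename="rows.csv")

with smtplib.SMTP(HOST, 587) as smtp:           # SMTP transport protocol (port assumed)
    smtp.starttls()
    smtp.login(FROM_ADDR, AUTH_CODE)            # emailsmtpauth=true: authenticate
    smtp.send_message(msg)
```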
{
"category": "App Definition and Development",
"file_name": "instance_manager.md",
"project_name": "EDB",
"subcategory": "Database"
} | [
{
"data": "CloudNativePG does not rely on an external tool for failover management. It simply relies on the Kubernetes API server and a native key component called: the Postgres instance manager. The instance manager takes care of the entire lifecycle of the PostgreSQL leading process (also known as `postmaster`). When you create a new cluster, the operator makes a Pod per instance. The field `.spec.instances` specifies how many instances to create. Each Pod will start the instance manager as the parent process (PID 1) for the main container, which in turn runs the PostgreSQL instance. During the lifetime of the Pod, the instance manager acts as a backend to handle the . The startup and liveness probes rely on `pg_isready`, while the readiness probe checks if the database is up and able to accept connections using the superuser credentials. The readiness probe is positive when the Pod is ready to accept traffic. The liveness probe controls when to restart the container once the startup probe interval has elapsed. !!! Important The liveness and readiness probes will report a failure if the probe command fails three times with a 10-second interval between each check. The liveness probe detects if the PostgreSQL instance is in a broken state and needs to be restarted. The value in `startDelay` is used to delay the probe's execution, preventing an instance with a long startup time from being restarted. The interval (in seconds) after the Pod has started before the liveness probe starts working is expressed in the `.spec.startDelay` parameter, which defaults to 3600 seconds. The correct value for your cluster is related to the time needed by PostgreSQL to start. !!! Warning If `.spec.startDelay` is too low, the liveness probe will start working before the PostgreSQL startup is complete, and the Pod could be restarted"
},
{
"data": "When a Pod running Postgres is deleted, either manually or by Kubernetes following a node drain operation, the kubelet will send a termination signal to the instance manager, and the instance manager will take care of shutting down PostgreSQL in an appropriate way. The `.spec.smartShutdownTimeout` and `.spec.stopDelay` options, expressed in seconds, control the amount of time given to PostgreSQL to shut down. The values default to 180 and 1800 seconds, respectively. The shutdown procedure is composed of two steps: The instance manager requests a smart shut down, disallowing any new connection to PostgreSQL. This step will last for up to `.spec.smartShutdownTimeout` seconds. If PostgreSQL is still up, the instance manager requests a fast shut down, terminating any existing connection and exiting promptly. If the instance is archiving and/or streaming WAL files, the process will wait for up to the remaining time set in `.spec.stopDelay` to complete the operation and then forcibly shut down. Such a timeout needs to be at least 15 seconds. !!! Important In order to avoid any data loss in the Postgres cluster, which impacts the database RPO, don't delete the Pod where the primary instance is running. In this case, perform a switchover to another instance first. During a switchover, the shutdown procedure is slightly different from the general case. Indeed, the operator requires the former primary to issue a fast shut down before the selected new primary can be promoted, in order to ensure that all the data are available on the new primary. For this reason, the `.spec.switchoverDelay`, expressed in seconds, controls the time given to the former primary to shut down gracefully and archive all the WAL files. By default it is set to `3600` (1 hour). !!! Warning The `.spec.switchoverDelay` option affects the RPO and RTO of your PostgreSQL database. Setting it to a low value, might favor RTO over RPO but lead to data loss at cluster level and/or backup level. On the contrary, setting it to a high value, might remove the risk of data loss while leaving the cluster without an active primary for a longer time during the switchover. In case of primary pod failure, the cluster will go into failover mode. Please refer to the for details."
}
] |
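The two-step shutdown sequence described for the instance manager — a smart shutdown bounded by `.spec.smartShutdownTimeout`, then a fast shutdown bounded by whatever remains of `.spec.stopDelay` — comes down to simple timeout arithmetic. The sketch below is illustrative Python only (it does not talk to PostgreSQL or Kubernetes); the 180-second and 1800-second defaults and the 15-second floor come from the record, and the callback names are hypothetical.

```python
import time

SMART_SHUTDOWN_TIMEOUT = 180   # .spec.smartShutdownTimeout default, in seconds
STOP_DELAY = 1800              # .spec.stopDelay default, in seconds

def shutdown(request_smart, request_fast):
    """Mimic the instance manager's two shutdown phases.

    `request_smart(timeout)` should return True if PostgreSQL stopped within
    the smart-shutdown window; `request_fast(timeout)` performs the fast
    shutdown (terminating connections) within the remaining budget.
    """
    started = time.monotonic()
    if request_smart(SMART_SHUTDOWN_TIMEOUT):    # phase 1: refuse new connections
        return "stopped smartly"

    elapsed = time.monotonic() - started
    remaining = max(STOP_DELAY - elapsed, 15)    # record: the timeout is at least 15 s
    request_fast(remaining)                      # phase 2: fast shutdown, then force
    return "stopped fast"

if __name__ == "__main__":
    # Dummy callbacks so the sketch runs standalone.
    print(shutdown(lambda t: True, lambda t: None))   # -> stopped smartly
    print(shutdown(lambda t: False, lambda t: None))  # -> stopped fast
```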
{
"category": "App Definition and Development",
"file_name": "sysbench-ysql.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: Benchmark YSQL performance using sysbench headerTitle: sysbench linkTitle: sysbench description: Benchmark YSQL performance using sysbench. headcontent: Benchmark YSQL performance using sysbench menu: v2.18: identifier: sysbench-ysql parent: benchmark weight: 5 type: docs sysbench is a popular tool for benchmarking databases like PostgreSQL and MySQL, as well as system capabilities like CPU, memory, and I/O. The is forked from the with a few modifications to better reflect YugabyteDB's distributed nature. To ensure the recommended hardware requirements are met and the database is correctly configured before benchmarking, review the . Install sysbench using the following steps: ```sh $ cd $HOME $ git clone https://github.com/yugabyte/sysbench.git $ cd sysbench $ ./autogen.sh && ./configure --with-pgsql && make -j && sudo make install ``` This installs the sysbench utility in `/usr/local/bin`. Make sure you have the `ysqlsh` exported to the `PATH` variable. ```sh $ export PATH=$PATH:/path/to/ysqlsh ``` Start your YugabyteDB cluster by following the steps in . {{< tip title=\"Tip\" >}} You will need the IP addresses of the nodes in the cluster for the next step. {{< /tip>}} Run the `run_sysbench.sh` shell script to load the data and run the various workloads: ```sh ./run_sysbench.sh --ip <ip> ``` This script runs all 8 workloads using 64 threads with the number of tables as 10 and the table size as 100k. If you want to run the benchmark with a different count of tables and tablesize, do the following: ```sh ./run_sysbench.sh --ip <ip> --numtables <number of tables> --tablesize <number of rows in each table> ``` You can choose to run the following workloads individually: oltp_insert oltppointselect oltpwriteonly oltpreadonly oltpreadwrite oltpupdateindex oltpupdatenon_index oltp_delete Before starting the workload, load the data as follows: ```sh $ sysbench <workload> \\ --tables=10 \\ --table-size=100000 \\ --rangekeypartitioning=true \\ --db-driver=pgsql \\ --pgsql-host=127.0.0.1 \\ --pgsql-port=5433 \\ --pgsql-user=yugabyte \\ --pgsql-db=yugabyte \\ prepare ``` Run a workload as follows: ```sh $ sysbench <workload> \\ --tables=10 \\ --table-size=100000 \\ --rangekeypartitioning=true \\ --db-driver=pgsql \\ --pgsql-host=127.0.0.1 \\ --pgsql-port=5433 \\ --pgsql-user=yugabyte \\ --pgsql-db=yugabyte \\ --threads=64 \\ --time=120 \\ --warmup-time=120 \\ run ``` The following results are for a 3-node cluster with each node on a c5.4xlarge AWS instance (16 cores, 32 GB of RAM, and 2 EBS volumes), all in the same AZ with the client VM running in the same AZ. | Workload | Throughput (txns/sec) | Latency (ms) | | :-- | :-- | :-- | | OLTPREADONLY | 3276 | 39 | | OLTPREADWRITE | 487 | 265 | | OLTPWRITEONLY | 1818 | 70 | | OLTPPOINTSELECT | 95695 | 1.3 | | OLTP_INSERT | 6348 | 20.1 | | OLTPUPDATEINDEX | 4052 | 31 | | OLTPUPDATENON_INDEX | 11496 | 11 | | OLTP_DELETE | 67499 | 1.9 |"
}
] |
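Since the sysbench record above runs eight workloads with the same set of flags, a small driver script can loop over them instead of calling `run_sysbench.sh`. The Python sketch below simply shells out to the `sysbench` binary with the flags shown in the record; the workload names follow upstream sysbench naming, the partitioning flag is spelled with underscores restored as an assumption (check `sysbench <workload> help` in the YugabyteDB fork if it differs), and the script itself is not part of the YugabyteDB tooling.

```python
import subprocess

WORKLOADS = [
    "oltp_insert", "oltp_point_select", "oltp_write_only", "oltp_read_only",
    "oltp_read_write", "oltp_update_index", "oltp_update_non_index", "oltp_delete",
]

COMMON_FLAGS = [
    "--tables=10", "--table-size=100000",
    "--range_key_partitioning=true",      # spelling assumed; taken from the record
    "--db-driver=pgsql", "--pgsql-host=127.0.0.1", "--pgsql-port=5433",
    "--pgsql-user=yugabyte", "--pgsql-db=yugabyte",
]

def run(workload: str, phase: str, extra=()):
    """Invoke one sysbench phase (prepare / run / cleanup) for a workload."""
    cmd = ["sysbench", workload, *COMMON_FLAGS, *extra, phase]
    print(">>", " ".join(cmd))
    subprocess.run(cmd, check=True)

for wl in WORKLOADS:
    run(wl, "prepare")                                            # load the data
    run(wl, "run", extra=["--threads=64", "--time=120", "--warmup-time=120"])
    run(wl, "cleanup")                                            # tidy up between workloads
```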
{
"category": "App Definition and Development",
"file_name": "CONTRIBUTING.md",
"project_name": "Redis",
"subcategory": "Database"
} | [
{
"data": "By contributing code to the Redis project in any form you agree to the Redis Software Grant and Contributor License Agreement attached below. Only contributions made under the Redis Software Grant and Contributor License Agreement may be accepted by Redis, and any contribution is subject to the terms of the Redis dual-license under RSALv2/SSPLv1 as described in the LICENSE.txt file included in the Redis source distribution. To specify the intellectual property license granted in any Contribution, Redis Ltd., (\"Redis\") requires a Software Grant and Contributor License Agreement (\"Agreement\"). This Agreement is for your protection as a contributor as well as the protection of Redis and its users; it does not change your rights to use your own Contribution for any other purpose. By making any Contribution, You accept and agree to the following terms and conditions for the Contribution. Except for the license granted in this Agreement to Redis and the recipients of the software distributed by Redis, You reserve all right, title, and interest in and to Your Contribution. Definitions 1.1. \"You\" (or \"Your\") means the copyright owner or legal entity authorized by the copyright owner that is entering into this Agreement with Redis. For legal entities, the entity making a Contribution and all other entities that Control, are Controlled by, or are under common Control with that entity are considered to be a single contributor. For the purposes of this definition, \"Control\" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. 1.2. \"Contribution\" means the code, documentation, or any original work of authorship, including any modifications or additions to an existing work described above. \"Work\" means any software project stewarded by Redis. Grant of Copyright License. Subject to the terms and conditions of this Agreement, You grant to Redis and to the recipients of the software distributed by Redis a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, sublicense, and distribute Your Contribution and such derivative works. Grant of Patent License. Subject to the terms and conditions of this Agreement, You grant to Redis and to the recipients of the software distributed by Redis a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by You that are necessarily infringed by Your Contribution alone or by a combination of Your Contribution with the Work to which such Contribution was submitted. If any entity institutes patent litigation against You or any other entity (including a cross-claim or counterclaim in a lawsuit) alleging that your Contribution, or the Work to which you have contributed, constitutes a direct or contributory patent infringement, then any patent licenses granted to the claimant entity under this Agreement for that Contribution or Work terminate as of the date such litigation is filed. Representations and"
},
{
"data": "You represent and warrant that: (i) You are legally entitled to grant the above licenses; and (ii) if You are an entity, each employee or agent designated by You is authorized to submit the Contribution on behalf of You; and (iii) your Contribution is Your original work, and that it will not infringe on any third party's intellectual property right(s). Disclaimer. You are not expected to provide support for Your Contribution, except to the extent You desire to provide support. You may provide support for free, for a fee, or not at all. Unless required by applicable law or agreed to in writing, You provide Your Contribution on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. Enforceability. Nothing in this Agreement will be construed as creating any joint venture, employment relationship, or partnership between You and Redis. If any provision of this Agreement is held to be unenforceable, the remaining provisions of this Agreement will not be affected. This represents the entire agreement between You and Redis relating to the Contribution. GitHub issues SHOULD ONLY BE USED to report bugs and for DETAILED feature requests. Everything else should be asked on Discord: https://discord.com/invite/redis PLEASE DO NOT POST GENERAL QUESTIONS that are not about bugs or suspected bugs in the GitHub issues system. We'll be delighted to help you and provide all the support on Discord. There is also an active community of Redis users at Stack Overflow: https://stackoverflow.com/questions/tagged/redis Issues and pull requests for documentation belong on the redis-doc repo: https://github.com/redis/redis-doc If you are reporting a security bug or vulnerability, see SECURITY.md. If it is a major feature or a semantical change, please don't start coding straight away: if your feature is not a conceptual fit you'll lose a lot of time writing the code without any reason. Start by posting in the mailing list and creating an issue at Github with the description of, exactly, what you want to accomplish and why. Use cases are important for features to be accepted. Here you can see if there is consensus about your idea. If in step 1 you get an acknowledgment from the project leaders, use the following procedure to submit a patch: a. Fork Redis on GitHub ( https://docs.github.com/en/github/getting-started-with-github/fork-a-repo ) b. Create a topic branch (git checkout -b my_branch) c. Push to your branch (git push origin my_branch) d. Initiate a pull request on GitHub ( https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request ) e. Done :) Keep in mind that we are very overloaded, so issues and PRs sometimes wait for a very long time. However this is not a lack of interest, as the project gets more and more users, we find ourselves in a constant need to prioritize certain issues/PRs over others. If you think your issue/PR is very important try to popularize it, have other users commenting and sharing their point of view, and so forth. This helps. For minor fixes - open a pull request on GitHub. Additional information on the RSALv2/SSPLv1 dual-license is also found in the LICENSE.txt file."
}
] |
{
"category": "App Definition and Development",
"file_name": "CHANGELOG.0.20.3.md",
"project_name": "Apache Hadoop",
"subcategory": "Database"
} | [
{
"data": "<! --> | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | publish hadoop jars to apache mvn repo. | Major | build | Giridharan Kesavan | Giridharan Kesavan | | | Namenode in Safemode reports to Simon non-zero number of deleted files during startup | Minor | namenode | Hairong Kuang | Suresh Srinivas | | | Incorrect exit codes for \"dfs -chown\", \"dfs -chgrp\" | Minor | fs | Ravi Phulari | Ravi Phulari | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Support for file sizes less than 1MB in DFSIO benchmark. | Major | benchmarks | Konstantin Shvachko | Konstantin Shvachko | | | Update eclipse .classpath template | Major | . | Aaron T. Myers | Aaron T. Myers | | | Un-deprecate the old MapReduce API in the 0.20 branch | Blocker | documentation | Tom White | Todd Lipcon | | | Miscellaneous improvements to HTML markup for web UIs | Minor | . | Todd Lipcon | Eugene Koontz | | | Update the patch level of Jetty | Major | . | Owen O'Malley | Owen O'Malley | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Rack replication policy can be violated for over replicated blocks | Critical | . | Hairong Kuang | Jitendra Nath Pandey | | | FileInputFormat may change the file system of an input path | Blocker | . | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | | Invalid example in the documentation of org.apache.hadoop.mapreduce.{Mapper,Reducer} | Trivial | documentation | Benoit Sigoure | Benoit Sigoure | | | FSImage.saveFSImage can lose edits | Blocker | namenode | Todd Lipcon | Konstantin Shvachko | | | DFSClient does not retry in getFileChecksum(..) | Major | hdfs-client | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | | Race condition between rollEditLog or rollFSImage ant FSEditsLog.write operations corrupts edits log | Blocker | namenode | Cosmin Lehene | Todd Lipcon | | | Incorrect exit codes for \"dfs -chown\", \"dfs -chgrp\" when input is given in wildcard format. | Minor | fs | Ravi Phulari | Ravi Phulari | | | WebServer shouldn't increase port number in case of negative port setting caused by Jetty's race | Major | . | Konstantin Boudnik | Konstantin Boudnik | | | ConcurrentModificationException in JobInProgress | Blocker | jobtracker | Amareshwari Sriramadasu | Dick King | | | Job.getJobID() will always return null | Blocker | client | Amar Kamat | Amareshwari Sriramadasu | | | \"java.lang.ArithmeticException: Non-terminating decimal expansion; no exact representable decimal result.\" while running \"hadoop jar"
},
{
"data": "pi 4 30\" | Minor | examples | Victor Pakhomov | Tsz Wo Nicholas Sze | | | Clearing namespace quota on \"/\" corrupts FS image | Blocker | namenode | Aaron T. Myers | Aaron T. Myers | | | The efficient comparators aren't always used except for BytesWritable and Text | Major | . | Owen O'Malley | Owen O'Malley | | | IPC leaks call parameters when exceptions thrown | Blocker | . | Todd Lipcon | Todd Lipcon | | | Fix BooleanWritable comparator in 0.20 | Major | io | Owen O'Malley | Johannes Zillmann | | | TestNodeCount logic incorrect in branch-0.20 | Minor | namenode, test | Todd Lipcon | Todd Lipcon | | | Eclipse Plugin does not work with Eclipse Ganymede (3.4) | Major | . | Aaron Kimball | Alex Kozlov | | | IPC doesn't properly handle IOEs thrown by socket factory | Major | ipc | Todd Lipcon | Todd Lipcon | | | TestDFSShell failing in branch-20 | Critical | test | Todd Lipcon | Todd Lipcon | | | bug setting block size hdfsOpenFile | Blocker | libhdfs | Eli Collins | Eli Collins | | | TestDistributedFileSystem fails with Wrong FS on weird hosts | Minor | test | Todd Lipcon | Todd Lipcon | | | Quota bug for partial blocks allows quotas to be violated | Blocker | namenode | Eli Collins | Eli Collins | | | TestCLI fails on Ubuntu with default /etc/hosts | Minor | . | Todd Lipcon | Konstantin Boudnik | | | Capacity Scheduler unit tests fail with class not found | Major | capacity-sched | Owen O'Malley | Owen O'Malley | | | Native Libraries do not load if a different platform signature is returned from org.apache.hadoop.util.PlatformName | Major | native | Stephen Watt | Stephen Watt | | | Reduce dev. cycle time by moving system testing artifacts from default build and push to maven for HDFS | Major | . | Arun C Murthy | Luke Lu | | | Thousand of CLOSE\\_WAIT socket | Major | hdfs-client | Dennis Cheung | Bharath Mundlapudi | | | raise contrib junit test jvm memory size to 512mb | Major | test | Owen O'Malley | Owen O'Malley | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Benchmark overhead of RPC session establishment | Major | benchmarks | Konstantin Shvachko | Konstantin Shvachko | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Dry entropy pool on Hudson boxes causing test timeouts | Major | test | Todd Lipcon | Konstantin Boudnik | | | Remove ref of 20.3 release from branch-0.20 CHANGES.txt | Major | documentation | Eli Collins | Eli Collins |"
}
] |
{
"category": "App Definition and Development",
"file_name": "v21.3.20.1-lts.md",
"project_name": "ClickHouse",
"subcategory": "Database"
} | [
{
"data": "sidebar_position: 1 sidebar_label: 2022 Backported in : Quota limit was not reached, but the limit was exceeded. This PR fixes . (). Backported in : Fix null pointer dereference in low cardinality data when deserializing LowCardinality data in the Native format. (). Backported in : fix crash when used fuzzBits with multiply same FixedString, Close . (). Fix data race in ProtobufSchemas (). Fix possible Pipeline stuck in case of StrictResize processor. (). Fix arraySlice with null args. (). Fix queries with hasColumnInTable constant condition and non existing column (). Merge (). Merge (). Cherry pick to 21.3: Merge ()."
}
] |
{
"category": "App Definition and Development",
"file_name": "jsonb-ysql.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: JSON support in YSQL headerTitle: JSON support linkTitle: JSON support description: YSQL JSON Support in YugabyteDB. headcontent: Explore YugabyteDB support for JSON data menu: v2.18: name: JSON support identifier: explore-json-support-1-ysql parent: explore weight: 260 type: docs <ul class=\"nav nav-tabs-alt nav-tabs-yb\"> <li > <a href=\"../jsonb-ysql/\" class=\"nav-link active\"> <i class=\"icon-postgres\" aria-hidden=\"true\"></i> YSQL </a> </li> <li > <a href=\"../jsonb-ycql/\" class=\"nav-link\"> <i class=\"icon-cassandra\" aria-hidden=\"true\"></i> YCQL </a> </li> </ul> JSON data types are for storing JSON (JavaScript Object Notation) data, as specified in . Such data can also be stored as `text`, but the JSON data types have the advantage of enforcing that each stored value is valid according to the JSON rules. Assorted JSON-specific functions and operators are also available for data stored in these data types. {{% explore-setup-single %}} JSON functionality in YSQL is nearly identical to the . YSQL supports the following two JSON data types: jsonb* - does not preserve white space, does not preserve the order of object keys, and does not keep duplicate object keys. If duplicate keys are specified in the input, only the last value is kept. json* - stores an exact copy of the input text, and therefore preserves semantically-insignificant white space between tokens, as well as the order of keys in JSON objects. Also, if a JSON object in the value contains the same key more than once, all the key/value pairs are kept. The processing functions consider the last value as the operative one. {{< tip title=\"When to use jsonb or json\" >}} In general, most applications should prefer to store JSON data as jsonb, unless there are quite specialized needs, such as legacy assumptions about ordering of object keys. They accept almost identical sets of values as input. The major practical difference is one of efficiency: json stores an exact copy of the input text, which processing functions must re-parse on each execution jsonb data is stored in a decomposed binary format that makes it slightly slower to input due to added conversion overhead, but significantly faster to process, because no re-parsing is needed. jsonb also supports indexing, which can be a significant advantage. {{< /tip >}} This section focuses on only the jsonb type. Create a basic table `books` with a primary key and one `jsonb` column `doc` that contains various details about each book. ```plpgsql yugabyte=# CREATE TABLE books(k int primary key, doc jsonb not null); ``` Next, insert some rows which contain details about various books. These details are represented as JSON documents, as shown below."
},
{
"data": "yugabyte=# INSERT INTO books(k, doc) values (1, '{ \"ISBN\" : 4582546494267, \"title\" : \"Macbeth\", \"author\" : {\"givenname\": \"William\", \"familyname\": \"Shakespeare\"}, \"year\" : 1623}'), (2, '{ \"ISBN\" : 8760835734528, \"title\" : \"Hamlet\", \"author\" : {\"givenname\": \"William\", \"familyname\": \"Shakespeare\"}, \"year\" : 1603, \"editors\" : [\"Lysa\", \"Elizabeth\"] }'), (3, '{ \"ISBN\" : 7658956876542, \"title\" : \"Oliver Twist\", \"author\" : {\"givenname\": \"Charles\", \"familyname\": \"Dickens\"}, \"year\" : 1838, \"genre\" : \"novel\", \"editors\" : [\"Mark\", \"Tony\", \"Britney\"] }'), (4, '{ \"ISBN\" : 9874563896457, \"title\" : \"Great Expectations\", \"author\" : {\"family_name\": \"Dickens\"}, \"year\" : 1950, \"genre\" : \"novel\", \"editors\" : [\"Robert\", \"John\", \"Melisa\", \"Elizabeth\"] }'), (5, '{ \"ISBN\" : 8647295405123, \"title\" : \"A Brief History of Time\", \"author\" : {\"givenname\": \"Stephen\", \"familyname\": \"Hawking\"}, \"year\" : 1988, \"genre\" : \"science\", \"editors\" : [\"Melisa\", \"Mark\", \"John\", \"Fred\", \"Jane\"] }'), (6, '{ \"ISBN\" : 6563973589123, \"year\" : 1989, \"genre\" : \"novel\", \"title\" : \"Joy Luck Club\", \"author\" : {\"givenname\": \"Amy\", \"familyname\": \"Tan\"}, \"editors\" : [\"Ruilin\", \"Aiping\"]}'); ``` {{< note title=\"Note\" >}} Some of the rows in the example have some of the keys missing (intentional). But the row with \"k=6\" has every key. {{< /note >}} List all the rows thus: ```plpgsql yugabyte=# SELECT * FROM books; ``` This is the result: ```output k | doc 5 | {\"ISBN\": 8647295405123, \"year\": 1988, \"genre\": \"science\", \"title\": \"A Brief History of Time\", \"author\": {\"givenname\": \"Stephen\", \"familyname\": \"Hawking\"}, \"editors\": [\"Melisa\", \"Mark\", \"John\", \"Fred\", \"Jane\"]} 1 | {\"ISBN\": 4582546494267, \"year\": 1623, \"title\": \"Macbeth\", \"author\": {\"givenname\": \"William\", \"familyname\": \"Shakespeare\"}} 6 | {\"ISBN\": 6563973589123, \"year\": 1989, \"genre\": \"novel\", \"title\": \"Joy Luck Club\", \"author\": {\"givenname\": \"Amy\", \"familyname\": \"Tan\"}, \"editors\": [\"Ruilin\", \"Aiping\"]} 4 | {\"ISBN\": 9874563896457, \"year\": 1950, \"genre\": \"novel\", \"title\": \"Great Expectations\", \"author\": {\"family_name\": \"Dickens\"}, \"editors\": [\"Robert\", \"John\", \"Melisa\", \"Elizabeth\"]} 2 | {\"ISBN\": 8760835734528, \"year\": 1603, \"title\": \"Hamlet\", \"author\": {\"givenname\": \"William\", \"familyname\": \"Shakespeare\"}, \"editors\": [\"Lysa\", \"Elizabeth\"]} 3 | {\"ISBN\": 7658956876542, \"year\": 1838, \"genre\": \"novel\", \"title\": \"Oliver Twist\", \"author\": {\"givenname\": \"Charles\", \"familyname\": \"Dickens\"}, \"editors\": [\"Mark\", \"Tony\", \"Britney\"]} (6 rows) ``` YSQL has two native operators, the and the , to query JSON documents. The `->` operator returns a JSON object, while the `->>` operator returns text. These operators work on both `JSON` as well as `JSONB` columns to select a subset of attributes as well as to inspect the JSON document. The following example shows how to select a few attributes from each document. 
```plpgsql yugabyte=# SELECT doc->'title' AS book_title, CONCAT(doc->'author'->'family_name', ', ', doc->'author'->'given_name') AS author FROM books; ``` This is the result: ```output book_title | author +-- \"A Brief History of Time\" | \"Hawking\", \"Stephen\" \"Macbeth\" | \"Shakespeare\", \"William\" \"Joy Luck Club\" | \"Tan\", \"Amy\" \"Great Expectations\" | \"Dickens\", \"Hamlet\" | \"Shakespeare\", \"William\" \"Oliver Twist\" | \"Dickens\", \"Charles\" (6 rows) ``` Because the `->` operator returns an object, you can chain it to inspect deep into a JSON document, as follows:"
},
{
"data": "yugabyte=# SELECT '{\"title\": \"Macbeth\", \"author\": {\"given_name\": \"William\"}}'::jsonb -> 'author' -> 'givenname' as firstname; ``` This is the result: ```output first_name \"William\" (1 row) ``` The can be used to check if a JSON document contains a certain attribute. For example, if you want to find a count of the records where the `doc` column contains a property named genre, run the following statement: ```plpgsql yugabyte=# SELECT doc->'title' AS book_title, doc->'genre' AS genre FROM books WHERE doc ? 'genre'; ``` This is the result: ```output book_title | genre +-- \"A Brief History of Time\" | \"science\" \"Joy Luck Club\" | \"novel\" \"Great Expectations\" | \"novel\" \"Oliver Twist\" | \"novel\" (4 rows) ``` The tests whether one document contains another. If you want to find all books that contain the JSON value `{\"author\": {\"given_name\": \"William\"}}` (in other words, the author of the book has the given name William), do the following: ```plpgsql yugabyte=# SELECT doc->'title' AS book_title, CONCAT(doc->'author'->'family_name', ', ', doc->'author'->'given_name') AS author FROM books WHERE doc @> '{\"author\": {\"given_name\": \"William\"}}'::jsonb; ``` This is the result: ```output book_title | author +-- \"Macbeth\" | \"Shakespeare\", \"William\" \"Hamlet\" | \"Shakespeare\", \"William\" (2 rows) ``` You can update a JSON document in a number of ways, as shown in the following examples. Use the to either update or insert the attribute into the existing JSON document. For example, if you want to add a `stock` attribute to all the books, do the following: ```plpgsql yugabyte=# UPDATE books SET doc = doc || '{\"stock\": \"true\"}'; ``` This is the result: ```sql yugabyte=# SELECT doc->'title' AS title, doc->'stock' AS stock FROM books; ``` ```output title | stock +-- \"A Brief History of Time\" | \"true\" \"Macbeth\" | \"true\" \"Joy Luck Club\" | \"true\" \"Great Expectations\" | \"true\" \"Hamlet\" | \"true\" \"Oliver Twist\" | \"true\" (6 rows) ``` Use the to remove an attribute: ```plpgsql yugabyte=# UPDATE books SET doc = doc - 'stock'; ``` This removes the field from all the documents, as shown below. ```sql yugabyte=# SELECT doc->'title' AS title, doc->'stock' AS stock FROM books; ``` ```output title | stock +- \"A Brief History of Time\" | \"Macbeth\" | \"Joy Luck Club\" | \"Great Expectations\" | \"Hamlet\" | \"Oliver Twist\" | (6 rows) ``` To replace an entire document, run the following SQL statement: ```plpgsql UPDATE books SET doc = '{\"ISBN\": 4582546494267, \"year\": 1623, \"title\": \"Macbeth\", \"author\": {\"givenname\": \"William\", \"familyname\": \"Shakespeare\"}}' WHERE k=1; ``` YSQL supports a large number of operators and built-in functions that operate on JSON documents. This section highlights a few of these built-in functions. YSQL supports all of the built-in functions supported by PostgreSQL. For a complete list, refer to . The function expands the top-level JSON document into a set of key-value pairs, as shown below. ```plpgsql yugabyte=# SELECT jsonb_each(doc) FROM books WHERE k=1; ``` The output is shown below. ```output jsonb_each (ISBN,4582546494267) (year,1623) (title,\"\"\"Macbeth\"\"\") (author,\"{\"\"givenname\"\": \"\"William\"\", \"\"familyname\"\": \"\"Shakespeare\"\"}\") (4 rows) ``` The function retrieves the keys of the top-level JSON document thus:"
},
{
"data": "yugabyte=# SELECT jsonbobjectkeys(doc) FROM books WHERE k=1; ``` This is the result: ```output jsonbobjectkeys ISBN year title author (4 rows) ``` When you select a `jsonb` (or `json`) value in `ysqlsh`, you see the terse `text` typecast of the value. The function returns a more human-readable format: ```plpgsql yugabyte=# SELECT jsonb_pretty(doc) FROM books WHERE k=1; ``` This is the result: ```output jsonb_pretty -- { + \"ISBN\": 4582546494267, + \"year\": 1623, + \"title\": \"Macbeth\", + \"author\": { + \"given_name\": \"William\", + \"family_name\": \"Shakespeare\"+ } + } (1 row) ``` You can create constraints on `jsonb` data types. This section includes some examples. For a fuller discussion, refer to . Here's how to insist that each JSON document is an object: ```plpgsql alter table books add constraint booksdocis_object check (jsonb_typeof(doc) = 'object'); ``` Here's how to insist that the ISBN is always defined and is a positive 13-digit number: ```plpgsql alter table books add constraint booksisbnispositive13digitnumber check ( (doc->'ISBN') is not null and jsonb_typeof(doc->'ISBN') = 'number' and (doc->>'ISBN')::bigint > 0 and length(((doc->>'ISBN')::bigint)::text) = 13 ); ``` Indexes are essential to perform efficient lookups by document attributes. Without indexes, queries on document attributes end up performing a full table scan and process each JSON document. This section outlines some of the indexes supported. If you want to support range queries that reference the value for the year attribute, do the following: ```plpgsql CREATE INDEX books_year ON books (((doc->>'year')::int) ASC) WHERE doc->>'year' is not null; ``` This will make the following query efficient: ```plpgsql select (doc->>'ISBN')::bigint as isbn, doc->>'title' as title, (doc->>'year')::int as year from books where (doc->>'year')::int > 1850 and doc->>'year' IS NOT NULL order by 3; ``` You might want to index only those documents that contain the attribute (as opposed to indexing the rows that have a `NULL` value for that attribute). This is a common scenario because not all the documents would have all the attributes defined. This can be achieved using a partial index. In the previous section where you created a secondary index, not all the books may have the `year` attribute defined. Suppose that you want to index only those documents that have a `NOT NULL` `year` attribute. Create the following partial index: ```plpgsql CREATE INDEX books_year ON books ((doc->>'year') ASC) WHERE doc->>'year' IS NOT NULL; ``` You can create a unique index on the \"ISBN\" key for the books table as follows: ```plpgsql CREATE UNIQUE INDEX booksisbnunq on books((doc->>'ISBN')); ``` Inserting a row with a duplicate value would fail as shown below. The book has a new primary key `k` but an existing ISBN, `4582546494267`. ```plpgsql yugabyte=# INSERT INTO books values (7, '{ \"ISBN\" : 4582546494267, \"title\" : \"Fake Book with duplicate ISBN\" }'); ``` ```output ERROR: 23505: duplicate key value violates unique constraint \"booksisbnunq\" ``` reference"
}
] |
{
"category": "App Definition and Development",
"file_name": "golden_data_test_framework.md",
"project_name": "MongoDB",
"subcategory": "Database"
} | [
{
"data": "Golden Data test framework provides ability to run and manage tests that produce an output which is verified by comparing it to the checked-in, known valid output. Any differences result in test failure and either the code or expected output has to be updated. Golden Data tests excel at bulk diffing of failed test outputs and bulk accepting of new test outputs. Code under test produces a deterministic output: That way tests can consistently succeed or fail. Incremental changes to code under test or test fixture result in incremental changes to the output. As an alternative to ASSERT for large output comparison: Serves the same purpose, but provides tools for diffing/updating. The outputs can't be objectively verified (e.g. by verifying well known properties). Examples: Verifying if sorting works, can be done by verifying that output is sorted. SHOULD NOT use Golden Data tests. Verifying that pretty printing works, MAY use Golden Data tests to verify the output, as there might not be well known properties or those properties can easily change. As stability/versioning/regression testing. Golden Data tests by storing recorded outputs, are good candidate for preserving behavior of legacy versions or detecting undesired changes in behavior, even in cases when new behavior meets other correctness criteria. Tests MUST produce text output that is diffable can be inspected in the pull request. Tests MUST produce an output that is deterministic and repeatable. Including running on different platforms. Same as with ASSERT_EQ. Tests SHOULD produce an output that changes incrementally in response to the incremental test or code changes. Multiple test variations MAY be bundled into a single test. Recommended when testing same feature with different inputs. This helps reviewing the outputs by grouping similar tests together, and also reduces the number of output files. Changes to test fixture or test code that affect non-trivial amount test outputs MUST BE done in separate pull request from production code changes: Pull request for test code only changes can be easily reviewed, even if large number of test outputs are modified. While such changes can still introduce merge conflicts, they don't introduce risk of regression (if outputs were valid Pull requests with mixed production Tests in the same suite SHOULD share the fixtures when appropriate. This reduces cost of adding new tests to the suite. Changes to the fixture may only affect expected outputs from that fixtures, and those output can be updated in bulk. Tests in different suites SHOULD NOT reuse/share fixtures. Changes to the fixture can affect large number of expected outputs. There are exceptions to that rule, and tests in different suites MAY reuse/share fixtures if: Test fixture is considered stable and changes rarely. Tests suites are related, either by sharing tests, or testing similar components. Setup/teardown costs are excessive, and sharing the same instance of a fixture for performance reasons can't be avoided. Tests SHOULD print both inputs and outputs of the tested code. This makes it easy for reviewers to verify of the expected outputs are indeed correct by having both input and output next to each other. Otherwise finding the input used to produce the new output may not be practical, and might not even be included in the diff. When resolving merge conflicts on the expected output files, one of the approaches below SHOULD be used: \"Accept theirs\", rerun the tests and verify new"
},
{
"data": "This doesn't require knowledge of production/test code changes in \"theirs\" branch, but requires re-review and re-acceptance of c hanges done by local branch. \"Accept yours\", rerun the tests and verify the new outputs. This approach requires knowledge of production/test code changes in \"theirs\" branch. However, if such changes resulted in straightforward and repetitive output changes, like due to printing code change or fixture change, it may be easier to verify than reinspecting local changes. Expected test outputs SHOULD be reused across tightly-coupled test suites. The suites are tightly-coupled if: Share the same tests, inputs and fixtures. Test similar scenarios. Test different code paths, but changes to one of the code path is expected to be accompanied by changes to the other code paths as well. Tests SHOULD use different test files, for legitimate and expected output differences between those suites. Examples: Functional tests, integration tests and unit tests that test the same behavior in different environments. Versioned tests, where expected behavior is the same for majority of test inputs/scenarios. AVOID manually modifying expected output files. Those files are considered to be auto generated. Instead, run the tests and then copy the generated output as a new expected output file. See \"How to diff and accept new test outputs\" section for instructions. Each golden data test should produce a text output that will be later verified. The output format must be text, but otherwise test author can choose a most appropriate output format (text, json, bson, yaml or mixed). If a test consists of multiple variations each variation should be clearly separated from each other. Note: Test output is usually only written. It is ok to focus on just writing serialization/printing code without a need to provide deserialization/parsing code. When actual test output is different from expected output, test framework will fail the test, log both outputs and also create following files, that can be inspected later: <outputpath>/actual/<testpath> - with actual test output <outputpath>/expected/<testpath> - with expected test output `::mongo::unittest::GoldenTestConfig` - Provides a way to configure test suite(s). Defines where the expected output files are located in the source repo. `::mongo::unittest::GoldenTestContext` - Provides an output stream where tests should write their outputs. 
Verifies the output with the expected output that is in the source repo See: Example: ```c++ GoldenTestConfig myConfig(\"src/mongo/myexpectedoutput\"); TEST(MySuite, MyTest) { GoldenTestContext ctx(myConfig); ctx.outStream() << \"print something here\" << std::endl; ctx.outStream() << \"print something else\" << std::endl; } void runVariation(GoldenTestContext& ctx, const std::string& variationName, T input) { ctx.outStream() << \"VARIATION \" << variationName << std::endl; ctx.outStream() << \"input: \" << input << std::endl; ctx.outStream() << \"output: \" << runCodeUnderTest(input) << std::endl; ctx.outStream() << std::endl; } TEST_F(MySuiteFixture, MyFeatureATest) { GoldenTestContext ctx(myConfig); runMyVariation(ctx, \"variation 1\", \"some input testing A #1\") runMyVariation(ctx, \"variation 2\", \"some input testing A #2\") runMyVariation(ctx, \"variation 3\", \"some input testing A #3\") } TEST_F(MySuiteFixture, MyFeatureBTest) { GoldenTestContext ctx(myConfig); runMyVariation(ctx, \"variation 1\", \"some input testing B #1\") runMyVariation(ctx, \"variation 2\", \"some input testing B #2\") runMyVariation(ctx, \"variation 3\", \"some input testing B #3\") runMyVariation(ctx, \"variation 4\", \"some input testing B #4\") } ``` Also see self-test: Use buildscripts/golden_test.py command line tool to manage the test outputs. This includes: diffing all output differences of all tests in a given test run output. accepting all output differences of all tests in a given test run output. buildscripts/golden_test.py requires a one-time workstation setup. Note: this setup is only required to use buildscripts/golden_test.py"
},
{
"data": "It is NOT required to just run the Golden Data tests when not using buildscripts/golden_test.py. Create a yaml config file, as described by . Set GOLDENTESTCONFIG_PATH environment variable to config file location, so that is available when running tests and when running buildscripts/golden_test.py tool. Use buildscripts/golden_test.py builtin setup to initialize default config for your current platform. Instructions for Linux Run buildscripts/golden_test.py setup utility ```bash buildscripts/golden_test.py setup ``` Instructions for Windows Run buildscripts/golden_test.py setup utility. You may be asked for a password, when not running in \"Run as administrator\" shell. ```cmd c:\\python\\python310\\python.exe buildscripts/golden_test.py setup ``` This is the same config as that would be setup by the This config uses a unique subfolder folder for each test run. (default) Allows diffing each test run separately. Works with multiple source repos. Instructions for Linux/macOS: This config uses a unique subfolder folder for each test run. (default) Allows diffing each test run separately. Works with multiple source repos. Create ~/.goldentestconfig.yml with following contents: ```yaml outputRootPattern: /var/tmp/test_output/out-%%%%-%%%%-%%%%-%%%% diffCmd: git diff --no-index \"{{expected}}\" \"{{actual}}\" ``` Update .bashrc, .zshrc ```bash export GOLDENTESTCONFIGPATH=~/.goldentest_config.yml ``` alternatively modify /etc/environment or other configuration if needed by Debugger/IDE etc. Instructions for Windows: Create %LocalAppData%\\.goldentestconfig.yml with the following contents: ```yaml outputRootPattern: 'C:\\Users\\Administrator\\AppData\\Local\\Temp\\test_output\\out-%%%%-%%%%-%%%%-%%%%' diffCmd: 'git diff --no-index \"{{expected}}\" \"{{actual}}\"' ``` Add GOLDENTESTCONFIGPATH=~/.goldentest_config.yml environment variable: ```cmd runas /profile /user:administrator \"setx GOLDENTESTCONFIGPATH %LocalAppData%\\.goldentest_config.yml\" ``` ```bash $> buildscripts/golden_test.py list ``` ```bash $> buildscripts/golden_test.py diff ``` This will run the diffCmd that was specified in the config file ```bash $> buildscripts/golden_test.py accept ``` This will copy all actual test outputs from that test run to the source repo and new expected outputs. Get expected and actual output paths for most recent test run: ```bash $> buildscripts/golden_test.py get ``` Get expected and actual output paths for most most recent test run: ```bash $> buildscripts/goldentest.py getroot ``` Get all available commands and options: ```bash $> buildscripts/golden_test.py --help ``` Parse the test log to find the root output locations where expected and actual output files were written. Then compare the folders to see the differences for tests that failed. Example: (linux/macOS) ```bash $> cat test.log | grep \"^{\" | jq -s -c -r '.[] | select(.id == 6273501 ) | .attr.expectedOutputRoot + \" \" +.attr.actualOutputRoot ' | sort | uniq $> diff -ruN --unidirectional-new-file --color=always <expectedroot> <actualroot> ``` Parse logs and find the the expected and actual outputs for each failed test. Example: (linux/macOS) ```bash $> cat test.log | grep \"^{\" | jq -s '.[] | select(.id == 6273501 ) | .attr.testPath,.attr.expectedOutput,.attr.actualOutput' ``` Golden Data test config file is a YAML file specified as: ```yaml outputRootPattern: type: String optional: true description: Root path patten that will be used to write expected and actual test outputs for all tests in the test run. 
If not specified a temporary folder location will be used. Path pattern string may use '%' characters in the last part of the path. '%' characters in the last part of the path will be replaced with random lowercase hexadecimal digits. examples: /var/tmp/test_output/out-%%%%-%%%%-%%%%-%%%% /var/tmp/test_output diffCmd: type: String optional: true description: Shell command to diff a single golden test run output. {{expected}} and {{actual}} variables should be used and will be replaced with expected and actual output folder paths respectively. This property is not used to decide whether the test passes or fails; it is only used to display differences once we've decided that a test failed. examples: git diff --no-index \"{{expected}}\" \"{{actual}}\" diff -ruN --unidirectional-new-file --color=always \"{{expected}}\" \"{{actual}}\" ```"
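For illustration only, the checked-in expected-output file for a test like `MyFeatureATest` above might look as follows; the contents are hypothetical and simply follow the `runVariation` printing convention from the example fixture:

```
VARIATION variation 1
input: some input testing A #1
output: <whatever runCodeUnderTest returned>

VARIATION variation 2
input: some input testing A #2
output: <whatever runCodeUnderTest returned>

VARIATION variation 3
input: some input testing A #3
output: <whatever runCodeUnderTest returned>
```

Keeping each input next to its output in this way is what lets a reviewer judge a changed golden file from the diff alone.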
}
] |
{
"category": "App Definition and Development",
"file_name": "25_io_csv.md",
"project_name": "Beam",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "Prompt: Is CSV format supported in Apache Beam? Response: is a data format for storing and exchanging data. Apache Beam supports reading data from and writing data to CSV files using the `ReadFromText` and `WriteToText` transforms in the `TextIO` module. For more information, see the TextIO connector documentation: To get started with CSV and Apache Beam, refer to the . Here's an example of Apache Beam pipeline code for reading data from a CSV file: ```python class CsvOptions(PipelineOptions): @classmethod def addargparse_args(cls, parser): parser.add_argument( '--file_path', default=\"gs://your-bucket/your-file.csv\", help='Csv file path' ) options = CsvOptions() with beam.Pipeline(options=options) as p: output = (p | \"Read from Csv file\" >> ReadFromCsv( path=options.file_path ) | \"Log Data\" >> Map(logging.info)) ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "helm-chart.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: Use Helm Chart to deploy on Google Kubernetes Engine (GKE) headerTitle: Google Kubernetes Engine (GKE) linkTitle: Google Kubernetes Engine (GKE) description: Use Helm Chart to deploy a single-zone YugabyteDB cluster on Google Kubernetes Engine (GKE). menu: v2.18: parent: deploy-kubernetes-sz name: Google Kubernetes Engine identifier: k8s-gke-1 weight: 623 type: docs <ul class=\"nav nav-tabs-alt nav-tabs-yb\"> <li > <a href=\"../helm-chart/\" class=\"nav-link active\"> <i class=\"fa-regular fa-dharmachakra\" aria-hidden=\"true\"></i> Helm chart </a> </li> <li > <a href=\"../statefulset-yaml/\" class=\"nav-link\"> <i class=\"fa-solid fa-cubes\" aria-hidden=\"true\"></i> YAML (remote disk) </a> </li> <li > <a href=\"../statefulset-yaml-local-ssd/\" class=\"nav-link\"> <i class=\"fa-solid fa-cubes\" aria-hidden=\"true\"></i> YAML (local disk) </a> </li> </ul> You must have a Google Kubernetes Engine (GKE) cluster that has Helm configured. If you have not installed the Helm client (`helm`), see . The YugabyteDB Helm chart has been tested with the following software versions: GKE running Kubernetes 1.20 or later. The Helm chart you use to install YugabyteDB creates three YB-Master and three YB-TServers, each with 2 CPU cores, for a total of 12 CPU cores. This means you need a Kubernetes cluster with more than 12 CPU cores. If the cluster contains three nodes, then each node should have more than 4 cores. Helm 3.4 or later. For optimal performance, ensure you set the appropriate on each node in your Kubernetes cluster. The following steps show how to meet these prerequisites: Download and install the . Configure defaults for Google Cloud. Execute the following command to set the project ID to `yugabyte`. You can change this as needed. ```sh gcloud config set project yugabyte ``` Execute the following command to set the default compute zone to `us-west1-b`. You can change this as needed. ```sh gcloud config set compute/zone us-west1-b ``` Install `kubectl`. Refer to kubectl installation instructions for your . Note that GKE is usually two or three major releases behind the upstream or OSS Kubernetes release. This means you have to make sure that you have the latest kubectl version that is compatible across different Kubernetes distributions. Ensure that `helm` is installed. First, check the Helm version, as follows: ```sh helm version ``` Expect to see the output similar to the following. Note that the `tiller` server-side component has been removed in Helm 3. ```output version.BuildInfo{Version:\"v3.0.3\", GitCommit:\"ac925eb7279f4a6955df663a0128044a8a6b7593\", GitTreeState:\"clean\", GoVersion:\"go1.13.6\"} ``` Create a Kubernetes cluster by running the following command: ```sh gcloud container clusters create yugabyte --machine-type=n1-standard-8 ``` As stated in , the default configuration in the YugabyteDB Helm chart requires Kubernetes nodes to have a total of 12 CPU cores and 45 GB RAM allocated to YugabyteDB. This can be three nodes with 4 CPU cores and 15 GB RAM allocated to YugabyteDB. The smallest Google Cloud machine type that meets this requirement is `n1-standard-8` which has 8 CPU cores and 30GB RAM. Creating a YugabyteDB cluster involves a number of steps. 
To add the YugabyteDB charts repository, run the following command: ```sh helm repo add yugabytedb https://charts.yugabyte.com ``` Make sure that you have the latest updates to the repository by running the following command: ```sh helm repo update ``` Execute the following command: ```sh helm search repo yugabytedb/yugabyte --version {{<yb-version version=\"v2.18\" format=\"short\">}} ``` Expect the following output: ```output NAME CHART VERSION APP VERSION DESCRIPTION yugabytedb/yugabyte {{<yb-version version=\"v2.18\" format=\"short\">}} {{<yb-version"
},
{
"data": "format=\"build\">}} YugabyteDB is the high-performance distributed ... ``` Run the following commands to create a namespace and then install YugabyteDB: ```sh kubectl create namespace yb-demo helm install yb-demo yugabytedb/yugabyte --version {{<yb-version version=\"v2.18\" format=\"short\">}} --namespace yb-demo --wait ``` You can check the status of the cluster using the following command: ```sh helm status yb-demo -n yb-demo ``` ```output NAME: yb-demo LAST DEPLOYED: Thu Feb 13 13:29:13 2020 NAMESPACE: yb-demo STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: Get YugabyteDB Pods by running this command: kubectl --namespace yb-demo get pods Get list of YugabyteDB services that are running: kubectl --namespace yb-demo get services Get information about the load balancer services: kubectl get svc --namespace yb-demo Connect to one of the tablet server: kubectl exec --namespace yb-demo -it yb-tserver-0 -- bash Run YSQL shell from inside of a tablet server: kubectl exec --namespace yb-demo -it yb-tserver-0 -- ysqlsh -h yb-tserver-0.yb-tservers.yb-demo Cleanup YugabyteDB Pods helm delete yb-demo --purge NOTE: You need to manually delete the persistent volume kubectl delete pvc --namespace yb-demo -l app=yb-master kubectl delete pvc --namespace yb-demo -l app=yb-tserver ``` Check the pods, as follows: ```sh kubectl get pods --namespace yb-demo ``` ```output NAME READY STATUS RESTARTS AGE yb-master-0 1/1 Running 0 4m yb-master-1 1/1 Running 0 4m yb-master-2 1/1 Running 0 4m yb-tserver-0 1/1 Running 0 4m yb-tserver-1 1/1 Running 0 4m yb-tserver-2 1/1 Running 0 4m ``` Check the services, as follows: ```sh kubectl get services --namespace yb-demo ``` ```output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE yb-master-ui LoadBalancer 10.109.39.242 35.225.153.213 7000:31920/TCP 10s yb-masters ClusterIP None <none> 7100/TCP,7000/TCP 10s yb-tserver-service LoadBalancer 10.98.36.163 35.225.153.214 6379:30929/TCP,9042:30975/TCP,5433:30048/TCP 10s yb-tservers ClusterIP None <none> 7100/TCP,9000/TCP,6379/TCP,9042/TCP,5433/TCP 10s ``` You can even check the history of the `yb-demo` deployment, as follows: ```sh helm history yb-demo -n yb-demo ``` ```output REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION 1 Tue Apr 21 17:29:01 2020 deployed yugabyte-{{<yb-version version=\"v2.18\" format=\"short\">}} {{<yb-version version=\"v2.18\" format=\"build\">}} Install complete ``` To connect and use the YSQL Shell `ysqlsh`, run the following command: ```sh kubectl exec -n yb-demo -it yb-tserver-0 -- ysqlsh -h yb-tserver-0.yb-tservers.yb-demo ``` To connect and use the YCQL Shell `ycqlsh`, run the following command: ```sh kubectl exec -n yb-demo -it yb-tserver-0 -- ycqlsh yb-tserver-0.yb-tservers.yb-demo ``` To connect an external program, get the load balancer `EXTERNAL-IP` address of the `yb-tserver-service` service and connect using port 5433 for YSQL or port 9042 for YCQL, as follows: ```sh kubectl get services --namespace yb-demo ``` ```output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ... yb-tserver-service LoadBalancer 10.98.36.163 35.225.153.214 6379:30929/TCP,9042:30975/TCP,5433:30048/TCP 10s ... ``` You can configure the cluster using the same commands and options that are described in . By default, the YugabyteDB Helm chart exposes the client API endpoints, as well as YB-Master UI endpoint using two LoadBalancers. 
To expose the client APIs using independent LoadBalancers, you can execute the following command: ```sh helm install yb-demo yugabytedb/yugabyte -f https://raw.githubusercontent.com/yugabyte/charts/master/stable/yugabyte/expose-all.yaml --version {{<yb-version version=\"v2.18\" format=\"short\">}} --namespace yb-demo --wait ``` You can also bring up an internal LoadBalancer (for either YB-Master or YB-TServer services), if required. You would need to specify the required for your cloud provider. The following command brings up an internal LoadBalancer for the YB-TServer service in Google Cloud Platform: ```sh helm install yugabyte -f https://raw.githubusercontent.com/yugabyte/charts/master/stable/yugabyte/expose-all.yaml --version {{<yb-version version=\"v2.18\" format=\"short\">}} --namespace yb-demo --name yb-demo \\ --set annotations.tserver.loadbalancer.\"cloud\\.google\\.com/load-balancer-type\"=Internal --wait ```"
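As a concrete illustration of the external connection step described above, with the YugabyteDB client shells installed locally you could point them at the sample `EXTERNAL-IP` from the service output (substitute the address and credentials reported by your own cluster):

```sh
./bin/ysqlsh -h 35.225.153.214 -p 5433 -U yugabyte

./bin/ycqlsh 35.225.153.214 9042
```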
}
] |
{
"category": "App Definition and Development",
"file_name": "create-shadow-rule.en.md",
"project_name": "ShardingSphere",
"subcategory": "Database"
} | [
{
"data": "+++ title = \"CREATE SHADOW RULE\" weight = 1 +++ The `CREATE SHADOW RULE` syntax is used to create a shadow rule. {{< tabs >}} {{% tab name=\"Grammar\" %}} ```sql CreateShadowRule ::= 'CREATE' 'SHADOW' 'RULE' ifNotExists? shadowRuleDefinition (',' shadowRuleDefinition)* ifNotExists ::= 'IF' 'NOT' 'EXISTS' shadowRuleDefinition ::= ruleName '(' storageUnitMapping shadowTableRule (',' shadowTableRule)* ')' storageUnitMapping ::= 'SOURCE' '=' storageUnitName ',' 'SHADOW' '=' storageUnitName shadowTableRule ::= tableName '(' shadowAlgorithm ')' shadowAlgorithm ::= 'TYPE' '(' 'NAME' '=' algorithmType ',' propertiesDefinition ')' ruleName ::= identifier storageUnitName ::= identifier tableName ::= identifier algorithmName ::= identifier algorithmType ::= string propertiesDefinition ::= 'PROPERTIES' '(' key '=' value (',' key '=' value)* ')' key ::= string value ::= literal ``` {{% /tab %}} {{% tab name=\"Railroad diagram\" %}} <iframe frameborder=\"0\" name=\"diagram\" id=\"diagram\" width=\"100%\" height=\"100%\"></iframe> {{% /tab %}} {{< /tabs >}} Duplicate `ruleName` cannot be created; `storageUnitMapping` specifies the mapping relationship between the `source` database and the shadow library. You need to use the storage unit managed by RDL, please refer to ; `shadowAlgorithm` can act on multiple `shadowTableRule` at the same time; If `algorithmName` is not specified, it will be automatically generated according to `ruleName`, `tableName` and `algorithmType`; `algorithmType` currently supports `VALUEMATCH`, `REGEXMATCH` and `SQL_HINT`; `ifNotExists` caluse is used for avoid `Duplicate shadow rule` error. Create a shadow rule ```sql CREATE SHADOW RULE shadow_rule( SOURCE=demo_ds, SHADOW=demodsshadow, torder(TYPE(NAME=\"SQLHINT\")), torderitem(TYPE(NAME=\"VALUEMATCH\", PROPERTIES(\"operation\"=\"insert\",\"column\"=\"userid\", \"value\"='1'))) ); ``` Create a shadow rule with `ifNotExists` clause ```sql CREATE SHADOW RULE IF NOT EXISTS shadow_rule( SOURCE=demo_ds, SHADOW=demodsshadow, torder(TYPE(NAME=\"SQLHINT\")), torderitem(TYPE(NAME=\"VALUEMATCH\", PROPERTIES(\"operation\"=\"insert\",\"column\"=\"userid\", \"value\"='1'))) ); ``` `CREATE`, `SHADOW`, `RULE`, `SOURCE`, `SHADOW`, `TYPE`, `NAME`, `PROPERTIES`"
}
] |
{
"category": "App Definition and Development",
"file_name": "CHANGELOG.md",
"project_name": "ArangoDB",
"subcategory": "Database"
} | [
{
"data": "add quotes for command in msvs generator () () add VCToolsVersion for msvs () () some Python lint issues () () use generatoroutput as outputdir () () msvs:* add SpectreMitigation attribute () () flake8 extended-ignore () () No buildtype in defaultvariables () () README.md: Add pipx installation and run instructions () () Add command line argument for `gyp --version` () () ninja build for iOS () () zos:* support IBM Open XL C/C++ & PL/I compilers on z/OS () () lock windows env () () move configuration information into pyproject.toml () () node.js debugger adds stderr (but exit code is 0) -> shouldn't throw () () add PRODUCTDIRABS variable () () execvp: printf: Argument list too long () () msvs:* avoid fixing path for arguments with \"=\" () () support building shared libraries on z/OS () () Add proper support for IBM i () () make:* only generate makefile for multiple toolsets if requested () () msvs:* add support for Visual Studio 2022 () () align flake8 test () () msvs:* fix paths again in action command arguments () () add python 3.6 to node-gyp integration test () revert for windows compatibility () support msvsquotecmd in ninja generator () () .S is an extension for asm file on Windows () () build failure with ninja and Python 3 on Windows () () add support of utf8 encoding () () py lint () use LDFLAGS_host for host toolset () () msvs.py: remove overindentation () () update gyp.el to change case to cl-case () () update shebang lines from python to python3 () () remove support for Python 2 revert posix build job () () Remove support for Python 2 () () msvs:* On Windows, arguments passed to the \"action\" commands are no longer transformed to replace slashes with backslashes. xcode:* --cross-compiling overrides arch-specific settings () msvs:* do not fix paths in action command arguments () cmake on python 3 () ValueError: invalid mode: 'rU' while trying to load binding.gyp () xcode cmake parsing () do not rewrite absolute paths to avoid long paths () () only include MARMASM when toolset is target () Correctly rename object files for absolute paths in MSVS generator. The Makefile generator will now output shared libraries directly to the product directory on all platforms (previously only macOS). Extended compilecommandsjson generator to consider more file extensions than just `c` and `cc`. `cpp` and `cxx` are now supported. Source files with duplicate basenames are now supported. The `--no-duplicate-basename-check` option was removed. The `msvsenablemarmasm` configuration option was removed in favor of auto-inclusion of the \"marmasm\" sections for Windows on ARM. Added support for passing arbitrary architectures to Xcode builds, enables `arm64` builds. Fixed a bug on Solaris where copying archives failed. Added support for MSVC cross-compilation. This allows compilation on x64 for a Windows ARM target. Fixed XCode CLT version detection on macOS Catalina. Relicensed to Node.js contributors. Fixed Windows bug introduced in v0.2.0. This is the first release of this project, based on https://chromium.googlesource.com/external/gyp with changes made over the years in Node.js and node-gyp."
}
] |
{
"category": "App Definition and Development",
"file_name": "latency-histogram.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: View latency histogram and P99 latency metrics for slow queries headerTitle: Latency histogram and P99 latencies linkTitle: Latency histogram description: View histogram and P99 latency metrics for slow queries menu: v2.18_yugabyte-platform: parent: alerts-monitoring identifier: latency-histogram weight: 50 type: docs Percentile metrics form the core set of metrics that enable users to measure query performance against SLOs (Service Level Objectives). Surfacing percentile metrics per normalized query and by Ops (Operations/second) type enables you to measure query performance against SLOs. Additionally, these metrics can help identify performance issues efficiently and quickly. You can view P99, P95, P90, and P50 metrics for every query displayed on the dashboard. You can view latency histograms for every YSQL query you run on one or multiple nodes of your universe and get an aggregated view of the metrics. Slow queries are only available for YSQL, with percentile metrics available in YBA version 2.18.2 or later, and latency histogram support in YugabyteDB version 2.18.1 (or later), or 2.19.1 (or later). To view the latency histogram and P99 metrics, access the Slow Queries dashboard and run YSQL queries using the following steps: Navigate to Universes, select your universe, then select Queries > Slow Queries. You may have to enable the Query monitoring option if it is not already. Run some queries on your universe by selecting one or more queries in the Slow Queries tab. You can see the query details listing the P25, P50, P90, P95, and P99 latency metrics as per the following illustration. To discard latency statistics gathered so far, click the Reset stats button on the Slow Queries dashboard to run on each node. {{< note title=\"Note\" >}} If latency histogram is not reported by YBA, then the histogram graph is not displayed. If the P99 latency statistics are not reported by YBA (for example, because the database version doesn't support it), then `-` is displayed in the Query Details. {{< /note >}} for details on how to run queries and view the results. Latency histogram and percentile metrics are obtained using the `yblatencyhistogram` column in `pgstatstatements` and `ybgetpercentile` function. Refer to for details on using latency histograms in YugabyteDB."
}
] |
{
"category": "App Definition and Development",
"file_name": "v22.8.10.29-lts.md",
"project_name": "ClickHouse",
"subcategory": "Database"
} | [
{
"data": "sidebar_position: 1 sidebar_label: 2022 Backported in : Fixed backward incompatibility in (de)serialization of states of `min`, `max`, `any`, `argMin`, `argMax` aggregate functions with `String` argument. The incompatibility was introduced in https://github.com/ClickHouse/ClickHouse/pull/41431 and affects 22.9, 22.10 and 22.11 branches (fixed since 22.9.6, 22.10.4 and 22.11.2 correspondingly). Some minor releases of 22.3, 22.7 and 22.8 branches are also affected: 22.3.13...22.3.14 (fixed since 22.3.15), 22.8.6...22.8.9 (fixed since 22.8.10), 22.7.6 and newer (will not be fixed in 22.7, we recommend to upgrade from 22.7.* to 22.8.10 or newer). This release note does not concern users that have never used affected versions. Incompatible versions append extra `'\\0'` to strings when reading states of the aggregate functions mentioned above. For example, if an older version saved state of `anyState('foobar')` to `statecolumn` then incompatible version will print `'foobar\\0'` on `anyMerge(statecolumn)`. Also incompatible versions write states of the aggregate functions without trailing `'\\0'`. Newer versions (that have the fix) can correctly read data written by all versions including incompatible versions, except one corner case. If an incompatible version saved a state with a string that actually ends with null character, then newer version will trim trailing `'\\0'` when reading state of affected aggregate function. For example, if an incompatible version saved state of `anyState('abrac\\0dabra\\0')` to `statecolumn` then incompatible versions will print `'abrac\\0dabra'` on `anyMerge(statecolumn)`. The issue also affects distributed queries when an incompatible version works in a cluster together with older or newer versions. (). Backported in : Wait for all files are in sync before archiving them in integration tests. (). Backported in : - Fix several buffer over-reads. (). Backported in : Fixed queries with `SAMPLE BY` with prewhere optimization on tables using `Merge` engine. (). Backported in : Fix a bug when row level filter uses default value of column. (). Backported in : Fixed primary key analysis with conditions involving `toString(enum)`. (). Fix 02267fileglobsschemainference.sql flakiness (). Update SECURITY.md on new stable tags (). Use all parameters with prefixes from ssm (). Temporarily disable `testhivequery` (). Do not checkout submodules recursively (). Use docker images cache from merged PRs in master and release branches (). Fix pagination issue in GITHUBJOBID() ()."
}
] |
{
"category": "App Definition and Development",
"file_name": "Data_distribution.md",
"project_name": "StarRocks",
"subcategory": "Database"
} | [
{
"data": "displayed_sidebar: \"English\" tocmaxheading_level: 4 import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; Configuring appropriate partitioning and bucketing at table creation can help to achieve even data distribution. Even data distribution means dividing the data into subsets according to certain rules and distributing them evenly across different nodes. It can also reduce the amount of data scanned and make full use of the cluster's parallel processing capability, thereby improving query performance. NOTE After the data distribution is specified at table creation and query patterns or data characteristics in the business scenario evolves, since v3.2 StarRocks supports to meet the requirements for query performance in the latest business scenarios. Since v3.1, you do not need to specify the bucketing key in the DISTRIBUTED BY clause when creating a table or adding a partition. StarRocks supports random bucketing, which randomly distributes data across all buckets. For more information, see . Since v2.5.7, you can choose not to manually set the number of buckets when you create a table or add a partition. StarRocks can automatically set the number of buckets (BUCKETS). However, if the performance does not meet your expectations after StarRocks automatically sets the number of buckets and you are familiar with the bucketing mechanism, you can still . Modern distributed database systems generally use the following basic distribution methods: Round-Robin, Range, List, and Hash. Round-Robin: distributes data across different nodes in a cyclic. Range: distributes data across different nodes based on the ranges of partitioning column values. As shown in the diagram, the ranges [1-3] and [4-6] correspond to different nodes. List: distributes data across different nodes based on the discrete values of partitioning columns, such as gender and province. Each discrete value is mapped to a node, and multiple different values might be mapped to the same node. Hash: distributes data across different nodes based on a hash function. To achieve more flexible data partitioning, in addition to using one of the above data distribution methods, you can also combine these methods based on specific business requirements. Common combinations include Hash+Hash, Range+Hash, and Hash+List. StarRocks supports both separate and composite use of data distribution methods. NOTE In addition to the general distribution methods, StarRocks also supports Random distribution to simplify bucketing configuration. Also, StarRocks distributes data by implementing the two-level partitioning + bucketing method. The first level is partitioning: Data within a table can be partitioned. Supported partitioning methods are expression partitioning, range partitioning, and list partitioning. Or you can choose not to use partitioning (the entire table is regarded as one partition). The second level is bucketing: Data in a partition needs to be further distributed into smaller buckets. Supported bucketing methods are hash and random bucketing. | Distribution method | Partitioning and bucketing method | Description | | - | | | | Random distribution | Random bucketing | The entire table is considered a partition. The data in the table is randomly distributed into different buckets. This is the default data distribution method. | | Hash distribution | Hash bucketing | The entire table is considered a partition. 
The data in the table is distributed to the corresponding buckets, which is based on the hash values of the data's bucketing key by using a hash function. | | Range+Random distribution | <ol><li>Expression partitioning or range partitioning </li><li>Random bucketing </li></ol> | <ol><li>The data in the table is distributed to the corresponding partitions, which is based on the ranges where partitioning column values fall in. </li><li>The data in the partition is randomly distributed across different"
},
{
"data": "</li></ol> | | Range+Hash distribution | <ol><li>Expression partitioning or range partitioning</li><li>Hash bucketing </li></ol> | <ol><li>The data in the table is distributed to the corresponding partitions, which is based on the ranges where partitioning column values fall in.</li><li>The data in the partition is distributed to the corresponding buckets, which is based on the hash values of the data's bucketing key by using a hash function. </li></ol> | | List+Random distribution | <ol><li>Expression partitioning or list partitioning</li><li>Random bucketing </li></ol> | <ol><li>The data in the table is distributed to the corresponding partitions, which is based on the ranges where partitioning column values fall in.</li><li>The data in the partition is randomly distributed across different buckets.</li></ol> | | List+Hash distribution | <ol><li>Expression partitioning or List partitioning</li><li>Hash bucketing </li></ol> | <ol><li>The data in the table is partitioned based on the value lists that the partitioning columns values belongs to.</li><li>The data in the partition is distributed to the corresponding buckets, which is based on the hash values of the data's bucketing key by using a hash function.</li></ol> | Random distribution If you do not configure partitioning and bucketing methods at table creation, random distribution is used by default. This distribution method currently can only be used to create a Duplicate Key table. ```SQL CREATE TABLE site_access1 ( event_day DATE, site_id INT DEFAULT '10', pv BIGINT DEFAULT '0' , city_code VARCHAR(100), user_name VARCHAR(32) DEFAULT '' ) DUPLICATE KEY (eventday,siteid,pv); -- Because the partitioning and bucketing methods are not configured, random distribution is used by default. ``` Hash distribution ```SQL CREATE TABLE site_access2 ( event_day DATE, site_id INT DEFAULT '10', city_code SMALLINT, user_name VARCHAR(32) DEFAULT '', pv BIGINT SUM DEFAULT '0' ) AGGREGATE KEY (eventday, siteid, citycode, username) -- Use hash bucketing as the bucketing method and must specify the bucketing key. DISTRIBUTED BY HASH(eventday,siteid); ``` Range+Random distribution (This distribution method currently can only be used to create a Duplicate Key table.) ```SQL CREATE TABLE site_access3 ( event_day DATE, site_id INT DEFAULT '10', pv BIGINT DEFAULT '0' , city_code VARCHAR(100), user_name VARCHAR(32) DEFAULT '' ) DUPLICATE KEY(eventday,siteid,pv) -- Use expression partitioning as the partitioning method and configure a time function expression. -- You can also use range partitioning. PARTITION BY datetrunc('day', eventday); -- Because the bucketing method is not configured, random bucketing is used by default. ``` Range+Hash distribution ```SQL CREATE TABLE site_access4 ( event_day DATE, site_id INT DEFAULT '10', city_code VARCHAR(100), user_name VARCHAR(32) DEFAULT '', pv BIGINT SUM DEFAULT '0' ) AGGREGATE KEY(eventday, siteid, citycode, username) -- Use expression partitioning as the partitioning method and configure a time function expression. -- You can also use range partitioning. PARTITION BY datetrunc('day', eventday) -- Use hash bucketing as the bucketing method and must specify the bucketing key. DISTRIBUTED BY HASH(eventday, siteid); ``` List+Random distribution (This distribution method currently can only be used to create a Duplicate Key table.) 
```SQL CREATE TABLE trechargedetail1 ( id bigint, user_id bigint, recharge_money decimal(32,2), city varchar(20) not null, dt date not null ) DUPLICATE KEY(id) -- Use expression partitioning as the partitioning method and specify the partitioning column. -- You can also use list partitioning. PARTITION BY (city); -- Because the bucketing method is not configured, random bucketing is used by default. ``` List+Hash distribution ```SQL CREATE TABLE trechargedetail2 ( id bigint, user_id bigint, recharge_money decimal(32,2), city varchar(20) not null, dt date not null ) DUPLICATE KEY(id) -- Use expression partitioning as the partitioning method and specify the partitioning column. -- You can also use list partitioning. PARTITION BY (city) -- Use hash bucketing as the bucketing method and must specify the bucketing"
},
{
"data": "DISTRIBUTED BY HASH(city,id); ``` The partitioning method divides a table into multiple partitions. Partitioning primarily is used to split a table into different management units (partitions) based on the partitioning key. You can set a storage strategy for each partition, including the number of buckets, the strategy of storing hot and cold data, the type of storage medium, and the number of replicas. StarRocks allows you to use different types of storage mediums within a cluster. For example, you can store the latest data on solid-state drives (SSDs) to improve query performance, and historical data on SATA hard drives to reduce storage costs. | Partitioning method | Scenarios | Methods to create partitions | | - | | | | Expression partitioning (recommended) | Previously known as automatic partitioning. This partitioning method is more flexible and easy-to-use. It is suitable for most scenarios including querying and managing data based on continuous date ranges or enum values. | Automatically created during data loading | | Range partitioning | The typical scenario is to store simple, ordered data that is often queried and managed based on continuous date/numeric ranges. For instance, in some special cases, historical data needs to be partitioned by month, while recent data needs to be partitioned by day. | Created manually, dynamically, or in batch | | List partitioning | A typical scenario is to query and manage data based on enum values, and a partition needs to include data with different values for each partitioning column. For example, if you frequently query and manage data based on countries and cities, you can use this method and select `city` as the partitioning column. So a partition can store data for multiple cities belonging to the same country. | Created manually | The partitioning key is composed of one or more partitioning columns. Selecting a proper partitioning column can effectively reduce the amount of data scanned during queries. In most business systems, partitioning based on time is commonly adopted to resolve certain issues caused by the deletion of expired data and facilitate the management of tiered storage of hot and cold data. In this case, you can use expression partitioning or range partitioning and specify a time column as the partitioning column. Additionally, if the data is frequently queried and managed based on ENUM values, you can use expression partitioning or list partitioning and specify a column including these values as the partitioning column. When choosing the partitioning granularity, you need to consider data volume, query patterns, and data management granularity. Example 1: If the monthly data volume in a table is small, partitioning by month can reduce the amount of metadata compared to partitioning by day, thereby reducing the resource consumption of metadata management and scheduling. Example 2: If the monthly data volume in a table is large and queries mostly request data of certain days, partitioning by day can effectively reduce the amount of data scanned during queries. Example 3: If the data needs to expire on a daily basis, partitioning by day is recommended. The bucketing method divides a partition into multiple buckets. Data in a bucket is referred to as a tablet. The supported bucketing methods are (from v3.1) and . Random bucketing: When creating a table or adding partitions, you do not need to set a bucketing key. Data within a partition is randomly distributed into different buckets. 
Hash Bucketing: When creating a table or adding partitions, you need to specify a bucketing"
},
{
"data": "Data within the same partition is divided into buckets based on the values of the bucketing key, and rows with the same value in the bucketing key are distributed to the corresponding and unique bucket. The number of buckets: By default, StarRocks automatically sets the number of buckets (from v2.5.7). You can also manually set the number of buckets. For more information, please refer to . NOTICE Since v3.1, StarRocks's supports the time function expression and does not support the column expression. Since v3.0, StarRocks supports (previously known as automatic partitioning) which is more flexible and easy-to-use. This partitioning method is suitable for most scenarios such as querying and managing data based on continuous date ranges or enum values. You only need to configure a partition expression (a time function expression or a column expression) at table creation, and StarRocks will automatically create partitions during data loading. You no longer need to manually create numerous partitions in advance, nor configure dynamic partition properties. Range partitioning is suitable for storing simple, contiguous data, such as time series data, or continuous numerical data. And you frequently query and manage data based on continuous date/numerical ranges. Also, it can be applied in some special cases where historical data needs to be partitioned by month, and recent data needs to be partitioned by day. You need to explicitly define the data partitioning columns and establish the mapping relationship between partitions and ranges of partitioning column values. During data loading, StarRocks assigns the data to the corresponding partitions based on the ranges to which the data partitioning column values belong. As for the data type of the partitioning columns, before v3.3.0, range partitioning only supports partitioning columns of date and integer types. Since v3.3.0, three specific time functions can be used as partition columns. When explicitly defining the mapping relationship between partitions and ranges of partitioning column values, you need to first use a specific time function to convert partitioning column values of timestamps or strings into date values, and then divide the partitions based on the converted date values. :::info If the partitioning column value is a timestamp, you need to use the fromunixtime or fromunixtimems function to convert a timestamp to a date value when dividing the partitions. When the fromunixtime function is used, the partitioning column only supports INT and BIGINT types. When the fromunixtimems function is used, the partitioning column only supports BIGINT type. If the partitioning column value is a string (STRING, VARCHAR, or CHAR type), you need to use the str2date function to convert the string to a date value when dividing the partitions. ::: Define the mapping relationship between each partition and the range of partitioning column values. The partitioning column is of date type. ```SQL CREATE TABLE site_access( event_day DATE, site_id INT, city_code VARCHAR(100), user_name VARCHAR(32), pv BIGINT SUM DEFAULT '0' ) AGGREGATE KEY(eventday, siteid, citycode, username) PARTITION BY RANGE(event_day)( PARTITION p1 VALUES LESS THAN (\"2020-01-31\"), PARTITION p2 VALUES LESS THAN (\"2020-02-29\"), PARTITION p3 VALUES LESS THAN (\"2020-03-31\") ) DISTRIBUTED BY HASH(site_id); ``` The partitioning column is of integer type. 
```SQL CREATE TABLE site_access( datekey INT, site_id INT, city_code SMALLINT, user_name VARCHAR(32), pv BIGINT SUM DEFAULT '0' ) AGGREGATE KEY(datekey, siteid, citycode, user_name) PARTITION BY RANGE (datekey) ( PARTITION p1 VALUES LESS THAN (\"20200131\"), PARTITION p2 VALUES LESS THAN (\"20200229\"), PARTITION p3 VALUES LESS THAN (\"20200331\") ) DISTRIBUTED BY HASH(site_id); ``` Three specific time functions can be used as partitioning columns (supported since"
},
{
"data": "When explicitly defining the mapping relationship between partitions and the ranges of partition column values, you can use a specific time function to convert the partition column values of timestamps or strings into date values, and then divide the partitions based on the converted date values. <Tabs groupId=\"manual partitioning\"> <TabItem value=\"example1\" label=\"The partition column values are timestamps\" default> ```SQL -- A 10-digit timestamp accurate to the second, for example, 1703832553. CREATE TABLE site_access( event_time bigint, site_id INT, city_code SMALLINT, user_name VARCHAR(32), pv BIGINT SUM DEFAULT '0' ) AGGREGATE KEY(eventtime, siteid, citycode, username) PARTITION BY RANGE(fromunixtime(eventtime)) ( PARTITION p1 VALUES LESS THAN (\"2021-01-01\"), PARTITION p2 VALUES LESS THAN (\"2021-01-02\"), PARTITION p3 VALUES LESS THAN (\"2021-01-03\") ) DISTRIBUTED BY HASH(site_id) ; -- A 13-digit timestamp accurate to the millisecond, for example, 1703832553219. CREATE TABLE site_access( event_time bigint, site_id INT, city_code SMALLINT, user_name VARCHAR(32), pv BIGINT SUM DEFAULT '0' ) AGGREGATE KEY(eventtime, siteid, citycode, username) PARTITION BY RANGE(fromunixtimems(event_time))( PARTITION p1 VALUES LESS THAN (\"2021-01-01\"), PARTITION p2 VALUES LESS THAN (\"2021-01-02\"), PARTITION p3 VALUES LESS THAN (\"2021-01-03\") ) DISTRIBUTED BY HASH(site_id); ``` </TabItem> <TabItem value=\"example2\" label=\"The partition column values are strings\"> ```SQL CREATE TABLE site_access ( event_time varchar(100), site_id INT, city_code SMALLINT, user_name VARCHAR(32), pv BIGINT SUM DEFAULT '0' ) AGGREGATE KEY(eventtime, siteid, citycode, username) PARTITION BY RANGE(str2date(event_time, '%Y-%m-%d'))( PARTITION p1 VALUES LESS THAN (\"2021-01-01\"), PARTITION p2 VALUES LESS THAN (\"2021-01-02\"), PARTITION p3 VALUES LESS THAN (\"2021-01-03\") ) DISTRIBUTED BY HASH(site_id); ``` </TabItem> </Tabs> related properties are configured at table creation. StarRocks automatically creates new partitions in advance and removes expired partitions to ensure data freshness, which implements time-to-live (TTL) management for partitions. Different from the automatic partition creation ability provided by the expression partitioning, dynamic partitioning can only periodically create new partitions based on the properties. If the new data does not belong to these partitions, an error is returned for the load job. However, the automatic partition creation ability provided by the expression partitioning can always create corresponding new partitions based on the loaded data. Multiple partitions can be created in batch at and after table creation. You can specify the start and end time for all the partitions created in batch in `START()` and `END()` and the partition increment value in `EVERY()`. However, note that the range of partitions is left-closed and right-open, which includes the start time but does not include the end time. The naming rule for partitions is the same as that of dynamic partitioning. The partitioning column is of date type. When the partitioning column is of date type, at table creation, you can use `START()` and `END()` to specify the start date and end date for all the partitions created in batch, and `EVERY(INTERVAL xxx)` to specify the incremental interval between two partitions. Currently the interval granularity supports `HOUR` (since v3.0), `DAY`, `WEEK`, `MONTH`, and"
},
{
"data": "<Tabs groupId=\"batch partitioning(date)\"> <TabItem value=\"example1\" label=\"with the same date interval\" default> In the following example, the partitions created in a batch start from `2021-01-01` and end on `2021-01-04`, with a partition increment of one day: ```SQL CREATE TABLE site_access ( datekey DATE, site_id INT, city_code SMALLINT, user_name VARCHAR(32), pv BIGINT DEFAULT '0' ) DUPLICATE KEY(datekey, siteid, citycode, user_name) PARTITION BY RANGE (datekey) ( START (\"2021-01-01\") END (\"2021-01-04\") EVERY (INTERVAL 1 DAY) ) DISTRIBUTED BY HASH(site_id); ``` It is equivalent to using the following `PARTITION BY` clause in the CREATE TABLE statement: ```SQL PARTITION BY RANGE (datekey) ( PARTITION p20210101 VALUES [('2021-01-01'), ('2021-01-02')), PARTITION p20210102 VALUES [('2021-01-02'), ('2021-01-03')), PARTITION p20210103 VALUES [('2021-01-03'), ('2021-01-04')) ) ``` </TabItem> <TabItem value=\"example2\" label=\"with different date intervals\"> You can create batches of date partitions with different incremental intervals by specifying different incremental intervals in `EVERY` for each batch of partitions (make sure that the partition ranges between different batches do not overlap). Partitions in each batch are created according to the `START (xxx) END (xxx) EVERY (xxx)` clause. For example: ```SQL CREATE TABLE site_access ( datekey DATE, site_id INT, city_code SMALLINT, user_name VARCHAR(32), pv BIGINT DEFAULT '0' ) DUPLICATE KEY(datekey, siteid, citycode, user_name) PARTITION BY RANGE (datekey) ( START (\"2019-01-01\") END (\"2021-01-01\") EVERY (INTERVAL 1 YEAR), START (\"2021-01-01\") END (\"2021-05-01\") EVERY (INTERVAL 1 MONTH), START (\"2021-05-01\") END (\"2021-05-04\") EVERY (INTERVAL 1 DAY) ) DISTRIBUTED BY HASH(site_id); ``` It is equivalent to using the following `PARTITION BY` clause in the CREATE TABLE statement: ```SQL PARTITION BY RANGE (datekey) ( PARTITION p2019 VALUES [('2019-01-01'), ('2020-01-01')), PARTITION p2020 VALUES [('2020-01-01'), ('2021-01-01')), PARTITION p202101 VALUES [('2021-01-01'), ('2021-02-01')), PARTITION p202102 VALUES [('2021-02-01'), ('2021-03-01')), PARTITION p202103 VALUES [('2021-03-01'), ('2021-04-01')), PARTITION p202104 VALUES [('2021-04-01'), ('2021-05-01')), PARTITION p20210501 VALUES [('2021-05-01'), ('2021-05-02')), PARTITION p20210502 VALUES [('2021-05-02'), ('2021-05-03')), PARTITION p20210503 VALUES [('2021-05-03'), ('2021-05-04')) ) ``` </TabItem> </Tabs> The partitioning column is of integer type. When the data type of the partitioning column is INT, you specify the range of partitions in `START` and `END` and define the incremental value in `EVERY`. Example: > NOTE > > The partitioning column values in START() and END() need to be wrapped in double quotation marks, while the incremental value in the EVERY() does not need to be wrapped in double quotation marks. 
<Tabs groupId=\"batch partitioning(integer)\"> <TabItem value=\"example1\" label=\"with the same numerical interval\" default> In the following example, the range of all the partition starts from `1` and ends at `5`, with a partition increment of `1`: ```SQL CREATE TABLE site_access ( datekey INT, site_id INT, city_code SMALLINT, user_name VARCHAR(32), pv BIGINT DEFAULT '0' ) DUPLICATE KEY(datekey, siteid, citycode, user_name) PARTITION BY RANGE (datekey) ( START (\"1\") END (\"5\") EVERY (1) ) DISTRIBUTED BY HASH(site_id); ``` It is equivalent to using the following `PARTITION BY` clause in the CREATE TABLE statement: ```SQL PARTITION BY RANGE (datekey) ( PARTITION p1 VALUES [(\"1\"), (\"2\")), PARTITION p2 VALUES [(\"2\"), (\"3\")), PARTITION p3 VALUES [(\"3\"), (\"4\")), PARTITION p4 VALUES [(\"4\"), (\"5\")) ) ``` </TabItem> <TabItem value=\"example2\" label=\"with different numerical intervals\"> You can create batches of numerical partitions with different incremental intervals by specifying different incremental intervals in `EVERY` for each batch of partitions (make sure that the partition ranges between different batches do not overlap). Partitions in each batch are created according to the `START (xxx) END (xxx) EVERY (xxx)` clause. For example: ```SQL CREATE TABLE site_access ( datekey INT, site_id INT, city_code SMALLINT, user_name VARCHAR(32), pv BIGINT DEFAULT '0' ) DUPLICATE KEY(datekey, siteid, citycode, user_name) PARTITION BY RANGE (datekey) ( START (\"1\") END (\"10\") EVERY (1), START (\"10\") END (\"100\") EVERY (10) ) DISTRIBUTED BY HASH(site_id); ``` </TabItem> </Tabs> Three specific time functions can be used as partitioning columns (supported since v3.3.0). When explicitly defining the mapping relationship between partitions and the ranges of partition column values, you can use a specific time function to convert the partition column values of timestamps or strings into date values, and then divide the partitions based on the converted date values. <Tabs groupId=\"batch partitioning(timestamp and string)\"> <TabItem value=\"example1\" label=\"The partitioning column values are timestamps\" default> ```SQL -- A 10-digit timestamp accurate to the second, for example, 1703832553. CREATE TABLE site_access( event_time bigint, site_id INT, city_code SMALLINT, user_name VARCHAR(32), pv BIGINT DEFAULT '0' ) PARTITION BY RANGE(fromunixtime(eventtime)) ( START (\"2021-01-01\") END (\"2021-01-10\") EVERY (INTERVAL 1 DAY) ) DISTRIBUTED BY HASH(site_id); -- A 13-digit timestamp accurate to the milliseconds, for example,"
},
{
"data": "CREATE TABLE site_access( event_time bigint, site_id INT, city_code SMALLINT, user_name VARCHAR(32), pv BIGINT DEFAULT '0' ) PARTITION BY RANGE(fromunixtimems(event_time))( START (\"2021-01-01\") END (\"2021-01-10\") EVERY (INTERVAL 1 DAY) ) DISTRIBUTED BY HASH(site_id); ``` </TabItem> <TabItem value=\"example2\" label=\"The partitioning column values are strings\"> ```SQL CREATE TABLE site_access ( event_time varchar(100), site_id INT, city_code SMALLINT, user_name VARCHAR(32), pv BIGINT DEFAULT '0' ) PARTITION BY RANGE(str2date(event_time, '%Y-%m-%d'))( START (\"2021-01-01\") END (\"2021-01-10\") EVERY (INTERVAL 1 DAY) ) DISTRIBUTED BY HASH(site_id); ``` </TabItem> </Tabs> is suitable for accelerating queries and efficiently managing data based on enum values. It is especially useful for scenarios where a partition needs to include data with different values in a partitioning column. For example, if you frequently query and manage data based on countries and cities, you can use this partitioning method and select the `city` column as the partitioning column. In this case, one partition can contain data of various cities belonging to one country. StarRocks stores data in the corresponding partitions based on the explicit mapping of the predefined value list for each partition. For range partitioning and list partitioning, you can manually add new partitions to store new data. However for expression partitioning, because partitions are created automatically during data loading, you do not need to do so. The following statement adds a new partition to table `site_access` to store data for a new month: ```SQL ALTER TABLE site_access ADD PARTITION p4 VALUES LESS THAN (\"2020-04-30\") DISTRIBUTED BY HASH(site_id); ``` The following statement deletes partition `p1` from table `site_access`. NOTE This operation does not immediately delete data in a partition. Data is retained in the Trash for a period of time (one day by default). If a partition is mistakenly deleted, you can use the command to restore the partition and its data. ```SQL ALTER TABLE site_access DROP PARTITION p1; ``` The following statement restores partition `p1` and its data to table `site_access`. ```SQL RECOVER PARTITION p1 FROM site_access; ``` The following statement returns details of all partitions in table `site_access`. ```SQL SHOW PARTITIONS FROM site_access; ``` StarRocks distributes the data in a partition randomly across all buckets. It is suitable for scenarios with small data sizes and relatively low requirement for query performance. If you do not set a bucketing method, StarRocks uses random bucketing by default and automatically sets the number of buckets. However, note that if you query massive amounts of data and frequently use certain columns as filter conditions, the query performance provided by random bucketing may not be optimal. In such scenarios, it is recommended to use . When these columns are used as filter conditions for queries, only data in a small number of buckets that the query hits need to be scanned and computed, which can significantly improve query performance. You can only use random bucketing to create a Duplicate Key table. You cannot specify a table bucketed randomly to belong to a . cannot be used to load data into tables bucketed randomly. In the following CREATE TABLE example, the `DISTRIBUTED BY xxx` statement is not used, so StarRocks uses random bucketing by default, and automatically sets the number of buckets. 
```SQL CREATE TABLE site_access1( event_day DATE, site_id INT DEFAULT '10', pv BIGINT DEFAULT '0' , city_code VARCHAR(100), user_name VARCHAR(32) DEFAULT '' ) DUPLICATE KEY(event_day,site_id,pv); ``` However, if you are familiar with StarRocks' bucketing mechanism, you can also manually set the number of buckets when creating a table with random bucketing."
},
{
"data": "```SQL CREATE TABLE site_access2( event_day DATE, site_id INT DEFAULT '10', pv BIGINT DEFAULT '0' , city_code VARCHAR(100), user_name VARCHAR(32) DEFAULT '' ) DUPLICATE KEY(eventday,siteid,pv) DISTRIBUTED BY RANDOM BUCKETS 8; -- manually set the number of buckets to 8 ``` StarRocks can use hash bucketing to subdivide data in a partition into buckets based on the bucketing key and . In hash bucketing, a hash function takes data's bucketing key value as an input and calculates a hash value. Data is stored in the corresponding bucket based on the mapping between the hash values and buckets. Improved query performance: Rows with the same bucketing key values are stored in the same bucket, reducing the amount of data scanned during queries. Even data distribution: By selecting columns with higher cardinality (a larger number of unique values) as the bucketing key, data can be more evenly distributed across buckets. We recommend that you choose the column that satisfy the following two requirements as the bucketing column. high cardinality column such as ID column that often used in a filter for queries But if no columns satisfy both requirements, you need to determine the bucketing column according to the complexity of queries. If the query is complex, it is recommended that you select high cardinality columns as bucketing columns to ensure that the data is as evenly distributed as possible across all the buckets and improve the cluster resource utilization. If the query is relatively simple, it is recommended to select columns that are frequently used as filer conditions in queries as bucketing columns to improve query efficiency. If partition data cannot be evenly distributed across all the buckets by using one bucketing column, you can choose multiple bucketing columns. Note that it is recommended to use no more than 3 columns. When a table is created, you must specify the bucketing columns. The data types of bucketing columns must be INTEGER, DECIMAL, DATE/DATETIME, or CHAR/VARCHAR/STRING. Since 3.2, bucketing columns can be modified by using ALTER TABLE after table creation. In the following example, the `siteaccess` table is created by using `siteid` as the bucketing column. Additionally, when data in the `siteaccess` table is queried, data is often filtered by sites. Using `siteid` as the bucketing key can prune a significant number of irrelevant buckets during queries. ```SQL CREATE TABLE site_access( event_day DATE, site_id INT DEFAULT '10', city_code VARCHAR(100), user_name VARCHAR(32) DEFAULT '', pv BIGINT SUM DEFAULT '0' ) AGGREGATE KEY(eventday, siteid, citycode, username) PARTITION BY RANGE(event_day) ( PARTITION p1 VALUES LESS THAN (\"2020-01-31\"), PARTITION p2 VALUES LESS THAN (\"2020-02-29\"), PARTITION p3 VALUES LESS THAN (\"2020-03-31\") ) DISTRIBUTED BY HASH(site_id); ``` Suppose each partition of table `siteaccess` has 10 buckets. In the following query, 9 out of 10 buckets are pruned, so StarRocks only needs to scan 1/10 of the data in the `siteaccess` table: ```SQL select sum(pv) from site_access where site_id = 54321; ``` However, if `siteid` is unevenly distributed and a large number of queries only request data of a few sites, using only one bucketing column can result in severe data skew, causing system performance bottlenecks. In this case, you can use a combination of bucketing columns. For example, the following statement uses `siteid` and `city_code` as bucketing columns. 
```SQL CREATE TABLE site_access ( site_id INT DEFAULT '10', city_code SMALLINT, user_name VARCHAR(32) DEFAULT '', pv BIGINT SUM DEFAULT '0' ) AGGREGATE KEY(site_id, city_code, user_name) DISTRIBUTED BY HASH(site_id,city_code); ``` Practically speaking, you can use one or two bucketing columns based on your business requirements."
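To make the trade-off concrete, here is a minimal sketch (using the `site_access` table above and illustrative filter values) of how the combined bucketing key behaves at query time: only a query whose equality filters cover both bucketing columns can be routed to a single bucket, while a filter on just one of them has to scan more buckets:

```SQL
-- Equality filters on both bucketing columns: the query can be routed to a single bucket.
SELECT sum(pv) FROM site_access WHERE site_id = 54321 AND city_code = 24;

-- Only one of the two bucketing columns is filtered: more buckets have to be scanned.
SELECT sum(pv) FROM site_access WHERE site_id = 54321;
```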
},
{
"data": "Using one bucketing column `siteid` is highly beneficial for short queries as it reduces data exchange between nodes, thereby enhancing the overall performance of the cluster. On the other hand, adopting two bucketing columns `siteid` and `city_code` is advantageous for long queries as it can leverage the overall concurrency of the distributed cluster to significantly improve performance. NOTE Short queries involve scanning a small amount of data, and can be completed on a single node. Long queries involve scanning a large amount of data, and their performance can be significantly improved by parallel scanning across multiple nodes in a distributed cluster. Buckets reflect how data files are actually organized in StarRocks. Automatically set the number of buckets (recommended) Since v2.5.7, StarRocks supports automatically setting the number of buckets based on machine resources and data volume for a partition. :::tip If the raw data size of a partition exceeds 100 GB, we recommend that you manually configure the number of buckets using Method 2. ::: <Tabs groupId=\"automaticexamples1\"> <TabItem value=\"example1\" label=\"Table configured with hash bucketing\" default> Example: ```sql CREATE TABLE site_access ( site_id INT DEFAULT '10', city_code SMALLINT, user_name VARCHAR(32) DEFAULT '', event_day DATE, pv BIGINT SUM DEFAULT '0') AGGREGATE KEY(siteid, citycode, username,eventday) PARTITION BY datetrunc('day', eventday) DISTRIBUTED BY HASH(siteid,citycode); -- do not need to set the number of buckets ``` </TabItem> <TabItem value=\"example2\" label=\"Table configured with random bucketing\"> As for tables configured with random bucketing, in addition to automatically setting the number of buckets in a partition, StarRocks further optimizes the logic since v3.2.0. StarRocks also can dynamically increase the number of buckets in a partition during data loading, based on the cluster capacity and the volume of loaded data. :::warning To enable the on-demand and dynamic increase of the number of buckets, you need to set the table property `PROPERTIES(\"bucketsize\"=\"xxx\")` to specify the size of a single bucket. If the data volume in a partition is small, you can set the `bucketsize` to 1 GB. Otherwise, you can set the `bucket_size` to 4 GB. Once the on-demand and dynamic increase of the number of buckets is enabled, and you need to rollback to version 3.1, you have to first delete the table which enables the dynamic increase in the number of buckets. Then you need to manually execute metadata checkpoint using before rolling back. ::: Example: ```sql CREATE TABLE details1 ( event_day DATE, site_id INT DEFAULT '10', pv BIGINT DEFAULT '0', city_code VARCHAR(100), user_name VARCHAR(32) DEFAULT '') DUPLICATE KEY (eventday,siteid,pv) PARTITION BY datetrunc('day', eventday) -- The number of buckets in a partition is automatically determined by StarRocks and the number dynamically increases on demand because the size of a bucket is set to 1 GB. PROPERTIES(\"bucket_size\"=\"1073741824\") ; CREATE TABLE details2 ( event_day DATE, site_id INT DEFAULT '10', pv BIGINT DEFAULT '0' , city_code VARCHAR(100), user_name VARCHAR(32) DEFAULT '') DUPLICATE KEY (eventday,siteid,pv) PARTITION BY datetrunc('day', eventday) -- The number of buckets in the table partition is automatically determined by StarRocks, and the number is fixed and does not dynamically increases on demand because the size of a bucket is not set. 
; ``` </TabItem> </Tabs> Manually set the number of buckets Since v2.4.0, StarRocks supports using multiple threads to scan a tablet in parallel during a query, thereby reducing the dependency of scanning performance on the tablet count. We recommend that each tablet contain about 10 GB of raw data. If you intend to manually set the number of buckets, you can estimate the amount of data in each partition of a table and then decide the number of buckets."
},
{
"data": "To enable parallel scanning on tablets, make sure the `enabletabletinternalparallel` parameter is set to `TRUE` globally for the entire system (`SET GLOBAL enabletabletinternalparallel = true;`). <Tabs groupId=\"manualexamples1\"> <TabItem value=\"example1\" label=\"Table configured with hash bucketing\" default> ```SQL CREATE TABLE site_access ( site_id INT DEFAULT '10', city_code SMALLINT, user_name VARCHAR(32) DEFAULT '', event_day DATE, pv BIGINT SUM DEFAULT '0') AGGREGATE KEY(siteid, citycode, username,eventday) PARTITION BY datetrunc('day', eventday) DISTRIBUTED BY HASH(siteid,citycode) BUCKETS 30; -- Suppose the amount of raw data that you want to load into a partition is 300 GB. -- Because we recommend that each tablet contain 10 GB of raw data, the number of buckets can be set to 30. DISTRIBUTED BY HASH(siteid,citycode) BUCKETS 30; ``` </TabItem> <TabItem value=\"example2\" label=\"Table configured with random bucketing\"> ```sql CREATE TABLE details ( site_id INT DEFAULT '10', city_code VARCHAR(100), user_name VARCHAR(32) DEFAULT '', event_day DATE, pv BIGINT DEFAULT '0' ) DUPLICATE KEY (siteid,citycode) PARTITION BY datetrunc('day', eventday) DISTRIBUTED BY RANDOM BUCKETS 30 ; ``` </TabItem> </Tabs> Automatically set the number of buckets (recommended) Since v2.5.7, StarRocks supports automatically setting the number of buckets based on machine resources and data volume for a partition. :::tip If the raw data size of a partition exceeds 100 GB, we recommend that you manually configure the number of buckets using Method 2. ::: <Tabs groupId=\"automaticexamples2\"> <TabItem value=\"example1\" label=\"Table configured with hash bucketing\" default> ```sql -- Automatically set the number of buckets for all partitions. ALTER TABLE siteaccess DISTRIBUTED BY HASH(siteid,city_code); -- Automatically set the number of buckets for specific partitions. ALTER TABLE site_access PARTITIONS (p20230101, p20230102) DISTRIBUTED BY HASH(siteid,citycode); -- Automatically set the number of buckets for new partitions. ALTER TABLE site_access ADD PARTITION p20230106 VALUES [('2023-01-06'), ('2023-01-07')) DISTRIBUTED BY HASH(siteid,citycode); ``` </TabItem> <TabItem value=\"example2\" label=\"Table configured with random bucketing\"> As for tables configured with random bucketing, in addition to automatically setting the number of buckets in a partition, StarRocks further optimizes the logic since v3.2.0. StarRocks also can dynamically increase the number of buckets in a partition during data loading, based on the cluster capacity and the volume of loaded data. This not only makes partition creation easier but also increases bulk load performance. :::warning To enable the on-demand and dynamic increase of the number of buckets, you need to set the table property `PROPERTIES(\"bucketsize\"=\"xxx\")` to specify the size of a single bucket. If the data volume in a partition is small, you can set the `bucketsize` to 1 GB. Otherwise, you can set the `bucket_size` to 4 GB. Once the on-demand and dynamic increase of the number of buckets is enabled, and you need to rollback to version 3.1, you have to first delete the table which enables the dynamic increase in the number of buckets. Then you need to manually execute metadata checkpoint using before rolling back. ::: ```sql -- The number of buckets for all partitions is automatically set by StarRocks and this number is fixed because the on-demand and dynamic increase of the number of buckets is disabled. 
ALTER TABLE details DISTRIBUTED BY RANDOM; -- The number of buckets for all partitions is automatically set by StarRocks and the on-demand and dynamic increase of the number of buckets is enabled. ALTER TABLE details SET(\"bucket_size\"=\"1073741824\"); -- Automatically set the number of buckets for specific partitions. ALTER TABLE details PARTITIONS (p20230103, p20230104) DISTRIBUTED BY RANDOM; -- Automatically set the number of buckets for new partitions. ALTER TABLE details ADD PARTITION p20230106 VALUES [('2023-01-06'), ('2023-01-07')) DISTRIBUTED BY RANDOM; ``` </TabItem> </Tabs> Manually set the number of buckets You can also manually specify the buckets"
},
{
"data": "To calculate the number of buckets for a partition, you can refer to the approach used when manually setting the number of buckets at table creation, . <Tabs groupId=\"manualexamples2\"> <TabItem value=\"example1\" label=\"Table configured with hash bucketing\" default> ```sql -- Manually set the number of buckets for all partitions ALTER TABLE site_access DISTRIBUTED BY HASH(siteid,citycode) BUCKETS 30; -- Manually set the number of buckets for specific partitions. ALTER TABLE site_access partitions p20230104 DISTRIBUTED BY HASH(siteid,citycode) BUCKETS 30; -- Manually set the number of buckets for new partitions. ALTER TABLE site_access ADD PARTITION p20230106 VALUES [('2023-01-06'), ('2023-01-07')) DISTRIBUTED BY HASH(siteid,citycode) BUCKETS 30; ``` </TabItem> <TabItem value=\"example2\" label=\"Table configured with random bucketing\"> ```sql -- Manually set the number of buckets for all partitions ALTER TABLE details DISTRIBUTED BY RANDOM BUCKETS 30; -- Manually set the number of buckets for specific partitions. ALTER TABLE details partitions p20230104 DISTRIBUTED BY RANDOM BUCKETS 30; -- Manually set the number of buckets for new partitions. ALTER TABLE details ADD PARTITION p20230106 VALUES [('2023-01-06'), ('2023-01-07')) DISTRIBUTED BY RANDOM BUCKETS 30; ``` Manually set the default number of buckets for dynamic partitions. ```sql ALTER TABLE details_dynamic SET (\"dynamic_partition.buckets\"=\"xxx\"); ``` </TabItem> </Tabs> After creating a table, you can execute to view the number of buckets set by StarRocks for each partition. As for a table configured with hash bucketing, the number of buckets for each partitions is fixed. :::info As for a table configured with random bucketing which enable the on-demand and dynamic increase of the number of buckets, the number of buckets in each partition dynamically increases. So the returned result displays the current number of buckets for each partition. For this table type, the actual hierarchy within a partition is as follows: partition > subpartition > bucket. To increase the number of buckets, StarRocks actually adds a new subpartition which includes a certain number of buckets. As a result, the SHOW PARTITIONS statement may return multiple data rows with the same partition name, which show the information of the subpartitions within the same partition. ::: NOTICE StarRocks's currently does not support this feature. As query patterns and data volume evolve in business scenarios, the configurations specified at table creation, such as the bucketing method, the number of buckets, and the sort key, may no longer be suitable for the new business scenario and even may cause query performance to decrease. At this point, you can use `ALTER TABLE` to modify the bucketing method, the number of buckets, and the sort key to optimize data distribution. For example: Increase the number of buckets when data volume within partitions is significantly increased When the data volume within partitions becomes significantly larger than before, it is necessary to modify the number of buckets to maintain tablet sizes generally within the range of 1 GB to 10 GB. Modify the bucketing key to avoid data skew When the current bucketing key can cause data skew (for example, only the `k1` column is configured as the bucketing key), it is necessary to specify more suitable columns or add additional columns to the bucketing key. 
For example: ```SQL ALTER TABLE t DISTRIBUTED BY HASH(k1, k2) BUCKETS 20; -- If the StarRocks version is 3.1 or later and the table is a Duplicate Key table, you can consider directly using the system's default bucketing settings, that is, random bucketing and the number of buckets automatically set by StarRocks. ALTER TABLE t DISTRIBUTED BY RANDOM; ``` Adapting the sort key due to changes in query patterns If the business query patterns have significantly changed and additional columns are used as conditional columns, it can be beneficial to adjust the sort key accordingly."
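As a minimal sketch (hypothetical table and column names, and assuming a StarRocks version that supports changing the sort key with `ALTER TABLE ... ORDER BY`), adjusting the sort key might look like this:

```SQL
-- Rebuild the sort key so that the column now frequently used as a filter comes first.
-- Hypothetical names; adapt the table and columns to your own schema.
ALTER TABLE t ORDER BY (k3, k1);
```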
}
] |
{
"category": "App Definition and Development",
"file_name": "System_variable.md",
"project_name": "StarRocks",
"subcategory": "Database"
} | [
{
"data": "displayed_sidebar: \"English\" StarRocks provides many system variables that can be set and modified to suit your requirements. This section describes the variables supported by StarRocks. You can view the settings of these variables by running the command on your MySQL client. You can also use the command to dynamically set or modify variables. You can make these variables take effect globally on the entire system, only in the current session, or only in a single query statement. The variables in StarRocks refer to the variable sets in MySQL, but some variables are only compatible with the MySQL client protocol and do not function on the MySQL database. NOTE Any user has the privilege to run SHOW VARIABLES and make a variable take effect at session level. However, only users with the SYSTEM-level OPERATE privilege can make a variable take effect globally. Globally effective variables take effect on all the future sessions (excluding the current session). If you want to make a setting change for the current session and also make that setting change apply to all future sessions, you can make the change twice, once without the `GLOBAL` modifier and once with it. For example: ```SQL SET querymemlimit = 137438953472; -- Apply to the current session. SET GLOBAL querymemlimit = 137438953472; -- Apply to all future sessions. ``` StarRocks supports three types (levels) of variables: global variables, session variables, and `SET_VAR` hints. Their hierarchical relationship is as follows: Global variables take effect on global level, and can be overridden by session variables and `SET_VAR` hints. Session variables take effect only on the current session, and can be overridden by `SET_VAR` hints. `SET_VAR` hints take effect only on the current query statement. You can view all or some variables by using `SHOW VARIABLES [LIKE 'xxx']`. Example: ```SQL -- Show all variables in the system. SHOW VARIABLES; -- Show variables that match a certain pattern. SHOW VARIABLES LIKE '%time_zone%'; ``` You can set variables to take effect globally or only on the current session. When set to global, the new value will be used for all the future sessions, while the current session still uses the original value. When set to \"current session only\", the variable will only take effect on the current session. A variable set by `SET <var_name> = xxx;` only takes effect for the current session. Example: ```SQL SET querymemlimit = 137438953472; SET forwardtomaster = true; SET time_zone = \"Asia/Shanghai\"; ``` A variable set by `SET GLOBAL <var_name> = xxx;` takes effect globally. Example: ```SQL SET GLOBAL querymemlimit = 137438953472; ``` The following variables only take effect globally. They cannot take effect for a single session, which means you must use `SET GLOBAL <varname> = xxx;` for these variables. If you try to set such a variable for a single session (`SET <varname> = xxx;`), an error is returned. 
activate_all_roles_on_login character_set_database default_rowset_type enable_query_queue_select enable_query_queue_statistic enable_query_queue_load init_connect lower_case_table_names license language query_cache_size query_queue_fresh_resource_usage_interval_ms query_queue_concurrency_limit query_queue_mem_used_pct_limit query_queue_cpu_used_permille_limit query_queue_pending_timeout_second query_queue_max_queued_queries system_time_zone version_comment version In addition, variable settings also support constant expressions, such as: ```SQL SET query_mem_limit = 10 * 1024 * 1024 * 1024; ``` ```SQL SET forward_to_master = concat('tr', 'u', 'e'); ``` In some scenarios, you may need to set variables specifically for certain queries. By using the `SET_VAR` hint, you can set session variables that will take effect only within a single statement. Example: ```sql SELECT /*+ SET_VAR(query_mem_limit = 8589934592) */ name FROM people ORDER BY name; SELECT /*+ SET_VAR(query_timeout = 1) */ sleep(3); ``` NOTE `SET_VAR` can only be placed after the `SELECT` keyword and enclosed in `/*+...*/`. You can also set multiple variables in a single statement."
},
{
"data": "Example: ```sql SELECT /*+ SET_VAR ( execmemlimit = 515396075520, query_timeout=10000000, batch_size=4096, parallelfragmentexecinstancenum=32 ) / FROM TABLE; ``` The variables are described in alphabetical order. Variables with the `global` label can only take effect globally. Other variables can take effect either globally or for a single session. Description*: Whether to enable all roles (including default roles and granted roles) for a StarRocks user when the user connects to the StarRocks cluster. If enabled (`true`), all roles of the user are activated at user login. This takes precedence over the roles set by . If disabled (`false`), the roles set by SET DEFAULT ROLE are activated. Default*: false Introduced in*: v3.0 If you want to activate the roles assigned to you in a session, use the command. Used for MySQL client compatibility. No practical usage. Used for MySQL client compatibility. No practical usage. Description*: Used to specify the number of rows of a single packet transmitted by each node during query execution. The default is 1024, i.e., every 1024 rows of data generated by the source node is packaged and sent to the destination node. A larger number of rows will improve the query throughput in large data volume scenarios, but may increase the query latency in small data volume scenarios. Also, it may increase the memory overhead of the query. We recommend to set `batch_size` between 1024 to 4096. Default*: 1024 Description*: Used to set the threshold for big queries. When the session variable `enableprofile` is set to `false` and the amount of time taken by a query exceeds the threshold specified by the variable `bigqueryprofilethreshold`, a profile is generated for that query. Note: In versions v3.1.5 to v3.1.7, as well as v3.2.0 to v3.2.2, we introduced the `bigqueryprofilesecondthreshold` for setting the threshold for big queries. In versions v3.1.8, v3.2.3, and subsequent releases, this parameter has been replaced by `bigqueryprofile_threshold` to offer more flexible configuration options. Default*: 0 Unit*: Second Data type*: String Introduced in*: v3.1 Description*: Used to specify the catalog to which the session belongs. Default*: default_catalog Data type*: String Introduced in*: v3.2.4 Description*: Controls how the CBO converts data from the DECIMAL type to the STRING type. If this variable is set to `true`, the logic built in v2.5.x and later versions prevails and the system implements strict conversion (namely, the system truncates the generated string and fills 0s based on the scale length). If this variable is set to `false`, the logic built in versions earlier than v2.5.x prevails and the system processes all valid digits to generate a string. Default*: true Introduced in*: v2.5.14 Description*: Whether to enable low cardinality optimization. After this feature is enabled, the performance of querying STRING columns improves by about three times. Default*: true Description: Specifies the data type used for data comparison between DECIMAL data and STRING data. The default value is `VARCHAR`, and DECIMAL is also a valid value. This variable takes effect only for `=` and `!=` comparison.* Data type*: String Introduced in*: v2.5.14 Description*: Specifies the maximum number of candidate materialized views allowed during query planning. Default*: 64 Introduced in*: v3.1.9, v3.2.5 Description*: Whether to enable JSON subfield pruning. This variable must be used with the BE dynamic parameter `enablejsonflat`. 
Otherwise, it may degrade JSON data query performance. Default*: false Data type*: Int Introduced in*: v3.3.0 Description*: Whether to enable query rewrite based on synchronous materialized views. Default*: true Introduced in*: v3.1.11, v3.2.5 Description*: Specifies the name of the asynchronous materialized views to include in query"
},
{
"data": "You can use this variable to limit the number of candidate materialized views and improve the query rewrite performance in the optimizer. This item takes effect prior to `queryexcludingmv_names`. Default*: empty Data type*: String Introduced in*: v3.1.11, v3.2.5 Description*: Specifies the name of the asynchronous materialized views to exclude from query execution. You can use this variable to limit the number of candidate materialized views and reduce the time of query rewrite in the optimizer. `queryincludingmv_names` takes effect prior to this item. Default*: empty Data type*: String Introduced in*: v3.1.11, v3.2.5 Description*: Specifies the maximum time that one materialized view rewrite rule can consume. When the threshold is reached, this rule will not be used for query rewrite. Default*: 1000 Unit*: ms Introduced in*: v3.1.9, v3.2.5 Description*: Whether to enable text-based materialized view rewrite. When this item is set to true, the optimizer will compare the query with the existing materialized views. A query will be rewritten if the abstract syntax tree of the materialized view's definition matches that of the query or its sub-query. Default*: true Introduced in*: v3.2.5, v3.3.0 Description*: Specifies the maximum number of times that the system checks whether a query's sub-query matches the materialized views' definition. Default*: 4 Introduced in*: v3.2.5, v3.3.0 Description*: Whether to enable query rewrite for queries against multiple tables in the optimizer's rule-based optimization phase. Enabling this feature will improve the robustness of the query rewrite. However, it will also increase the time consumption if the query misses the materialized view. Default*: true Introduced in*: v3.3.0 Description*: Whether to enable query rewrite for logical view-based materialized views. If this item is set to `true`, the logical view is used as a unified node to rewrite the queries against itself for better performance. If this item is set to `false`, the system transcribes the queries against logical views into queries against physical tables or materialized views and then rewrites them. Default*: false Introduced in*: v3.1.9, v3.2.5, v3.3.0 Description*: Whether to enable materialized view union rewrite. If this item is set to `true`, the system seeks to compensate the predicates using UNION ALL when the predicates in the materialized view cannot satisfy the query's predicates. Default*: true Introduced in*: v2.5.20, v3.1.9, v3.2.7, v3.3.0 Description*: Specifies to which FE nodes the query statements are routed. Valid values: `default`: Routes the query statement to the Leader FE or Follower FEs, depending on the Follower's replay progress. If the Follower FE nodes have not completed replay progress, queries will be routed to the Leader FE node. If the replay progress is complete, queries will be preferentially routed to the Follower FE node. `leader`: Routes the query statement to the Leader FE. `follower`: Routes the query statement to Follower FE. Default*: default Data type*: String Introduced in*: v2.5.20, v3.1.9, v3.2.7, v3.3.0 Data type*: StringThe character set supported by StarRocks. Only UTF8 (`utf8`) is supported. Default*: utf8 Data type*: String Description*: The maximum number of concurrent I/O tasks that can be issued by a scan operator during external table queries. The value is an integer. Currently, StarRocks can adaptively adjust the number of concurrent I/O tasks when querying external tables. 
This feature is controlled by the variable `enableconnectoradaptiveiotasks`, which is enabled by default. Default*: 16 Data type*: Int Introduced in*: v2.5 Description*: Specifies the compression algorithm used for writing data into Hive tables or Iceberg tables, or exporting data with Files(). Valid values*: `uncompressed`, `snappy`, `lz4`, `zstd`, and `gzip`. Default*: uncompressed Data type*: String Introduced in*: v3.2.3 Description*: The number of buckets for the COUNT DISTINCT column in a group-by-count-distinct query. This variable takes effect only when `enabledistinctcolumn_bucketization` is set to `true`. Default*: 1024 Introduced in*:"
},
{
"data": "Used to set the default storage format used by the storage engine of the computing node. The currently supported storage formats are `alpha` and `beta`. Description*: The default compression algorithm for table storage. Supported compression algorithms are `snappy, lz4, zlib, zstd`. Note that if you specified the `compression` property in a CREATE TABLE statement, the compression algorithm specified by `compression` takes effect. Default*: lz4_frame Introduced in*: v3.0 Description*: Used to control whether the Colocation Join is enabled. The default value is `false`, meaning the feature is enabled. When this feature is disabled, query planning will not attempt to execute Colocation Join. Default*: false Used to enable the streaming pre-aggregations. The default value is `false`, meaning it is enabled. Used for MySQL client compatibility. No practical usage. <!-- This variable is introduced to solve compatibility issues. Default value: `true`. --> Description*: Whether to adaptively adjust the number of concurrent I/O tasks when querying external tables. Default value is `true`. If this feature is not enabled, you can manually set the number of concurrent I/O tasks using the variable `connectoriotasksperscan_operator`. Default*: true Introduced in*: v2.5 Description*: Whether to enable bucketization for the COUNT DISTINCT colum in a group-by-count-distinct query. Use the `select a, count(distinct b) from t group by a;` query as an example. If the GROUP BY colum `a` is a low-cardinality column and the COUNT DISTINCT column `b` is a high-cardinality column which has severe data skew, performance bottleneck will occur. In this situation, you can split data in the COUNT DISTINCT column into multiple buckets to balance data and prevent data skew. You must use this variable with the variable `countdistinctcolumn_buckets`. You can also enable bucketization for the COUNT DISTINCT column by adding the `skew` hint to your query, for example, `select a,count(distinct [skew] b) from t group by a;`. Default*: false, which means this feature is disabled. Introduced in*: v2.5 Description*: Whether to enable resource group-level . Default*: false, which means this feature is disabled. Introduced in*: v3.1.4 Description*: Whether to cache pointers and partition names for Iceberg tables. From v3.2.1 to v3.2.3, this parameter is set to `true` by default, regardless of what metastore service is used. In v3.2.4 and later, if the Iceberg cluster uses AWS Glue as metastore, this parameter still defaults to `true`. However, if the Iceberg cluster uses other metastore service such as Hive metastore, this parameter defaults to `false`. Introduced in*: v3.2.1 Used to enable the strict mode when loading data using the INSERT statement. The default value is `true`, indicating the strict mode is enabled by default. For more information, see . Description*: Whether to allow StarRocks to rewrite queries in INSERT INTO SELECT statements. Default*: false, which means Query Rewrite in such scenarios is disabled by default. Introduced in*: v2.5.18, v3.0.9, v3.1.7, v3.2.2 Description: Controls whether to enable rule-based materialized view query rewrite. This variable is mainly used in single-table query rewrite. Default: true Data type*: Boolean Introduced in*: v2.5 Description*: Whether to enable short circuiting for queries. Default: `false`. 
If it is set to `true`, when the table uses hybrid row-column storage and the query meets the criteria (to evaluate whether the query is a point query): the conditional columns in the WHERE clause include all primary key columns, and the operators in the WHERE clause are `=` or `IN`, the query takes the short circuit to directly query the data stored in the row-by-row fashion. Default*: false Introduced in*: v3.2.3 Description*: Whether to enable intermediate result spilling. Default:"
},
{
"data": "If it is set to `true`, StarRocks spills the intermediate results to disk to reduce the memory usage when processing aggregate, sort, or join operators in queries. Default*: false Introduced in*: v3.0 Description*: Whether to enable intermediate result spilling to object storage. If it is set to `true`, StarRocks spills the intermediate results to the storage volume specified in `spillstoragevolume` after the capacity limit of the local disk is reached. For more information, see . Default*: false Introduced in*: v3.3.0 Description: Used to check whether the column name referenced in ORDER BY is ambiguous. When this variable is set to the default value `TRUE`, an error is reported for such a query pattern: Duplicate alias is used in different expressions of the query and this alias is also a sorting field in ORDER BY, for example, `select distinct t1. from tbl1 t1 order by t1.k1;`. The logic is the same as that in v2.3 and earlier. When this variable is set to `FALSE`, a loose deduplication mechanism is used, which processes such queries as valid SQL queries. Default*: true Introduced in*: v2.5.18 and v3.1.7 Description*: Specifies whether to send the profile of a query for analysis. The default value is `false`, which means no profile is required. By default, a profile is sent to the FE only when a query error occurs in the BE. Profile sending causes network overhead and therefore affects high concurrency. If you need to analyze the profile of a query, you can set this variable to `true`. After the query is completed, the profile can be viewed on the web page of the currently connected FE (address: `fehost:fehttpport/query`). This page displays the profiles of the latest 100 queries with `enableprofile` turned on. Default*: false Description*: Boolean value to enable query queues for loading tasks. Default*: false Description*: Whether to enable query queues for SELECT queries. Default*: false Description*: Whether to enable query queues for statistics queries. Default*: false Description*: Boolean value to control whether to direct multiple queries against the same tablet to a fixed replica. In scenarios where the table to query has a large number of tablets, this feature significantly improves query performance because the meta information and data of the tablet can be cached in memory more quickly. However, if there are some hotspot tablets, this feature may degrade the query performance because it directs the queries to the same BE, making it unable to fully use the resources of multiple BEs in high-concurrency scenarios. Default*: false, which means the system selects a replica for each query. Introduced in*: v2.5.6, v3.0.8, v3.1.4, and v3.2.0. Description*: Specifies whether to enable the Data Cache feature. After this feature is enabled, StarRocks caches hot data read from external storage systems into blocks, which accelerates queries and analysis. For more information, see . In versions prior to 3.2, this variable was named as `enablescanblock_cache`. Default*: false Introduced in*: v2.5 Description*: Specifies whether to cache data blocks read from external storage systems in StarRocks. If you do not want to cache data blocks read from external storage systems, set this variable to `false`. Default value: true. This variable is supported from 2.5. In versions prior to 3.2, this variable was named as `enablescanblock_cache`. Default*: true Introduced in*: v2.5 Description*: Whether to enable adaptive parallel scanning of tablets. 
After this feature is enabled, multiple threads can be used to scan one tablet by segment, increasing the scan concurrency. Default*: true Introduced in*: v2.3 Description*: Specifies whether to enable the Query Cache feature. Valid values: true and false. `true` specifies to enable this feature, and `false` specifies to disable this"
},
{
"data": "When this feature is enabled, it works only for queries that meet the conditions specified in the application scenarios of . Default*: false Introduced in*: v2.5 Description*: Specifies whether to enable adaptive parallelism for data loading. After this feature is enabled, the system automatically sets load parallelism for INSERT INTO and Broker Load jobs, which is equivalent to the mechanism of `pipeline_dop`. For a newly deployed v2.5 StarRocks cluster, the value is `true` by default. For a v2.5 cluster upgraded from v2.4, the value is `false`. Default*: false Introduced in*: v2.5 Description*: Specifies whether to enable the pipeline execution engine. `true` indicates enabled and `false` indicates the opposite. Default value: `true`. Default*: true Description*: Specifies whether to enable sorted streaming. `true` indicates sorted streaming is enabled to sort data in data streams. Default*: false Introduced in*: v2.5 Whether to enable global runtime filter (RF for short). RF filters data at runtime. Data filtering often occurs in the Join stage. During multi-table joins, optimizations such as predicate pushdown are used to filter data, in order to reduce the number of scanned rows for Join and the I/O in the Shuffle stage, thereby speeding up the query. StarRocks offers two types of RF: Local RF and Global RF. Local RF is suitable for Broadcast Hash Join and Global RF is suitable for Shuffle Join. Default value: `true`, which means global RF is enabled. If this feature is disabled, global RF does not take effect. Local RF can still work. Whether to enable multi-column global runtime filter. Default value: `false`, which means multi-column global RF is disabled. If a Join (other than Broadcast Join and Replicated Join) has multiple equi-join conditions: If this feature is disabled, only Local RF works. If this feature is enabled, multi-column Global RF takes effect and carries `multi-column` in the partition by clause. Description*: Whether to allow for sinking data to external tables of Hive. Default*: false Introduced in*: v3.2 Used for MySQL client compatibility. No practical usage. Description*: Whether to allow implicit conversions for all compound predicates and for all expressions in the WHERE clause. Default*: false Introduced in*: v3.1 Used to control whether the aggregation node enables streaming aggregation for computing. The default value is false, meaning the feature is not enabled. Used to specify whether some commands will be forwarded to the leader FE for execution. Alias: `forwardtomaster`. The default value is `false`, meaning not forwarding to the leader FE. There are multiple FEs in a StarRocks cluster, one of which is the leader FE. Normally, users can connect to any FE for full-featured operations. However, some information is only available on the leader FE. For example, if the SHOW BACKENDS command is not forwarded to the leader FE, only basic information (for example, whether the node is alive) can be viewed. Forwarding to the leader FE can get more detailed information including the node start time and last heartbeat time. The commands affected by this variable are as follows: SHOW FRONTENDS: Forwarding to the leader FE allows users to view the last heartbeat message. SHOW BACKENDS: Forwarding to the leader FE allows users to view the boot time, last heartbeat information, and disk capacity information. SHOW BROKER: Forwarding to the leader FE allows users to view the boot time and last heartbeat information. 
SHOW TABLET, ADMIN SHOW REPLICA DISTRIBUTION, ADMIN SHOW REPLICA STATUS: Forwarding to the leader FE allows users to view the tablet information stored in the metadata of the leader FE. Normally, the tablet information should be the same in the metadata of different FEs."
},
{
"data": "If an error occurs, you can use this method to compare the metadata of the current FE and the leader FE. Show PROC: Forwarding to the leader FE allows users to view the PROC information stored in the metadata. This is mainly used for metadata comparison. Description*: The maximum length of string returned by the function. Default*: 1024 Min value*: 4 Unit*: Characters Data type*: Long Description*: Used to control whether the data of the left table can be filtered by using the filter condition against the right table in the Join query. If so, it can reduce the amount of data that needs to be processed during the query. Default: `true` indicates the operation is allowed and the system decides whether the left table can be filtered. `false` indicates the operation is disabled. The default value is `true`. Used for MySQL client compatibility. No practical usage. Used for MySQL client compatibility. No practical usage. Description*: The number of concurrent I/O tasks that can be issued by a scan operator. Increase this value if you want to access remote storage systems such as HDFS or S3 but the latency is high. However, a larger value causes more memory consumption. Default*: 4 Data type*: Int Introduced in*: v2.5 Used for MySQL client compatibility. No practical usage. Description*: Displays the license of StarRocks. Default*: Apache License 2.0 Specifies the memory limit for the import operation. The default value is 0, meaning that this variable is not used and `querymemlimit` is used instead. This variable is only used for the `INSERT` operation which involves both query and import. If the user does not set this variable, the memory limit for both query and import will be set as `execmemlimit`. Otherwise, the memory limit for query will be set as `execmemlimit` and the memory limit for import will be as `loadmemlimit`. Other import methods such as `BROKER LOAD`, `STREAM LOAD` still use `execmemlimit` for memory limit. Specifies the maximum number of unqualified data rows that can be logged. Valid values: `0`, `-1`, and any non-zero positive integer. Default value: `0`. The value `0` specifies that data rows that are filtered out will not be logged. The value `-1` specifies that all data rows that are filtered out will be logged. A non-zero positive integer such as `n` specifies that up to `n` data rows that are filtered out can be logged on each BE. Used for MySQL client compatibility. No practical usage. Table names in StarRocks are case-sensitive. Specifies the query rewrite mode of asynchronous materialized views. Valid values: `disable`: Disable automatic query rewrite of asynchronous materialized views. `default` (Default value): Enable automatic query rewrite of asynchronous materialized views, and allow the optimizer to decide whether a query can be rewritten using the materialized view based on the cost. If the query cannot be rewritten, it directly scans the data in the base table. `defaultorerror`: Enable automatic query rewrite of asynchronous materialized views, and allow the optimizer to decide whether a query can be rewritten using the materialized view based on the cost. If the query cannot be rewritten, an error is returned. `force`: Enable automatic query rewrite of asynchronous materialized views, and the optimizer prioritizes query rewrite using the materialized view. If the query cannot be rewritten, it directly scans the data in the base table. 
`forceorerror`: Enable automatic query rewrite of asynchronous materialized views, and the optimizer prioritizes query rewrite using the materialized view. If the query cannot be rewritten, an error is returned. Description*: Used for compatibility with the JDBC connection pool C3P0. This variable specifies the maximum size of packets that can be transmitted between the client and"
},
{
"data": "Default*: 33554432 (32 MB). You can raise this value if the client reports \"PacketTooBigException\". Unit*: Byte Data type*: Int Description*: The maximum number of predicates that can be pushed down for a column. Default*: -1, indicating that the value in the `be.conf` file is used. If this variable is set to a value greater than 0, the value in `be.conf` is ignored. Data type*: Int Description*: The maximum number of scan key segmented by each query. Default*: -1, indicating that the value in the `be.conf` file is used. If this variable is set to a value greater than 0, the value in `be.conf` is ignored. Description*: The maximum levels of nested materialized views that can be used for query rewrite. Value range*: [1, +). The value of `1` indicates that only materialized views created on base tables can be used for query rewrite. Default*: 3 Data type*: Int Used for MySQL client compatibility. No practical usage. Used for MySQL client compatibility. No practical usage. Used for MySQL client compatibility. No practical usage. Description*: The timeout duration of the query optimizer. When the optimizer times out, an error is returned and the query is stopped, which affects the query performance. You can set this variable to a larger value based on your query or contact StarRocks technical support for troubleshooting. A timeout often occurs when a query has too many joins. Default*: 3000 Unit*: ms Used to set the number of exchange nodes that an upper-level node uses to receive data from a lower-level node in the execution plan. The default value is -1, meaning the number of exchange nodes is equal to the number of execution instances of the lower-level node. When this variable is set to be greater than 0 but smaller than the number of execution instances of the lower-level node, the number of exchange nodes equals the set value. In a distributed query execution plan, the upper-level node usually has one or more exchange nodes to receive data from the execution instances of the lower-level node on different BEs. Usually the number of exchange nodes is equal to the number of execution instances of the lower-level node. In some aggregation query scenarios where the amount of data decreases drastically after aggregation, you can try to modify this variable to a smaller value to reduce the resource overhead. An example would be running aggregation queries using the Duplicate Key table. Used to set the number of instances used to scan nodes on each BE. The default value is 1. A query plan typically produces a set of scan ranges. This data is distributed across multiple BE nodes. A BE node will have one or more scan ranges, and by default, each BE node's set of scan ranges is processed by only one execution instance. When machine resources suffice, you can increase this variable to allow more execution instances to process a scan range simultaneously for efficiency purposes. The number of scan instances determines the number of other execution nodes in the upper level, such as aggregation nodes and join nodes. Therefore, it increases the concurrency of the entire query plan execution. Modifying this variable will help improve efficiency, but larger values will consume more machine resources, such as CPU, memory, and disk IO. Description*: Used to control the mode of partial updates. Valid values: `auto` (default): The system automatically determines the mode of partial updates by analyzing the UPDATE statement and the columns"
},
{
"data": "`column`: The column mode is used for the partial updates, which is particularly suitable for the partial updates which involve a small number of columns and a large number of rows. For more information, see . Default*: auto Introduced in*: v3.1 Used for compatibility with MySQL JDBC versions 8.0.16 and above. No practical usage. Description*: Specifies whether the FEs distribute query execution plans to CN nodes. Valid values: `true`: indicates that the FEs distribute query execution plans to CN nodes. `false`: indicates that the FEs do not distribute query execution plans to CN nodes. Default*: false Introduced in*: v2.4 Description*: The parallelism of a pipeline instance, which is used to adjust the query concurrency. Default value: 0, indicating the system automatically adjusts the parallelism of each pipeline instance. You can also set this variable to a value greater than 0. Generally, set the value to half the number of physical CPU cores. From v3.0 onwards, StarRocks adaptively adjusts this variable based on query parallelism. Default*: 0 Data type*: Int Description*: Controls the level of the query profile. A query profile often has five layers: Fragment, FragmentInstance, Pipeline, PipelineDriver, and Operator. Different levels provide different details of the profile: 0: StarRocks combines metrics of the profile and shows only a few core metrics. 1: default value. StarRocks simplifies the profile and combines metrics of the profile to reduce profile layers. 2: StarRocks retains all the layers of the profile. The profile size is large in this scenario, especially when the SQL query is complex. This value is not recommended. Default*: 1 Data type*: Int Description*: The threshold for triggering the Passthrough mode. When the number of bytes or rows from the computation results of a specific tablet accessed by a query exceeds the threshold specified by `querycacheentrymaxbytes` or `querycacheentrymaxrows`, the query is switched to Passthrough mode. Valid values*: 0 to 9223372036854775807 Default*: 4194304 Unit*: Byte Introduced in*: v2.5 Description*: The upper limit of rows that can be cached. See the description in `querycacheentrymaxbytes`. Default value: . Default*: 409600 Introduced in*: v2.5 Description*: The upper limit of cardinality for GROUP BY in Query Cache. Query Cache is not enabled if the rows generated by GROUP BY exceeds this value. Default value: 5000000. If `querycacheentrymaxbytes` or `querycacheentrymaxrows` is set to 0, the Passthrough mode is used even when no computation results are generated from the involved tablets. Default*: 5000000 Data type*: Long Introduced in*: v2.5 Used for MySQL client compatibility. No practical use. Used for compatibility with JDBC connection pool C3P0. No practical use. Description*: Used to set the memory limit of a query on each BE node. The default value is 0, which means no limit for it. This item takes effect only after Pipeline Engine is enabled. When the `Memory Exceed Limit` error happens, you could try to increase this variable. Default*: 0, which means no limit. Unit*: Byte Description*: The upper limit of concurrent queries on a BE. It takes effect only after being set greater than `0`. Default*: 0 Data type*: Int Description: The upper limit of CPU usage permille (CPU usage 1000) on a BE. It takes effect only after being set greater than `0`. Value range*: [0, 1000] Default*: `0` Description*: The upper limit of queries in a queue. When this threshold is reached, incoming queries are rejected. 
It takes effect only after being set greater than `0`. Default*: `1024`. Description*: The upper limit of memory usage percentage on a BE. It takes effect only after being set greater than `0`. Value range*: [0, 1] Default*: 0 Description*: The maximum timeout of a pending query in a queue. When this threshold is reached, the corresponding query is rejected."
},
{
"data": "Default*: 300 Unit*: Second Description*: Used to set the query timeout in \"seconds\". This variable will act on all query statements in the current connection, as well as INSERT statements. The default value is 300 seconds. Value range*: [1, 259200] Default*: 300 Data type*: Int Unit*: Second Description*: The maximum number of IN predicates that can be used for Range partition pruning. Default value: 100. A value larger than 100 may cause the system to scan all tablets, which compromises the query performance. Default*: 100 Introduced in*: v3.0 Used to decide whether to rewrite count distinct queries to bitmapunioncount and hllunionagg. Description*: Whether to place GRF on Exchange Node after GRF is pushed down across the Exchange operator to a lower-level operator. The default value is `false`, which means GRF will not be placed on Exchange Node after it is pushed down across the Exchange operator to a lower-level operator. This prevents repetitive use of GRF and reduces the computation time. However, GRF delivery is a \"try-best\" process. If the lower-level operator fails to receive the GRF but the GRF is not placed on Exchange Node, data cannot be filtered, which compromises filter performance. `true` means GRF will still be placed on Exchange Node even after it is pushed down across the Exchange operator to a lower-level operator. Default*: false Description*: The maximum number of rows allowed for the Hash table based on which Bloom filter Local RF is generated. Local RF will not be generated if this value is exceeded. This variable prevents the generation of an excessively long Local RF. Default*: 1024000 Data type*: Int Description*: The time interval at which runtime profiles are reported. Default*: 10 Unit*: Second Data type*: Int Introduced in*: v3.1.0 The execution mode of intermediate result spilling. Valid values: `auto`: Spilling is automatically triggered when the memory usage threshold is reached. `force`: StarRocks forcibly executes spilling for all relevant operators, regardless of memory usage. This variable takes effect only when the variable `enable_spill` is set to `true`. Description*: The storage volume with which you want to store the intermediate results of queries that triggered spilling. For more information, see . Default*: Empty string Introduced in*: v3.3.0 Used for compatibility with the JDBC connection pool C3P0. No practical usage. Description*: The SQL dialect that is used. For example, you can run the `set sql_dialect = 'trino';` command to set the SQL dialect to Trino, so you can use Trino-specific SQL syntax and functions in your queries. > NOTICE > > After you configure StarRocks to use the Trino dialect, identifiers in queries are not case-sensitive by default. Therefore, you must specify names in lowercase for your databases and tables at database and table creation. If you specify database and table names in uppercase, queries against these databases and tables will fail. Data type*: StarRocks Introduced in*: v3.0 Used to specify the SQL mode to accommodate certain SQL dialects. Valid values include: `PIPESASCONCAT`: The pipe symbol `|` is used to concatenate strings, for example, `select 'hello ' || 'world'`. `ONLYFULLGROUP_BY` (Default): The SELECT LIST can only contain GROUP BY columns or aggregate functions. `ALLOWTHROWEXCEPTION`: returns an error instead of NULL when type conversion fails. `FORBIDINVALIDDATE`: prohibits invalid dates. `MODEDOUBLELITERAL`: interprets floating-point types as DOUBLE rather than DECIMAL. 
`SORT_NULLS_LAST`: places NULL values at the end after sorting. `ERROR_IF_OVERFLOW`: returns an error instead of NULL in the case of arithmetic overflow. Currently, only the DECIMAL data type supports this option. `GROUP_CONCAT_LEGACY`: uses the `group_concat` syntax of v2.5 and earlier. This option is supported from v3.0.9 and"
},
{
"data": "You can set only one SQL mode, for example: ```SQL set sqlmode = 'PIPESAS_CONCAT'; ``` Or, you can set multiple modes at a time, for example: ```SQL set sqlmode = 'PIPESASCONCAT,ERRORIFOVERFLOW,GROUPCONCAT_LEGACY'; ``` Used for MySQL client compatibility. No practical usage. Used for MySQL client compatibility. No practical usage. Description*: Used to adjust the parallelism of statistics collection tasks that can run on BEs. Default value: 1. You can increase this value to speed up collection tasks. Default*: 1 Data type*: Int The types of engines supported by StarRocks: olap (default): StarRocks system-owned engine. mysql: MySQL external tables. broker: Access external tables through a broker program. elasticsearch or es: Elasticsearch external tables. hive: Hive external tables. iceberg: Iceberg external tables, supported from v2.1. hudi: Hudi external tables, supported from v2.2. jdbc: external table for JDBC-compatible databases, supported from v2.3. Used to specify the preaggregation mode for the first phase of GROUP BY. If the preaggregation effect in the first phase is not satisfactory, you can use the streaming mode, which performs simple data serialization before streaming data to the destination. Valid values: `auto`: The system first tries local preaggregation. If the effect is not satisfactory, it switches to the streaming mode. This is the default value. `force_preaggregation`: The system directly performs local preaggregation. `force_streaming`: The system directly performs streaming. Used to display the time zone of the current system. Cannot be changed. Used to set the time zone of the current session. The time zone can affect the results of certain time functions. Description*: Used to control where to output the logs of query trace profiles. Valid values: `command`: Return query trace profile logs as the Explain String* after executing TRACE LOGS. `file`: Return query trace profile logs in the FE log file fe.log* with the class name being `FileLogTracer`. For more information on query trace profile, see . Default*: `command` Data type*: String Introduced in*: v3.2.0 Description*: Used for MySQL 5.8 compatibility. The alias is `txreadonly`. This variable specifies the transaction access mode. `ON` indicates read only and `OFF` indicates readable and writable. Default*: OFF Introduced in*: v2.5.18, v3.0.9, v3.1.7 Used for MySQL client compatibility. No practical usage. The alias is `transaction_isolation`. Description*: The maximum number of CN nodes that can be used. This variable is valid when `prefercomputenode=true`. Valid values: `-1`: indicates that all CN nodes are used. `0`: indicates that no CN nodes are used. Default*: -1 Data type*: Int Introduced in*: v2.4 Used to control the query to fetch data using the rollup index of the segment v2 storage format. This variable is used for validation when going online with segment v2. It is not recommended for other cases. Used to control whether the vectorized engine is used to execute queries. A value of `true` indicates that the vectorized engine is used, otherwise the non-vectorized engine is used. The default is `true`. This feature is enabled by default from v2.4 onwards and therefore, is deprecated. The MySQL server version returned to the client. The value is the same as FE parameter `mysqlserverversion`. The StarRocks version. Cannot be changed. Description*: The number of seconds the server waits for activity on a non-interactive connection before closing it. 
If a client does not interact with StarRocks for this length of time, StarRocks will actively close the connection. Default*: 28800 (8 hours). Unit*: Second Data type*: Int Description*: Used to specify how columns are matched when StarRocks reads ORC files from Hive. The default value is `false`, which means columns in ORC files are read based on their ordinal positions in the Hive table definition. If this variable is set to"
}
] |
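The session variables described in the record above can be exercised directly from a MySQL client. The following is a minimal, hedged sketch: the variable names (`sql_dialect`, `query_timeout`) come from the descriptions above, while the values chosen are illustrative only and availability depends on the StarRocks version in use.

```sql
-- Switch the current session to the Trino dialect (as described above).
SET sql_dialect = 'trino';

-- Raise the per-session query timeout to 10 minutes (the unit is seconds).
SET query_timeout = 600;

-- Inspect the resulting values; the LIKE pattern is just an example.
SHOW VARIABLES LIKE '%timeout%';
```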
{
"category": "App Definition and Development",
"file_name": "columns.md",
"project_name": "StarRocks",
"subcategory": "Database"
} | [
{
"data": "displayed_sidebar: \"English\" `columns` contains information about all table columns (or view columns). :::note The metadata of is not recorded in `columns`. You can access it by executing `SHOW PROC '/dbs/db/table/index_schema'`. ::: The following fields are provided in `columns`: | Field | Description | | | | | TABLE_CATALOG | The name of the catalog to which the table containing the column belongs. This value is always `NULL`. | | TABLE_SCHEMA | The name of the database to which the table containing the column belongs. | | TABLE_NAME | The name of the table containing the column. | | COLUMN_NAME | The name of the column. | | ORDINAL_POSITION | The ordinal position of the column within the table. | | COLUMN_DEFAULT | The default value for the column. This is `NULL` if the column has an explicit default of `NULL`, or if the column definition includes no `DEFAULT` clause. | | IS_NULLABLE | The column nullability. The value is `YES` if `NULL` values can be stored in the column, `NO` if not. | | DATATYPE | The column data type. The `DATATYPE` value is the type name only with no other information. The `COLUMN_TYPE` value contains the type name and possibly other information such as the precision or length. | | CHARACTERMAXIMUMLENGTH | For string columns, the maximum length in characters. | | CHARACTEROCTETLENGTH | For string columns, the maximum length in bytes. | | NUMERIC_PRECISION | For numeric columns, the numeric precision. | | NUMERIC_SCALE | For numeric columns, the numeric scale. | | DATETIME_PRECISION | For temporal columns, the fractional seconds precision. | | CHARACTERSETNAME | For character string columns, the character set name. | | COLLATION_NAME | For character string columns, the collation name. | | COLUMNTYPE | The column data type.<br />The `DATATYPE` value is the type name only with no other"
},
{
"data": "The `COLUMN_TYPE` value contains the type name and possibly other information such as the precision or length. | | COLUMNKEY | Whether the column is indexed:<ul><li>If `COLUMNKEY` is empty, the column either is not indexed or is indexed only as a secondary column in a multiple-column, nonunique index.</li><li>If `COLUMNKEY` is `PRI`, the column is a `PRIMARY KEY` or is one of the columns in a multiple-column `PRIMARY KEY`.</li><li>If `COLUMNKEY` is `UNI`, the column is the first column of a `UNIQUE` index. (A `UNIQUE` index permits multiple `NULL` values, but you can tell whether the column permits `NULL` by checking the `Null` column.)</li><li>If `COLUMNKEY` is `DUP`, the column is the first column of a nonunique index in which multiple occurrences of a given value are permitted within the column.</li></ul>If more than one of the `COLUMNKEY` values applies to a given column of a table, `COLUMN_KEY` displays the one with the highest priority, in the order `PRI`, `UNI`, `DUP`.<br />A `UNIQUE` index may be displayed as `PRI` if it cannot contain `NULL` values and there is no `PRIMARY KEY` in the table. A `UNIQUE` index may display as `MUL` if several columns form a composite `UNIQUE` index; although the combination of the columns is unique, each column can still hold multiple occurrences of a given value. | | EXTRA | Any additional information that is available about a given column. | | PRIVILEGES | The privileges you have for the column. | | COLUMN_COMMENT | Any comment included in the column definition. | | COLUMN_SIZE | | | DECIMAL_DIGITS | | | GENERATION_EXPRESSION | For generated columns, displays the expression used to compute column values. Empty for nongenerated columns. | | SRS_ID | This value applies to spatial columns. It contains the column `SRID` value that indicates the spatial reference system for values stored in the column. |"
}
] |
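A short illustrative query against `information_schema.columns` using the fields documented above; the database name `mydb` and table name `orders` are placeholders introduced for the example, not names taken from the document:

```sql
SELECT COLUMN_NAME, DATA_TYPE, IS_NULLABLE, COLUMN_KEY, COLUMN_DEFAULT
FROM information_schema.columns
WHERE TABLE_SCHEMA = 'mydb'    -- hypothetical database name
  AND TABLE_NAME = 'orders'    -- hypothetical table name
ORDER BY ORDINAL_POSITION;
```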
{
"category": "App Definition and Development",
"file_name": "external_resources.md",
"project_name": "Flink",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: External Resources weight: 2 type: docs aliases: /deployment/advanced/external_resources.html /ops/external_resources.html <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> In addition to CPU and memory, many workloads also need some other resources, e.g. GPUs for deep learning. To support external resources, Flink provides an external resource framework. The framework supports requesting various types of resources from the underlying resource management systems (e.g., Kubernetes), and supplies information needed for using these resources to the operators. Different resource types can be supported. You can either leverage built-in plugins provided by Flink (currently only for GPU support), or implement your own plugins for custom resource types. In general, the external resource framework does two things: Set the corresponding fields of the resource requests (for requesting resources from the underlying system) with respect to your configuration. Provide operators with the information needed for using the resources. When deployed on resource management systems (Kubernetes / Yarn), the external resource framework will ensure that the allocated pod/container will contain the desired external resources. Currently, many resource management systems support external resources. For example, Kubernetes supports GPU, FPGA, etc. through its mechanism since v1.10, and Yarn supports GPU and FPGA resources since 2.10 and 3.1. In Standalone mode, the user has to ensure that the external resources are available. The external resource framework will provide the corresponding information to operators. The external resource information, which contains the basic properties needed for using the resources, is generated by the configured external resource drivers. To enable an external resource with the external resource framework, you need to: Prepare the external resource plugin. Set configurations for the external resource. Get the external resource information from `RuntimeContext` and use it in your operators. You need to prepare the external resource plugin and put it into the `plugins/` folder of your Flink distribution, see . Apache Flink provides a first-party . You can also . First, you need to add resource names for all the external resource types to the external resource list (with the configuration key external-resources) with delimiter \";\", e.g. \"external-resources: gpu;fpga\" for two external resources \"gpu\" and \"fpga\". Only the \\<resource_name\\> defined here will go into effect in the external resource framework. For each external resource, you could configure the below options. 
The \\<resource_name\\> in all the below configuration options corresponds to the name listed in the external resource list: Amount (`external-resource.<resource_name>.amount`): This is the quantity of the external resource that should be requested from the external system. Config key in Yarn (`external-resource.<resource_name>.yarn.config-key`): optional. If configured, the external resource framework will add this key to the resource profile of container requests for Yarn. The value will be set to the value of `external-resource.<resource_name>.amount`."
},
{
"data": "Config key in Kubernetes (`external-resource.<resource_name>.kubernetes.config-key`): optional. If configured, external resource framework will add `resources.limits.<config-key>` and `resources.requests.<config-key>` to the main container spec of TaskManager and set the value to the value of `external-resource.<resource_name>.amount`. Driver Factory (`external-resource.<resource_name>.driver-factory.class`): optional. Defines the factory class name for the external resource identified by \\<resource_name\\>. If configured, the factory will be used to instantiate drivers in the external resource framework. If not configured, the requested resource will still exist in the `TaskManager` as long as the relevant options are configured. However, the operator will not get any information of the resource from `RuntimeContext` in that case. Driver Parameters (`external-resource.<resource_name>.param.<param>`): optional. The naming pattern of custom config options for the external resource specified by \\<resource_name\\>. Only the configurations that follow this pattern will be passed into the driver factory of that external resource. An example configuration that specifies two external resources: ```bash external-resources: gpu;fpga # Define two external resources, \"gpu\" and \"fpga\". external-resource.gpu.driver-factory.class: org.apache.flink.externalresource.gpu.GPUDriverFactory # Define the driver factory class of gpu resource. external-resource.gpu.amount: 2 # Define the amount of gpu resource per TaskManager. external-resource.gpu.param.discovery-script.args: --enable-coordination # Define the custom param discovery-script.args which will be passed into the gpu driver. external-resource.fpga.driver-factory.class: org.apache.flink.externalresource.fpga.FPGADriverFactory # Define the driver factory class of fpga resource. external-resource.fpga.amount: 1 # Define the amount of fpga resource per TaskManager. external-resource.fpga.yarn.config-key: yarn.io/fpga # Define the corresponding config key of fpga in Yarn. ``` To use the resources, operators need to get the `ExternalResourceInfo` set from the `RuntimeContext`. `ExternalResourceInfo` wraps the information needed for using the resource, which can be retrieved with `getProperty`. What properties are available and how to access the resource with the properties depends on the specific plugin. Operators can get the `ExternalResourceInfo` set of a specific external resource from `RuntimeContext` or `FunctionContext` by `getExternalResourceInfos(String resourceName)`. The `resourceName` here should have the same value as the name configured in the external resource list. It can be used as follows: {{< tabs \"81f076a6-0048-4fb0-88b8-be8508e969c8\" >}} {{< tab \"Java\" >}} ```java public class ExternalResourceMapFunction extends RichMapFunction<String, String> { private static final String RESOURCE_NAME = \"foo\"; @Override public String map(String value) { Set<ExternalResourceInfo> externalResourceInfos = getRuntimeContext().getExternalResourceInfos(RESOURCE_NAME); List<String> addresses = new ArrayList<>(); externalResourceInfos.iterator().forEachRemaining(externalResourceInfo -> addresses.add(externalResourceInfo.getProperty(\"address\").get())); // map function with addresses. // ... 
} } ``` {{< /tab >}} {{< tab \"Scala\" >}} ```scala class ExternalResourceMapFunction extends RichMapFunction[(String, String)] { var RESOURCE_NAME = \"foo\" override def map(value: String): String = { val externalResourceInfos = getRuntimeContext().getExternalResourceInfos(RESOURCE_NAME) val addresses = new util.ArrayList[String] externalResourceInfos.asScala.foreach( externalResourceInfo => addresses.add(externalResourceInfo.getProperty(\"address\").get())) // map function with addresses. // ... } } ``` {{< /tab >}} {{< /tabs >}} Each `ExternalResourceInfo` contains one or more properties with keys representing the different dimensions of the resource. You could get all valid keys by `ExternalResourceInfo#getKeys`. {{< hint info >}} Note: Currently, the information returned by RuntimeContext#getExternalResourceInfos is available to all the operators. {{< /hint >}} To implement a plugin for your custom resource type, you need to: Add your own external resource driver by implementing the `org.apache.flink.api.common.externalresource.ExternalResourceDriver` interface. Add a driver factory, which instantiates the driver, by implementing the `org.apache.flink.api.common.externalresource.ExternalResourceDriverFactory`. Add a service entry. Create a file `META-INF/services/org.apache.flink.api.common.externalresource.ExternalResourceDriverFactory` which contains the class name of your driver factory class (see the docs for more"
},
{
"data": "For example, to implement a plugin for external resource named \"FPGA\", you need to implement `FPGADriver` and `FPGADriverFactory` first: {{< tabs \"6d799b04-fa41-42b6-9933-cb1fe288cd81\" >}} {{< tab \"Java\" >}} ```java public class FPGADriver implements ExternalResourceDriver { @Override public Set<FPGAInfo> retrieveResourceInfo(long amount) { // return the information set of \"FPGA\" } } public class FPGADriverFactory implements ExternalResourceDriverFactory { @Override public ExternalResourceDriver createExternalResourceDriver(Configuration config) { return new FPGADriver(); } } // Also implement FPGAInfo which contains basic properties of \"FPGA\" resource. public class FPGAInfo implements ExternalResourceInfo { @Override public Optional<String> getProperty(String key) { // return the property with the given key. } @Override public Collection<String> getKeys() { // return all property keys. } } ``` {{< /tab >}} {{< tab \"Scala\" >}} ```scala class FPGADriver extends ExternalResourceDriver { override def retrieveResourceInfo(amount: Long): Set[FPGAInfo] = { // return the information set of \"FPGA\" } } class FPGADriverFactory extends ExternalResourceDriverFactory { override def createExternalResourceDriver(config: Configuration): ExternalResourceDriver = { new FPGADriver() } } // Also implement FPGAInfo which contains basic properties of \"FPGA\" resource. class FPGAInfo extends ExternalResourceInfo { override def getProperty(key: String): Option[String] = { // return the property with the given key. } override def getKeys(): util.Collection[String] = { // return all property keys. } } ``` {{< /tab >}} {{< /tabs >}} Create a file with name `org.apache.flink.api.common.externalresource.ExternalResourceDriverFactory` in `META-INF/services/` and write the factory class name (e.g. `your.domain.FPGADriverFactory`) to it. Then, create a jar which includes `FPGADriver`, `FPGADriverFactory`, `META-INF/services/` and all the external dependencies. Make a directory in `plugins/` of your Flink distribution with an arbitrary name, e.g. \"fpga\", and put the jar into this directory. See for more details. {{< hint info >}} Note: External resources are shared by all operators running on the same machine. The community might add external resource isolation in a future release. {{< /hint >}} Currently, Flink supports GPUs as external resources. We provide a first-party plugin for GPU resources. The plugin leverages a discovery script to discover indexes of GPU devices, which can be accessed from the resource information via the property \"index\". We provide a default discovery script that can be used to discover NVIDIA GPUs. You can also provide your custom script. We provide {{< gh_link file=\"/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/gpu/MatrixVectorMul.java\" name=\"an example\" >}} which shows how to use the GPUs to do matrix-vector multiplication in Flink. {{< hint info >}} Note: Currently, for all the operators, RuntimeContext#getExternalResourceInfos returns the same set of resource information. That means, the same set of GPU devices are always accessible to all the operators running in the same TaskManager. There is no operator level isolation at the moment. {{< /hint >}} To make GPU resources accessible, certain prerequisites are needed depending on your environment: For standalone mode, administrators should ensure the NVIDIA driver is installed and GPU resources are accessible on all the nodes in the cluster. 
For Yarn deployment, administrators should configure the Yarn cluster to enable . Notice the required Hadoop version is 2.10+ or 3.1+. For Kubernetes deployment, administrators should make sure the NVIDIA GPU device plugin is installed. Notice the required Kubernetes version is 1.10+. At the moment, Kubernetes only supports NVIDIA GPU and AMD GPU. Flink only provides a discovery script for NVIDIA GPUs, but you can provide a custom discovery script for AMD GPUs yourself, see . As mentioned in , you also need to do two things to enable GPU resources: Configure the GPU resource. Get the information of GPU resources, which contains the GPU index as property with key \"index\", in operators."
},
{
"data": "For the GPU plugin, you need to specify the common external resource configurations: `external-resources`: You need to append your resource name (e.g. gpu) for GPU resources to it. `external-resource.<resource_name>.amount`: The amount of GPU devices per TaskManager. `external-resource.<resource_name>.yarn.config-key`: For Yarn, the config key of GPU is `yarn.io/gpu`. Notice that Yarn only supports NVIDIA GPU at the moment. `external-resource.<resource_name>.kubernetes.config-key`: For Kubernetes, the config key of GPU is `<vendor>.com/gpu`. Currently, \"nvidia\" and \"amd\" are the two supported vendors. Notice that if you use AMD GPUs, you need to provide a discovery script yourself, see . external-resource.<resource_name>.driver-factory.class: Should be set to org.apache.flink.externalresource.gpu.GPUDriverFactory. In addition, there are some specific configurations for the GPU plugin: `external-resource.<resource_name>.param.discovery-script.path`: The path of the . It can either be an absolute path, or a relative path to `FLINK_HOME` when defined or current directory otherwise. If not explicitly configured, the default script will be used. `external-resource.<resource_name>.param.discovery-script.args`: The arguments passed to the discovery script. For the default discovery script, see for the available parameters. An example configuration for GPU resource: ```bash external-resources: gpu external-resource.gpu.driver-factory.class: org.apache.flink.externalresource.gpu.GPUDriverFactory # Define the driver factory class of gpu resource. external-resource.gpu.amount: 2 # Define the amount of gpu resource per TaskManager. external-resource.gpu.param.discovery-script.path: plugins/external-resource-gpu/nvidia-gpu-discovery.sh external-resource.gpu.param.discovery-script.args: --enable-coordination # Define the custom param \"discovery-script.args\" which will be passed into the gpu driver. external-resource.gpu.yarn.config-key: yarn.io/gpu # for Yarn external-resource.gpu.kubernetes.config-key: nvidia.com/gpu # for Kubernetes ``` The `GPUDriver` leverages a discovery script to discover GPU resources and generate the GPU resource information. We provide a default discovery script for NVIDIA GPU, located at `plugins/external-resource-gpu/nvidia-gpu-discovery.sh` of your Flink distribution. The script gets the indexes of visible GPU resources through the `nvidia-smi` command. It tries to return the required amount (specified by `external-resource.<resource_name>.amount`) of GPU indexes in a list, and exit with non-zero if the amount cannot be satisfied. For standalone mode, multiple TaskManagers might be co-located on the same machine, and each GPU device is visible to all the TaskManagers. The default discovery script supports a coordination mode, in which it leverages a coordination file to synchronize the allocation state of GPU devices and ensure each GPU device can only be used by one TaskManager process. The relevant arguments are: `--enable-coordination-mode`: Enable the coordination mode. By default the coordination mode is disabled. `--coordination-file filePath`: The path of the coordination file used to synchronize the allocation state of GPU resources. The default path is `/var/tmp/flink-gpu-coordination`. {{< hint info >}} Note: The coordination mode only ensures that a GPU device is not shared by multiple TaskManagers of the same Flink cluster. 
Please be aware that another Flink cluster (with a different coordination file) or a non-Flink application can still use the same GPU devices. {{< /hint >}} You can also provide a discovery script to address your custom requirements, e.g. discovering AMD GPU. Please make sure the path of your custom script is accessible to Flink and configured (`external-resource.<resource_name>.param.discovery-script.path`) correctly. The contract of the discovery script: `GPUDriver` passes the amount (specified by `external-resource.<resource_name>.amount`) as the first argument into the script. The user-defined arguments in `external-resource.<resource_name>.param.discovery-script.args` would be appended after it. The script should return a list of the available GPU indexes, split by a comma. Whitespace only indexes will be ignored. The script can also suggest that the discovery is not properly performed, by exiting with non-zero. In that case, no gpu information will be provided to operators."
}
] |
{
"category": "App Definition and Development",
"file_name": "space.md",
"project_name": "StarRocks",
"subcategory": "Database"
} | [
{
"data": "displayed_sidebar: \"English\" Returns a string of the specified number of spaces. ```Haskell space(x); ``` `x`: the number of spaces to return. The supported data type is INT. Returns a value of the VARCHAR type. ```Plain Text mysql> select space(6); +-+ | space(6) | +-+ | | +-+ 1 row in set (0.00 sec) ``` SPACE"
}
] |
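As a small follow-up, `space()` is typically combined with string concatenation for padding. The snippet below is an assumed usage sketch; it relies on the standard `concat` function rather than anything specific to this page:

```sql
-- Put three spaces between two words.
SELECT CONCAT('hello', SPACE(3), 'world');
-- Expected result: 'hello   world'
```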
{
"category": "App Definition and Development",
"file_name": "5187-luajit-metrics.md",
"project_name": "Tarantool",
"subcategory": "Database"
} | [
{
"data": "Status*: In progress Start date*: 17-07-2020 Authors*: Sergey Kaplun @Buristan [email protected], Igor Munkin @igormunkin [email protected], Sergey Ostanevich @sergos [email protected] Issues*: LuaJIT metrics provide extra information about the Lua state. They consist of GC metrics (overall amount of objects and memory usage), JIT stats (both related to the compiled traces and the engine itself), string hash hits/misses. One can be curious about their application performance. We are going to provide various metrics about the several platform subsystems behaviour. GC pressure produced by user code can weight down all application performance. Irrelevant traces compiled by the JIT engine can just burn CPU time with no benefits as a result. String hash collisions can lead to DoS caused by a single request. All these metrics should be well monitored by users wanting to improve the performance of their application. The additional header <lmisclib.h> is introduced to extend the existing LuaJIT C API with new interfaces. The first function provided via this header is the following: ```c / API for obtaining various platform metrics. / LUAMISCAPI void luaMmetrics(luaState *L, struct luamMetrics *metrics); ``` This function fills the structure pointed to by `metrics` with the corresponding metrics related to Lua state anchored to the given coroutine `L`. The `struct luam_Metrics` has the following definition: ```c struct luam_Metrics { /* Number of strings being interned (i.e. the string with the same payload is found, so a new one is not created/allocated). */ sizet strhashhit; / Total number of strings allocations during the platform lifetime. / sizet strhashmiss; / Amount of allocated string objects. / sizet gcstrnum; / Amount of allocated table objects. / sizet gctabnum; / Amount of allocated udata objects. / sizet gcudatanum; / Amount of allocated cdata objects. / sizet gccdatanum; / Memory currently allocated. / sizet gctotal; / Total amount of freed memory. / sizet gcfreed; / Total amount of allocated memory. / sizet gcallocated; / Count of incremental GC steps per state. / sizet gcsteps_pause; sizet gcsteps_propagate; sizet gcsteps_atomic; sizet gcsteps_sweepstring; sizet gcsteps_sweep; sizet gcsteps_finalize; /* Overall number of snap restores (amount of guard assertions leading to stopping trace executions). */ sizet jitsnap_restore; / Overall number of abort traces. / sizet jittrace_abort; / Total size of all allocated machine code areas. / sizet jitmcode_size; / Amount of JIT traces. / unsigned int jittracenum; }; ``` Couple of words about how metrics are collected: `strhash_*` -- whenever the string with the same payload is found, so a new one is not created/allocated, there is incremented `strhash_hit` counter, if a new one string created/allocated then `strhash_miss` is incremented instead. `gc*num`, `jittrace_num` -- corresponding counter is incremented whenever a new object is allocated. When object is collected by GC its counter is decremented. `gctotal`, `gcallocated`, `gc_freed` -- any time when allocation function is called `gcallocated` and/or `gcfreed` is increased and `gc_total` is increased when memory is allocated or reallocated, is decreased when memory is"
},
{
"data": "`gcsteps*` -- corresponding counter is incremented whenever Garbage Collector starts to execute an incremental step of garbage collection. `jitsnaprestore` -- whenever JIT machine exits from the trace and restores interpreter state `jitsnaprestore` counter is incremented. `jittraceabort` -- whenever JIT compiler can't record the trace in case NYI BC or builtins this counter is incremented. `jitmcodesize` -- whenever new MCode area is allocated `jitmcodesize` is increased at corresponding size in bytes. Sets to 0 when all mcode area is freed. When a trace is collected by GC this value doesn't change. This area will be reused later for other traces. MCode area is linked with `jit_State` not with trace by itself. Traces just reserve MCode area that needed. All metrics are collected throughout the platform uptime. These metrics increase monotonically and can overflow: `strhash_hit` `strhash_miss` `gc_freed` `gc_allocated` `gcstepspause` `gcstepspropagate` `gcstepsatomic` `gcstepssweepstring` `gcstepssweep` `gcstepsfinalize` `jitsnaprestore` `jittraceabort` They make sense only with comparing with their value from a previous `luaM_metrics()` call. There is also a complement introduced for Lua space -- `misc.getmetrics()`. This function is just a wrapper for `luaM_metrics()` returning a Lua table with the similar metrics. All returned values are presented as numbers with cast to double, so there is a corresponding precision loss. Function usage is quite simple: ``` $ ./src/tarantool Tarantool 2.5.0-267-gbf047ad44 type 'help' for interactive help tarantool> misc.getmetrics() gc_freed: 2245853 strhash_hit: 53965 gcstepsatomic: 6 strhash_miss: 6879 gcstepssweepstring: 17920 gc_strnum: 5759 gc_tabnum: 1813 gc_cdatanum: 89 jitsnaprestore: 0 gc_total: 1370812 gc_udatanum: 17 gcstepsfinalize: 0 gc_allocated: 3616665 jittracenum: 0 gcstepssweep: 297 jittraceabort: 0 jitmcodesize: 0 gcstepspropagate: 10181 gcstepspause: 7 ... ``` This section describes small example of metrics usage. For example amount of `strhash_misses` can be shown for tracking of new string objects allocations. For example if we add code like: ```lua local function shardedstoragefunc(storagename, funcname) return 'shardedstorage.storages.' .. storagename .. '.' .. func_name end ``` increase in slope curve of `strhash_misses` means, that after your changes there are more new strings allocating at runtime. Of course slope curve of `strhashmisses` should be less than slope curve of `strhashhits`. Slope curves of `gcfreed` and `gcallocated` can be used for analysis of GC pressure of your application (less is better). Also we can check some hacky optimization with these metrics. For example let's assume that we have this code snippet: ```lua local old_metrics = misc.getmetrics() local t = {} for i = 1, 513 do t[i] = i end local new_metrics = misc.getmetrics() local diff = newmetrics.gcallocated - oldmetrics.gcallocated ``` `diff` equals to 18879 after running of this chunk. But if we change table initialization to ```lua local table_new = require \"table.new\" local old_metrics = misc.getmetrics() local t = table_new(513,0) for i = 1, 513 do t[i] = i end local new_metrics = misc.getmetrics() local diff = newmetrics.gcallocated - oldmetrics.gcallocated ``` `diff` shows us only 5895. Slope curves of `gcsteps*` can be used for tracking of GC pressure too. For long time observations you will see periodic increment for `gcsteps*` metrics -- for example longer period of `gcstepsatomic` increment is"
},
{
"data": "Also additional amount of `gcstepspropagate` in one period can be used to indirectly estimate amount of objects. These values also correlate with the step multiplier of the GC. The amount of incremental steps can grow, but one step can process a small amount of objects. So these metrics should be considered together with GC setup. Amount of `gc_*num` is useful for control of memory leaks -- total amount of these objects should not growth nonstop (you can also track `gc_total` for this). Also `jitmcodesize` can be used for tracking amount of allocated memory for traces machine code. Slope curves of `jittraceabort` shows how many times trace hasn't been compiled when the attempt was made (less is better). Amount of `gctracenum` is shown how much traces was generated (usually more is better). And the last one -- `gcsnaprestores` can be used for estimation when LuaJIT is stop trace execution. If slope curves of this metric growth after changing old code it can mean performance degradation. Assumes that we have code like this: ```lua local function foo(i) return i <= 5 and i or tostring(i) end -- minstitch option needs to emulate nonstitching behaviour jit.opt.start(0, \"hotloop=2\", \"hotexit=2\", \"minstitch=15\") local sum = 0 local old_metrics = misc.getmetrics() for i = 1, 10 do sum = sum + foo(i) end local new_metrics = misc.getmetrics() local diff = newmetrics.jitsnaprestore - oldmetrics.jitsnaprestore ``` `diff` equals 3 (1 side exit on loop end, 2 side exits to the interpreter before trace gets hot and compiled) after this chunk of code. And now we decide to change `foo` function like this: ```lua local function foo(i) -- math.fmod is not yet compiled! return i <= 5 and i or math.fmod(i, 11) end ``` `diff` equals 6 (1 side exit on loop end, 2 side exits to the interpreter before trace gets hot and compiled an 3 side exits from the root trace could not get compiled) after the same chunk of code. Benchmarks were taken from repo: . Example of usage: ```bash /usr/bin/time -f\"array3d %U\" ./luajit $BENCH_DIR/array3d.lua 300 >/dev/null ``` Taking into account the measurement error ~ 2%, it can be said that there is no difference in the performance. Benchmark results after and before patch (less is better): ``` Benchmark | AFTER (s) | BEFORE (s) +--+-- array3d | 0.21 | 0.20 binary-trees | 3.30 | 3.24 chameneos | 2.86 | 2.99 coroutine-ring | 0.98 | 1.02 euler14-bit | 1.01 | 1.05 fannkuch | 6.74 | 6.81 fasta | 8.25 | 8.28 life | 0.47 | 0.46 mandelbrot | 2.65 | 2.68 mandelbrot-bit | 1.96 | 1.97 md5 | 1.58 | 1.54 nbody | 1.36 | 1.56 nsieve | 2.02 | 2.06 nsieve-bit | 1.47 | 1.50 nsieve-bit-fp | 4.37 | 4.60 partialsums | 0.54 | 0.55 pidigits-nogmp | 3.46 | 3.46 ray | 1.62 | 1.63 recursive-ack | 0.19 | 0.20 recursive-fib | 1.63 | 1.67 scimark-fft | 5.76 | 5.86 scimark-lu | 3.58 | 3.64 scimark-sor | 2.33 | 2.34 scimark-sparse | 4.88 | 4.93 series | 0.94 | 0.94 spectral-norm | 0.94 | 0.97 ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "foreign-key-ysql.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: Foreign keys in YugabyteDB YSQL headerTitle: Foreign keys linkTitle: Foreign keys description: Defining Foreign key constraint in YSQL headContent: Explore foreign keys in YugabyteDB using YSQL menu: v2.18: identifier: foreign-key-ysql parent: explore-indexes-constraints weight: 210 type: docs <ul class=\"nav nav-tabs-alt nav-tabs-yb\"> <li > <a href=\"../foreign-key-ysql/\" class=\"nav-link active\"> <i class=\"icon-postgres\" aria-hidden=\"true\"></i> YSQL </a> </li> </ul> A foreign key represents one or more columns in a table referencing the following: A primary key in another table. A or columns restricted with a in another table. Tables can have multiple foreign keys. You use a foreign key constraint to maintain the referential integrity of data between two tables: values in columns in one table equal the values in columns in another table. Define the foreign key constraint using the following syntax: ```sql [CONSTRAINT fk_name] FOREIGN KEY(fk_columns) REFERENCES parenttable(parentkey_columns) [ON DELETE delete_action] [ON UPDATE update_action] ``` Defining the `CONSTRAINT` clause and naming the foreign key is optional. If you omit it, an auto-generated name is provided by YSQL. The `REFERENCES` clause specifies the parent table and its columns referenced by the fk_columns. Defining actions is also optional; if defined, they determine the behaviors when the primary key in the parent table is deleted or updated. YSQL allows you to perform the following actions: `SET NULL` - when the referenced rows in the parent table are deleted or updated, foreign key columns in the referencing rows of the child table are automatically set to `NULL`. `SET DEFAULT` - when the referenced rows of the parent table are deleted or updated, the default value is set to the foreign key column of the referencing rows in the child table. `RESTRICT` - when the referenced rows in the parent table are deleted or updated, deletion of a referenced row is"
},
{
"data": "`CASCADE` - when the referenced rows in the parent table are deleted or updated, the referencing rows in the child table are deleted or updated. `NO ACTION` (default) - when the referenced rows in the parent table are deleted or updated, no action is taken. {{% explore-setup-single %}} The following example creates two tables: ```sql CREATE TABLE employees( employee_no integer GENERATED ALWAYS AS IDENTITY, name text NOT NULL, department text, PRIMARY KEY(employee_no) ); CREATE TABLE contacts( contact_id integer GENERATED ALWAYS AS IDENTITY, employee_no integer, contact_name text NOT NULL, phone integer, email text, PRIMARY KEY(contact_id), CONSTRAINT fk_employee FOREIGN KEY(employee_no) REFERENCES employees(employee_no) ); ``` The parent table is `employees` and the child table is `contacts`. Each employee has any number of contacts, and each contact belongs to no more than one employee. The `employeeno` column in the `contacts` table is the foreign key column that references the primary key column with the same name in the `employees` table. The `fkemployee` foreign key constraint in the `contacts` table defines the `employeeno` as the foreign key. By default, `NO ACTION` is applied because `fkemployee` is not associated with any action. The following example shows how to create the same `contacts` table with a `CASCADE` action `ON DELETE`: ```sql CREATE TABLE contacts( contact_id integer GENERATED ALWAYS AS IDENTITY, employee_no integer, contact_name text NOT NULL, phone integer, email text, PRIMARY KEY(contact_id), CONSTRAINT fk_employee FOREIGN KEY(employee_no) REFERENCES employees(employee_no) ON DELETE CASCADE ); ``` You can add a foreign key constraint to an existing table by using the `ALTER TABLE` statement, using the following syntax: ```sql ALTER TABLE child_table ADD CONSTRAINT constraint_name FOREIGN KEY (fk_columns) REFERENCES parenttable (parentkey_columns); ``` Before altering a table with a foreign key constraint, you need to remove the existing foreign key constraint, as per the following syntax: ```sql ALTER TABLE child_table DROP CONSTRAINT constraint_fkey; ``` The next step is to add a new foreign key constraint, possibly including an action, as demonstrated by the following syntax: ```sql ALTER TABLE child_table ADD CONSTRAINT constraint_fk FOREIGN KEY (fk_columns) REFERENCES parenttable(parentkey_columns) [ON DELETE action]; ```"
}
] |
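To see the `ON DELETE CASCADE` variant defined above in action, the following sketch inserts one parent and one child row and then deletes the parent. The literal values are made up for the example, and it assumes the identity column generated `1` for the first employee:

```sql
INSERT INTO employees(name, department) VALUES ('Alice', 'Engineering');

-- Assumes the generated employee_no of the row above is 1.
INSERT INTO contacts(employee_no, contact_name, phone, email)
VALUES (1, 'Alice Doe', 5551234, '[email protected]');

-- With ON DELETE CASCADE, deleting the employee also removes her contact rows.
DELETE FROM employees WHERE employee_no = 1;

-- Expected to return 0.
SELECT COUNT(*) FROM contacts WHERE employee_no = 1;
```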
{
"category": "App Definition and Development",
"file_name": "Sql_faq.md",
"project_name": "StarRocks",
"subcategory": "Database"
} | [
{
"data": "displayed_sidebar: \"English\" This topic provides answers to some frequently asked questions about SQL. To solve this problem, increase the value of the `memorylimitationperthreadforschemachange` parameter in the be.conf file. This parameter refers to the maximum storage that can be allocated for a single task to change the scheme. The default value of the maximum storage is 2 GB. StarRocks does not directly cache final query results. From v2.5 onwards, StarRocks uses the Query Cache feature to save the intermediate results of first-stage aggregation in the cache. New queries that are semantically equivalent to previous queries can reuse the cached computation results to accelerate computations. Query cache uses BE memory. For more information, see . In standard SQL, every calculation that includes an operand with a `NULL` value returns a `NULL`. StarRocks does not support the DECODE function of the Oracle database. StarRocks is compatible with MySQL, so you can use the CASE WHEN statement. Yes. StarRocks merges data in a way that references Google Mesa. In StarRocks, a BE triggers the data merge and it has two kinds of compaction to merge data. If the data merge is not completed, it is finished during your query. Therefore, you can read the latest data after data loading. No. This error occurs because the previous alteration has not been completed. You can run the following code to check the status of the previous alteration: ```SQL show tablet from lineitem where State=\"ALTER\"; ``` The time spent on the alteration operation relates to the data volume. In general, the alteration can be completed in minutes. We recommend that you stop loading data into StarRocks while you are altering tables because data loading lowers the speed at which alteration completes. This error occurs when the metadata of Apache Hive partitions cannot be obtained. To solve this problem, copy core-sit.xml and hdfs-site.xml to the fe.conf file and the be.conf file. This error occurs usually due to a full garbage collection (full GC), which can be checked by using backend monitoring and the fe.gc log. To solve this problem, perform one of the following operations: Allows SQL's client to access multiple frontends (FEs) simultaneously to spread the load. Change the heap size of Java Virtual Machine (JVM) from 8 GB to 16 GB in the fe.conf file to increase memory and reduce the impact of full GC. SQL can only guarantee that column A is ordered, and it cannot guarantee that the order of column B is the same for each query. MySQL can guarantee the order of column A and column B because it is a standalone"
},
{
"data": "StarRocks is a distributed database, of which data stored in the underlying table is in a sharding pattern. The data of column A is distributed across multiple machines, so the order of column B returned by multiple machines may be different for each query, resulting in inconsistent order of B each time. To solve this problem, change `select B from tbl order by A limit 10` to `select B from tbl order by A,B limit 10`. To solve this problem, check the profile and see MERGE details: Check whether the aggregation on the storage layer takes up too much time. Check whether there are too many indicator columns. If so, aggregate hundreds of columns of millions of rows. ```plaintext MERGE: aggr: 26s270ms sort: 15s551ms ``` Nested functions are not supported, such as `todays(now())` in `DELETE from testnew WHERE todays(now())-todays(publish_time) >7;`. To improve efficiency, add the `-A` parameter when you connect to MySQL's client server: `mysql -uroot -h127.0.0.1 -P8867 -A`. MySQL's client server does not pre-read database information. Adjust the log level and corresponding parameters. For more information, see . When you create colocated tables, you need to set the `group` property. Therefore, you cannot modify the replication number for a single table. You can perform the following steps to modify the replication number for all tables in a group: Set `group_with` to `empty` for all tables in a group. Set a proper `replication_num` for all tables in a group. Set `group_with` back to its original value. VARCHAR is a variable-length data type, which has a specified length that can be changed based on the actual data length. Specifying a different varchar length when you create a table has little impact on the query performance on the same data. To truncate a table, you need to create the corresponding partitions and then swap them. If there are a larger number of partitions that need to be created, this error occurs. In addition, if there are many data load tasks, the lock will be held for a long time during the compaction process. Therefore, the lock cannot be acquired when you create tables. If there are too many data load tasks, set `tabletmapshard_size` to `512` in the be.conf file to reduce the lock contention. Add the following information to hdfs-site.xml in the fe.conf file and the be.conf file: ```HTML <property> <name>dfs.namenode.kerberos.principal.pattern</name> <value>*</value> </property> ``` No. No, use functions to change \"2021-10\" to \"2021-10-01\" and then use \"2021-10-01\" as a partition field. You can use the command. `SHOW DATA;` displays the data size and replicas of all tables in the current database. `SHOW DATA FROM <dbname>.<tablename>;` displays the data size, number of replicas, and number of rows in a specified table of a specified database."
}
] |
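Two of the answers above translate directly into short statements; the table name `tbl` and database name `mydb` are placeholders for the example:

```sql
-- Make the order of column B deterministic by adding it to ORDER BY.
SELECT B FROM tbl ORDER BY A, B LIMIT 10;

-- Show the data size, replica count, and row count of a specific table.
SHOW DATA FROM mydb.tbl;
```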