{ "category": "App Definition and Development", "file_name": "bug-report.md", "project_name": "Stolon", "subcategory": "Database" }
[ { "data": "name: Bug Report about: Report a bug on Stolon labels: bug <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks! NOTE: Please submit only bug reports. For other questions, or if unsure, ask on the --> What happened: What you expected to happen: How to reproduce it (as minimally and precisely as possible): Anything else we need to know?: Environment: Stolon version: Stolon running environment (if useful to understand the bug): Others:" } ]
{ "category": "App Definition and Development", "file_name": "yugabyte-pg-reference.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: YugabyteDB node-postgres Smart Driver reference headerTitle: Node.js Drivers linkTitle: Node.js Drivers description: YugabyteDB node-postgres smart driver for YSQL headcontent: Node.js Drivers for YSQL menu: v2.18: name: Node.js Drivers identifier: ref-yugabyte-pg-driver parent: drivers weight: 500 type: docs <ul class=\"nav nav-tabs-alt nav-tabs-yb\"> <li > <a href=\"../yugabyte-pg-reference/\" class=\"nav-link active\"> <i class=\"icon-postgres\" aria-hidden=\"true\"></i> YugabyteDB node-postgres Smart Driver </a> </li> <li > <a href=\"../postgres-pg-reference/\" class=\"nav-link\"> <i class=\"icon-postgres\" aria-hidden=\"true\"></i> PostgreSQL node-postgres Driver </a> </li> </ul> YugabyteDB node-postgres smart driver is a Node.js driver for built on the , with additional connection load balancing features. For more information on the YugabyteDB node-postgres smart driver, see the following: Download and install the YugabyteDB node-postgres smart driver using the following command (you need to have Node.js installed on your system): ```sh npm install @yugabytedb/pg ``` The driver requires YugabyteDB version 2.7.2.0 or higher. You can start using the driver in your code. Learn how to perform the common tasks required for Node.js application development using the YugabyteDB node-postgres smart driver. The following connection properties need to be added to enable load balancing: `loadBalance` - enable cluster-aware load balancing by setting this property to `true`; disabled by default. `topologyKeys` - provide comma-separated geo-location values to enable topology-aware load balancing. Geo-locations can be provided as `cloud.region.zone`. Specify all zones in a region as `cloud.region.*`. To designate fallback locations for when the primary location is unreachable, specify a priority in the form `:n`, where `n` is the order of precedence. For example, `cloud1.datacenter1.rack1:1,cloud1.datacenter1.rack2:2`. By default, the driver refreshes the list of nodes every 300 seconds (5 minutes). You can change this value by including the `ybServersRefreshInterval` parameter. To use the driver, do the following: Pass new connection properties for load balancing in the connection URL. To enable uniform load balancing across all servers, set the `loadBalance` property to `true` in the URL, as per the following connection string: ```javascript const connectionString = \"postgresql://user:password@localhost:port/database?loadBalance=true\" const client = new Client(connectionString); client.connect() ``` After the driver establishes the initial connection, it fetches the list of available servers from the universe and performs load balancing of subsequent connection requests across these servers. To specify topology keys, set the `topologyKeys` property to comma-separated values, as per the following connection string: ```js const connectionString = \"postgresql://user:password@localhost:port/database?loadBalance=true&topologyKeys=cloud1.datacenter1.rack1,cloud1.datacenter1.rack2\" const client = new Client(connectionString); client.connect() ``` To configure a basic connection pool with a maximum of 100 connections using `Pool`, specify load balance as follows: ```js let pool = new Pool({ user: 'yugabyte', password: 'yugabyte', host: 'localhost', port: 5433, loadBalance: true, database: 'yugabyte', max: 100 }) ``` This tutorial shows how to use the YugabyteDB node-postgres smart driver with YugabyteDB. It starts by creating a three-node cluster with a replication factor of 3.
This tutorial uses the" }, { "data": "Next, you use a Node.js application to demonstrate the driver's load balancing features. {{< note title=\"Note\">}} The driver requires YugabyteDB version 2.7.2.0 or higher. {{< /note>}} Create a universe with a 3-node RF-3 cluster with some fictitious geo-locations assigned. The placement values used are just tokens and have nothing to do with actual AWS cloud regions and zones. ```sh cd <path-to-yugabytedb-installation> ``` ```sh ./bin/yb-ctl create --rf 3 --placement_info \"aws.us-west.us-west-2a,aws.us-west.us-west-2a,aws.us-west.us-west-2b\" ``` To check uniform load balancing, do the following: Create a Node.js file to run the example: ```sh touch example.js ``` Add the following code in `example.js` file. ```js const pg = require('@yugabytedb/pg'); async function createConnection(){ const yburl = \"postgresql://yugabyte:yugabyte@localhost:5433/yugabyte?loadBalance=true\" let client = new pg.Client(yburl); client.on('error', () => { // ignore the error and handle exiting }) await client.connect() client.connection.on('error', () => { // ignore the error and handle exiting }) return client; } async function createNumConnections(numConnections) { let clientArray = [] for (let i=0; i<numConnections; i++) { if(i&1){ clientArray.push(await createConnection()) }else { setTimeout(async() => { clientArray.push(await createConnection()) }, 1000) } } return clientArray } (async () => { let clientArray = [] let numConnections = 30 clientArray = await createNumConnections(numConnections) setTimeout(async () => { console.log('Node connection counts after making connections: \\n\\n \\t\\t', pg.Client.connectionMap, '\\n') }, 2000) })(); ``` Run the example: ```sh node example.js ``` The application creates 30 connections and displays a key value pair map where the keys are the host and the values are the number of connections on them (This is the client side perspective of the number of connections). Each node should have 10 connections. For topology-aware load balancing, run the application with the `topologyKeys` property set to `aws.us-west.us-west-2a`; only two nodes will be used in this case. ```js const pg = require('@yugabytedb/pg'); async function createConnection(){ const yburl = \"postgresql://yugabyte:yugabyte@localhost:5433/yugabyte?loadBalance=true&&topologyKey=aws.us-west.us-west-2a\" let client = new pg.Client(yburl); client.on('error', () => { // ignore the error and handle exiting }) await client.connect() client.connection.on('error', () => { // ignore the error and handle exiting }) return client; } async function createNumConnections(numConnections) { let clientArray = [] for (let i=0; i<numConnections; i++) { if(i&1){ clientArray.push(await createConnection()) }else { setTimeout(async() => { clientArray.push(await createConnection()) }, 1000) } } return clientArray } (async () => { let clientArray = [] let numConnections = 30 clientArray = await createNumConnections(numConnections) setTimeout(async () => { console.log('Node connection counts after making connections: \\n\\n \\t\\t', pg.Client.connectionMap, '\\n') }, 2000) })(); ``` To verify the behavior, wait for the app to create connections and then navigate to `http://<host>:13000/rpcz`. The first two nodes should have 15 connections each, and the third node should have zero connections. When you're done experimenting, run the following command to destroy the local cluster: ```sh ./bin/yb-ctl destroy ```" } ]
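As an illustrative sketch only (not part of the original page), the fallback-priority and refresh-interval properties described above can also be combined with connection pooling. Whether `topologyKeys` and `ybServersRefreshInterval` are accepted as `Pool` options rather than URL parameters is an assumption here, and the host and credentials are placeholders.

```js
const pg = require('@yugabytedb/pg');

// Hypothetical pool: topology-aware load balancing with rack2 as a fallback
// (priority 2) and the server-list refresh interval lowered to 240 seconds.
const pool = new pg.Pool({
  user: 'yugabyte',
  password: 'yugabyte',
  host: 'localhost',
  port: 5433,
  database: 'yugabyte',
  loadBalance: true,
  topologyKeys: 'cloud1.datacenter1.rack1:1,cloud1.datacenter1.rack2:2',
  ybServersRefreshInterval: 240,
  max: 100
});

// Simple smoke test: run one query, then drain the pool.
pool.query('SELECT version()')
  .then((res) => console.log(res.rows[0]))
  .catch((err) => console.error(err))
  .finally(() => pool.end());
```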
{ "category": "App Definition and Development", "file_name": "task_lifecycle.md", "project_name": "Flink", "subcategory": "Streaming & Messaging" }
[ { "data": "title: Task Lifecycle weight: 6 type: docs aliases: /internals/task_lifecycle.html <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> A task in Flink is the basic unit of execution. It is the place where each parallel instance of an operator is executed. As an example, an operator with a parallelism of 5 will have each of its instances executed by a separate task. The `StreamTask` is the base for all different task sub-types in Flink's streaming engine. This document goes through the different phases in the lifecycle of the `StreamTask` and describes the main methods representing each of these phases. Because the task is the entity that executes a parallel instance of an operator, its lifecycle is tightly integrated with that of an operator. So, we will briefly mention the basic methods representing the lifecycle of an operator before diving into those of the `StreamTask` itself. The list is presented below in the order that each of the methods is called. Given that an operator can have a user-defined function (UDF), below each of the operator methods we also present (indented) the methods in the lifecycle of the UDF that it calls. These methods are available if your operator extends the `AbstractUdfStreamOperator`, which is the basic class for all operators that execute UDFs. // initialization phase OPERATOR::setup UDF::setRuntimeContext OPERATOR::initializeState OPERATOR::open UDF::open // processing phase (called on every element/watermark) OPERATOR::processElement UDF::run OPERATOR::processWatermark // checkpointing phase (called asynchronously on every checkpoint) OPERATOR::snapshotState // notify the operator about the end of processing records OPERATOR::finish // termination phase OPERATOR::close UDF::close In a nutshell, the `setup()` is called to initialize some operator-specific machinery, such as its `RuntimeContext` and its metric collection data-structures. After this, the `initializeState()` gives an operator its initial state, and the `open()` method executes any operator-specific initialization, such as opening the user-defined function in the case of the `AbstractUdfStreamOperator`. {{< hint info >}} The `initializeState()` contains both the logic for initializing the state of the operator during its initial execution (e.g. register any keyed state), and also the logic to retrieve its state from a checkpoint after a failure. More about this on the rest of this" }, { "data": "{{< /hint >}} Now that everything is set, the operator is ready to process incoming data. Incoming elements can be one of the following: input elements, watermark, and checkpoint barriers. Each one of them has a special element for handling it. 
Elements are processed by the `processElement()` method, watermarks by the `processWatermark()`, and checkpoint barriers trigger a checkpoint which invokes (asynchronously) the `snapshotState()` method, which we describe below. For each incoming element, depending on its type one of the aforementioned methods is called. Note that the `processElement()` is also the place where the UDF's logic is invoked, e.g. the `map()` method of your `MapFunction`. Finally, in the case of a normal, fault-free termination of the operator (e.g. if the stream is finite and its end is reached), the `finish()` method is called to perform any final bookkeeping action required by the operator's logic (e.g. flush any buffered data, or emit data to mark end of processing), and the `close()` is called after that to free any resources held by the operator (e.g. open network connections, io streams, or native memory held by the operator's data). In the case of a termination due to a failure or due to manual cancellation, the execution jumps directly to the `close()` and skips any intermediate phases between the phase the operator was in when the failure happened and the `close()`. Checkpoints: The `snapshotState()` method of the operator is called asynchronously to the rest of the methods described above whenever a checkpoint barrier is received. Checkpoints are performed during the processing phase, i.e. after the operator is opened and before it is closed. The responsibility of this method is to store the current state of the operator to the specified from where it will be retrieved when the job resumes execution after a failure. Below we include a brief description of Flink's checkpointing mechanism, and for a more detailed discussion on the principles around checkpointing in Flink please read the corresponding documentation: . Following that brief introduction on the operator's main phases, this section describes in more detail how a task calls the respective methods during its execution on a cluster. The sequence of the phases described here is mainly included in the `invoke()` method of the `StreamTask` class. The remainder of this document is split into two subsections, one describing the phases during a regular, fault-free execution of a task (see ), and (a shorter) one describing the different sequence followed in case the task is cancelled (see ), either manually, or due some other reason, e.g. an exception thrown during" }, { "data": "The steps a task goes through when executed until completion without being interrupted are illustrated below: TASK::setInitialState TASK::invoke create basic utils (config, etc) and load the chain of operators setup-operators task-specific-init initialize-operator-states open-operators run finish-operators wait for the final checkpoint completed (if enabled) close-operators task-specific-cleanup common-cleanup As shown above, after recovering the task configuration and initializing some important runtime parameters, the very first step for the task is to retrieve its initial, task-wide state. This is done in the `setInitialState()`, and it is particularly important in two cases: when the task is recovering from a failure and restarts from the last successful checkpoint when resuming from a . If it is the first time the task is executed, the initial task state is empty. After recovering any initial state, the task goes into its `invoke()` method. 
There, it first initializes the operators involved in the local computation by calling the `setup()` method of each one of them and then performs its task-specific initialization by calling the local `init()` method. By task-specific, we mean that depending on the type of the task (`SourceTask`, `OneInputStreamTask` or `TwoInputStreamTask`, etc), this step may differ, but in any case, here is where the necessary task-wide resources are acquired. As an example, the `OneInputStreamTask`, which represents a task that expects to have a single input stream, initializes the connection(s) to the location(s) of the different partitions of the input stream that are relevant to the local task. Having acquired the necessary resources, it is time for the different operators and user-defined functions to acquire their individual state from the task-wide state retrieved above. This is done in the `initializeState()` method, which calls the `initializeState()` of each individual operator. This method should be overridden by every stateful operator and should contain the state initialization logic, both for the first time a job is executed, and also for the case when the task recovers from a failure or when using a savepoint. Now that all operators in the task have been initialized, the `open()` method of each individual operator is called by the `openAllOperators()` method of the `StreamTask`. This method performs all the operational initialization, such as registering any retrieved timers with the timer service. A single task may be executing multiple operators with one consuming the output of its predecessor. In this case, the `open()` method is called from the last operator, i.e. the one whose output is also the output of the task itself, to the first. This is done so that when the first operator starts processing the task's input, all downstream operators are ready to receive its output. {{< hint info >}} Consecutive operators in a task are opened from the last to the first. {{< /hint >}} Now the task can resume execution and operators can start processing fresh input data. This is the place where the task-specific `run()` method is called. This method will run until either there is no more input data (finite stream), or the task is cancelled (manually or not). Here is where the operator specific `processElement()` and `processWatermark()` methods are called. In the case of running till completion," }, { "data": "there is no more input data to process, after exiting from the `run()` method, the task enters its shutdown process. Initially, the timer service stops registering any new timers (e.g. from fired timers that are being executed), clears all not-yet-started timers, and awaits the completion of currently executing timers. Then the `finishAllOperators()` notifies the operators involved in the computation by calling the `finish()` method of each operator. Then, any buffered output data is flushed so that they can be processed by the downstream tasks. Then if final checkpoint is enabled, the task would to ensure operators using two-phase committing have committed all the records. Finally the task tries to clear all the resources held by the operators by calling the `close()` method of each one. When opening the different operators, we mentioned that the order is from the last to the first. Closing happens in the opposite manner, from first to last. {{< hint info >}} Consecutive operators in a task are closed from the first to the last. 
{{< /hint >}} Finally, when all operators have been closed and all their resources freed, the task shuts down its timer service, performs its task-specific cleanup, e.g. cleans all its internal buffers, and then performs its generic task clean up which consists of closing all its output channels and cleaning any output buffers. Checkpoints: Previously we saw that during `initializeState()`, and in case of recovering from a failure, the task and all its operators and functions retrieve the state that was persisted to stable storage during the last successful checkpoint before the failure. Checkpoints in Flink are performed periodically based on a user-specified interval, and are performed by a different thread than that of the main task thread. That's why they are not included in the main phases of the task lifecycle. In a nutshell, special elements called `CheckpointBarriers` are injected periodically by the source tasks of a job in the stream of input data, and travel with the actual data from source to sink. A source task injects these barriers after it is in running mode, and assuming that the `CheckpointCoordinator` is also running. Whenever a task receives such a barrier, it schedules a task to be performed by the checkpoint thread, which calls the `snapshotState()` of the operators in the task. Input data can still be received by the task while the checkpoint is being performed, but the data is buffered and only processed and emitted downstream after the checkpoint is successfully completed. In the previous sections we described the lifecycle of a task that runs till completion. In case the task is cancelled at any point, then the normal execution is interrupted and the only operations performed from that point on are the timer service shutdown, the task-specific cleanup, the closing of the operators, and the general task cleanup, as described above. {{< top >}}" } ]
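To make the lifecycle phases above concrete, the following simplified sketch (not taken verbatim from the Flink documentation; the class and state names are invented) shows a user-defined function that is visited by several of them: `initializeState()` during the initialize-operator-states step, `open()` before any element is processed, `flatMap()` from within `processElement()`, `snapshotState()` from the checkpoint thread, and `close()` during termination.

```java
import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.state.FunctionInitializationContext;
import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;
import org.apache.flink.util.Collector;

// Counts elements and checkpoints the running count; illustrative only.
public class CountingFunction extends RichFlatMapFunction<String, Long>
        implements CheckpointedFunction {

    private transient ListState<Long> checkpointedCount; // operator state
    private long count;

    @Override
    public void open(Configuration parameters) {
        // Called once per parallel instance after initializeState(),
        // before any element is processed (the "open-operators" step).
    }

    @Override
    public void flatMap(String value, Collector<Long> out) {
        // Invoked from processElement() for every incoming element.
        count++;
        out.collect(count);
    }

    @Override
    public void initializeState(FunctionInitializationContext context) throws Exception {
        // Runs during "initialize-operator-states": register the state and,
        // when restoring from a checkpoint or savepoint, reload its contents.
        checkpointedCount = context.getOperatorStateStore()
                .getListState(new ListStateDescriptor<>("count", Long.class));
        if (context.isRestored()) {
            for (Long c : checkpointedCount.get()) {
                count += c;
            }
        }
    }

    @Override
    public void snapshotState(FunctionSnapshotContext context) throws Exception {
        // Called by the checkpoint thread whenever a barrier arrives.
        checkpointedCount.update(java.util.Collections.singletonList(count));
    }

    @Override
    public void close() {
        // Last lifecycle call: release any resources held by the function.
    }
}
```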
{ "category": "App Definition and Development", "file_name": "beam-2.44.0.md", "project_name": "Beam", "subcategory": "Streaming & Messaging" }
[ { "data": "title: \"Apache Beam 2.44.0\" date: 2023-01-17 09:00:00 -0700 categories: blog release authors: klk <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> We are happy to present the new 2.44.0 release of Beam. This release includes both improvements and new functionality. See the for this release. <!--more--> For more information on changes in 2.44.0, check out the . Support for Bigtable sink (Write and WriteBatch) added (Go) (). S3 implementation of the Beam filesystem (Go) (). Support for SingleStoreDB source and sink added (Java) (). Added support for DefaultAzureCredential authentication in Azure Filesystem (Python) (). Added new CdapIO for CDAP Batch and Streaming Source/Sinks (Java) (). Added new SparkReceiverIO for Spark Receivers 2.4. (Java) (). Beam now provides a portable \"runner\" that can render pipeline graphs with graphviz. See `python -m apache_beam.runners.render --help` for more details. Local packages can now be used as dependencies in the requirements.txt file, rather than requiring them to be passed separately via the `--extra_package` option (Python) (). Pipeline Resource Hints now supported via `--resource_hints` flag (Go) (). Make Python SDK containers reusable on portable runners by installing dependencies to temporary venvs (). RunInference model handlers now support the specification of a custom inference function in Python () Support for `map_windows` urn added to Go SDK (). `ParquetIO.withSplit` was removed since splittable reading has been the default behavior since 2.35.0. The effect of this change is to drop support for non-splittable reading (Java)(). `beam-sdks-java-extensions-google-cloud-platform-core` is no longer a dependency of the Java SDK" }, { "data": "Some users of a portable runner (such as Dataflow Runner v2) may have an undeclared dependency on this package (for example using GCS with TextIO) and will now need to declare the dependency. `beam-sdks-java-core` is no longer a dependency of the Java SDK Harness. Users of a portable runner (such as Dataflow Runner v2) will need to provide this package and its dependencies. Slices now use the Beam Iterable Coder. This enables cross language use, but breaks pipeline updates if a Slice type is used as a PCollection element or State API element. (Go) Fixed JmsIO acknowledgment issue (Java) () Fixed Beam SQL CalciteUtils (Java) and Cross-language JdbcIO (Python) did not support JDBC CHAR/VARCHAR, BINARY/VARBINARY logical types (, ). Ensure iterated and emitted types are used with the generic register package are registered with the type and schema registries.(Go) () According to git shortlog, the following people contributed to the 2.44.0 release. Thank you to all contributors! 
Ahmed Abualsaud Ahmet Altay Alex Merose Alexey Inkin Alexey Romanenko Anand Inguva Andrei Gurau Andrej Galad Andrew Pilloud Ayush Sharma Benjamin Gonzalez Bjorn Pedersen Brian Hulette Bruno Volpato Bulat Safiullin Chamikara Jayalath Chris Gavin Damon Douglas Danielle Syse Danny McCormick Darkhan Nausharipov David Cavazos Dmitry Repin Doug Judd Elias Segundo Antonio Evan Galpin Evgeny Antyshev Heejong Lee Henrik Heggelund-Berg Israel Herraiz Jack McCluskey Jan Lukavský Janek Bevendorff Johanna Öjeling John J. Casey Jozef Vilcek Kanishk Karanawat Kenneth Knowles Kiley Sok Laksh Liam Miller-Cushon Luke Cwik MakarkinSAkvelon Minbo Bae Moritz Mack Nancy Xu Ning Kang Nivaldo Tokuda Oleh Borysevych Pablo Estrada Philippe Moussalli Pranav Bhandari Rebecca Szper Reuven Lax Rick Smit Ritesh Ghorse Robert Bradshaw Robert Burke Ryan Thompson Sam Whittle Sanil Jain Scott Strong Shubham Krishna Steven van Rossum Svetak Sundhar Thiago Nunes Tianyang Hu Trevor Gevers Valentyn Tymofieiev Vitaly Terentyev Vladislav Chunikhin Xinyu Liu Yi Hu Yichi Zhang AdalbertMemSQL agvdndor andremissaglia arne-alex bullet03 camphillips22 capthiron creste fab-jul illoise kn1kn1 nancyxu123 peridotml shinannegans smeet07" } ]
{ "category": "App Definition and Development", "file_name": "array-agg-unnest.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: arrayagg(), unnest(), and generatesubscripts() linkTitle: arrayagg(), unnest(), generatesubscripts() headerTitle: arrayagg(), unnest(), and generatesubscripts() description: arrayagg(), unnest(), and generatesubscripts() menu: v2.18: identifier: array-agg-unnest parent: array-functions-operators type: docs For one-dimensional arrays, but only for these (see ), these two functions have mutually complementary effects in the following sense. After this sequence (the notation is informal): ```output array_agg of \"SETOF tuples #1\" => \"result array\" unnest of \"result array\" => \"SETOF tuples #3\" ``` The \"SETOF tuples #3\" has identical shape and content to that of \"SETOF tuples #1\". And the data type of \"result array\" is an array of the data type of the tuples. Moreover, and again for the special case of one-dimensional arrays, the function `generate_subscripts()` can be used to produce the same result as `unnest()`. For this reason, the three functions, `arrayagg()`, `unnest()`, and `generatesubscripts()` are described in the same section. This function has two overloads. Purpose: Return a one-dimensional array from a SQL subquery. Its rows might be scalars (that is, the `SELECT` list might be a single column). But, in typical use, they are likely to be of \"row\" type values. Signature: ```output input value: SETOF anyelement return value: anyarray ``` In normal use, `arrayagg()` is applied to the `SELECT` list from a physical table, or maybe from a view that encapsulates the query. This is shown in the \"\"_ example below. But first, you can demonstrate the functionality without creating and populating a table by using, instead, the `VALUES` statement. Try this: ```plpgsql values (1::int, 'dog'::text), (2::int, 'cat'::text), (3::int, 'ant'::text); ``` It produces this result: ```output column1 | column2 1 | dog 2 | cat 3 | ant ``` Notice that YSQL has named the `SELECT` list items \"column1\" and \"column2\". The result is a so-called `SETOF`. It means a set of rows, just as is produced by a `SELECT` statement. (You'll see the term if you describe the `generateseries()` built-in table function with the `\\df` meta-command.) To use the rows that the `VALUES` statement produces as the input for `arrayagg()`, you need to use a named `type`, thus: ```plpgsql create type rt as (f1 int, f2 text); with tab as ( values (1::int, 'dog'::text), (2::int, 'cat'::text), (3::int, 'ant'::text)) select array_agg((column1, column2)::rt order by column1) as arr from tab; ``` It produces this result: ```output arr {\"(1,dog)\",\"(2,cat)\",\"(3,ant)\"} ``` You recognize this as the text of the literal that represents an array of tuples that are shape-compatible with \"type rt\". The underlying notions that explain what is seen here are explained in . Recall from that this value doesn't encode the type name. In fact, you could typecast it to any shape compatible type. You can understand the effect of `array_agg()` thus: Treat each row as a \"rt[]\" array with a single-value. Concatenate (see the ) the values from all the rows in the specified order into a new \"rt[]\" array. 
This code illustrates this point: ```plpgsql -- Consider this SELECT: with tab as ( values ((1, 'dog')::rt), ((2, 'cat')::rt), ((3, 'ant')::rt)) select array_agg(column1 order by column1) as arr from tab; -- It can be seen as equivalent this SELECT: select array[(1, 'dog')::rt] || array[(2, 'cat')::rt] || array[(3, 'ant')::rt] as arr; ``` Each of the three" }, { "data": "as arr\" queries above produces the same result, as was shown after the first of them. This demonstrates their semantic equivalence. To prepare for the demonstration of `unnest()`, save the single-valued result from the most recent of the three queries (but any one of them would do) into a `ysqlsh` variable by using the `\\gset` meta-command. This takes a single argument, conventionally spelled with a trailing underscore (for example, \"result&#95;\") and re-runs the `SELECT` statement that, as the last submitted `ysqlsh` command, is still in the command buffer. (If the `SELECT` doesn't return a single row, then you get a clear error.) In general, when the `SELECT` list has N members, called \"c1\" through \"cN\", each of these values is stored in automatically-created variables called \"result&#95;c1\" through \"result&#95;cN\". if you aren't already familiar with the `\\gset` meta-command, you can read a brief account of how it works in within the major section on `ysqlsh`. Immediately after running the \"with... select arrayagg(...) as arr...\"_ query above, do this: ```plpgsql \\gset result_ \\echo :result_arr ``` The `\\gset` meta-command is silent. The `\\echo` meta-command shows this: ```output {\"(1,dog)\",\"(2,cat)\",\"(3,ant)\"} ``` The text of the literal is now available for re-use, as was intended. Before considering `unnest()`, look at `array_agg()`'s second overload: Purpose: Return a (N+1)-dimensional array from a SQL subquery whose rows are N-dimensional arrays. The aggregated arrays must all have the same dimensionality. Signature: ```output input value: SETOF anyarray return value: anyarray ``` Here is a positive example: ```plpgsql with tab as ( values ('{a, b, c}'::text[]), ('{d, e, f}'::text[])) select array_agg((column1)::text[] order by column1) as arr from tab; ``` It produces this result: ```output arr {{a,b,c},{d,e,f}} ``` And here is a negative example: ```plpgsql with tab as ( values ('{a, b, c}'::text[]), ('{d, e }'::text[])) select array_agg((column1)::text[] order by column1) as arr from tab; ``` It causes this error: ```output 2202E: cannot accumulate arrays of different dimensionality ``` This function has two overloads. The first is straightforward and has an obvious usefulness. The second is rather exotic. Purpose: Transform the values in a single array into a SQL table (that is, a `SETOF`) these values. Signature: ```output input value: anyarray return value: SETOF anyelement ``` As the sketch at the start of this page indicated, the input to unnest is an array. To use what the code example in the account of arrayagg() set in the `ysqlsh` variable \"result&#95;arr\" in a SQL statement, you must quote it and typecast it to \"rt[]\"_. This can be done with the \\set meta-command, thus: ```plpgsql \\set unnestarg '\\'':resultarr'\\'::rt[]' \\echo :unnest_arg ``` The `\\set` meta-command uses the backslash character to escape the single quote character that it also uses to surround the string that it assigns to the target `ysqlsh` variable. 
The `\\echo` meta-command shows this: ```output '{\"(1,dog)\",\"(2,cat)\",\"(3,ant)\"}'::rt[] ``` Now use it as the actual argument for `unnest()` thus: ```plpgsql with rows as ( select unnest(:unnest_arg) as rec) select (rec).f1, (rec).f2 from rows order by 1; ``` The parentheses around the column alias \"rec\" are required to remove what the SQL compiler would otherwise see as an ambiguity, and would report as a \"42P01 undefinedtable\"_ error. This is the result: ```output f1 | f2 +-- 1 | dog 2 | cat 3 | ant ``` As promised, the original `SETOF` tuples has been" }, { "data": "Purpose: Transform the values in a variadic list of arrays into a SQL table whose columns each are a `SETOF` the corresponding input array's values. This overload can be used only in the `FROM` clause of a subquery. Each input array might have a different type and a different cardinality. The input array with the greatest cardinality determines the number of output rows. The rows of those input arrays that have smaller cardinalities are filled at the end with `NULL`s. The optional `WITH ORDINALITY` clause adds a column that numbers the rows. Signature: ```output input value: <variadic list of> anyarray return value: many coordinated columns of SETOF anyelement ``` ```plpgsql create type rt as (a int, b text); \\pset null '<is null>' select * from unnest( array[1, 2], array[10, 20, 30, 45, 50], array['a', 'b', 'c', 'd'], array[(1, 'p')::rt, (2, 'q')::rt, (3, 'r')::rt, (4, 's')::rt] ) with ordinality as result(arr1, arr2, arr3, arr4a, arr4n, n); ``` It produces this result: ```output arr1 | arr2 | arr3 | arr4a | arr4n | n --++--+--+--+ 1 | 10 | a | 1 | p | 1 2 | 20 | b | 2 | q | 2 <is null> | 30 | c | 3 | r | 3 <is null> | 45 | d | 4 | s | 4 <is null> | 50 | <is null> | <is null> | <is null> | 5 ``` Start by aggregating three `int[]` array instances and by preparing the result as an `int[]` literal for the next step using the same `\\gset` technique that was used above: ```plpgsql with tab as ( values ('{1, 2, 3}'::int[]), ('{4, 5, 6}'::int[]), ('{7, 8, 9}'::int[])) select array_agg(column1 order by column1) as arr from tab \\gset result_ \\set unnestarg '\\'':resultarr'\\'::int[]' \\echo :unnest_arg ``` Notice that the SQL statement, this time, is not terminated with a semicolon. Rather, the `\\gset` meta-command acts as the terminator. This makes the `ysqlsh` output less noisy. This is the result: ```output '{{1,2,3},{4,5,6},{7,8,9}}'::int[] ``` You recognize this as the literal for a two-dimensional array. Now use this as the actual argument for `unnest()`: ```plpgsql select unnest(:unnest_arg) as val order by 1; ``` It produces this result: ```output val -- 1 2 3 4 5 6 7 8 9 ``` This `SETOF` result lists all of the input array's \"leaf\" values in row-major order. This term is explained in ) within the \"Functions for reporting the geometric properties of an array\" section. Notice that, for the multidimensional case, the original input to `arrayagg()` was not_, therefore, regained. This point is emphasized by aggregating the result: ```plpgsql with a as (select unnest(:unnest_arg) as val) select array_agg(val order by val) from a; ``` It produces this result: ```output array_agg {1,2,3,4,5,6,7,8,9} ``` You started with a two-dimensional array. But now you have a one-dimensional array with the same values as the input array in the same row-major order. 
This result has the same semantic content that the `arraytostring()` function produces: ```plpgsql select arraytostring(:unnest_arg, ','); ``` It produces this result: ```output arraytostring 1,2,3,4,5,6,7,8,9 ``` See . This shows how you can use the `FOREACH` loop in procedural code, with an appropriate value for the `SLICE` operand, to unnest an array into a set of subarrays whose dimensionality you can" }, { "data": "At one end of the range, you can mimmic `unnest()` and produce scalar values. At the other end of the range, you can produce a set of arrays with dimensionality `n - 1` where `n` is the dimensionality of the input array. The basic illustration of the functionality of `array_agg()` showed how it can convert the entire contents of a table (or, by extension, the `SETOF` rows defined by a `SELECT` execution) into a single array value. This can be useful to return a large `SELECT` result in its entirety (in other words, in a single round trip) to a client program. Another use is to populate a single newly-created \"masterswithdetails\" table from the fully projected and unrestricted `INNER JOIN` of a classic \"masters\" and \"details\" pair of tables. The new table has all the columns that the source \"masters\" table has and all of its rows. And it has an additional \"details\" column that holds, for each \"masters\" row, a \"detailst[]\" array that represents all of the child rows that it has in the source \"details\" table. The type \"details&#95;t\" has all of the columns of the \"details\" table except the \"details.masterspk\" foreign key column. This column vanishes because, as the join column, it vanishes in the `INNER JOIN`. The \"details\" table's \"payload\" is now held in place in a single multivalued field in the new \"masters&#95;with&#95;details\" table. Start by creating and populating the \"masters\" and \"details\" tables: ```plpgsql create table masters( master_pk int primary key, master_name text not null); insert into masters(masterpk, mastername) values (1, 'John'), (2, 'Mary'), (3, 'Joze'); create table details( master_pk int not null, seq int not null, detail_name text not null, constraint detailspk primary key(masterpk, seq), constraint masterpkfk foreign key(master_pk) references masters(master_pk) match full on delete cascade on update restrict); insert into details(masterpk, seq, detailname) values (1, 1, 'cat'), (1, 2, 'dog'), (2, 1, 'rabbit'), (2, 2, 'hare'), (2, 3, 'squirrel'), (2, 4, 'horse'), (3, 1, 'swan'), (3, 2, 'duck'), (3, 3, 'turkey'); ``` Next, create a view that encodes the fully projected, unrestricted inner join of the original data, and inspect the result set that it represents: ```plpgsql create or replace view original_data as select master_pk, m.master_name, d.seq, d.detail_name from masters m inner join details d using (master_pk); select master_pk, master_name, seq, detail_name from original_data order by master_pk, seq; ``` This is the result: ```output masterpk | mastername | seq | detail_name --+-+--+- 1 | John | 1 | cat 1 | John | 2 | dog 2 | Mary | 1 | rabbit 2 | Mary | 2 | hare 2 | Mary | 3 | squirrel 2 | Mary | 4 | horse 3 | Joze | 1 | swan 3 | Joze | 2 | duck 3 | Joze | 3 | turkey ``` Next, create the type \"details&#95;t\" and the new table: ```plpgsql create type detailst as (seq int, detailname text); create table masterswithdetails ( master_pk int primary key, master_name text not null, details details_t[] not null); ``` Notice that you made the \"details\" column `not null`. This was a choice. 
It adds semantics that are notoriously difficult to capture in the original two table design without tricky, and therefore error-prone, programming of triggers and the like. You have implemented the so-called \"mandatory one-to-many\"" }, { "data": "In the present example, the rule says (in the context of the entity-relationship model that specifies the requirements) that an occurrence of a \"Master\" entity type cannot exist unless it has at least one, but possibly many, child occurrences of a \"Detail\" entity type. Next, populate the new table and inspect its contents: ```plpgsql insert into masterswithdetails select master_pk, master_name, arrayagg((seq, detailname)::details_t order by seq) as agg from original_data group by masterpk, mastername; select masterpk, mastername, details from masterswithdetails order by 1; ``` This is the result: ```output masterpk | mastername | details --+-+ 1 | John | {\"(1,cat)\",\"(2,dog)\"} 2 | Mary | {\"(1,rabbit)\",\"(2,hare)\",\"(3,squirrel)\",\"(4,horse)\"} 3 | Joze | {\"(1,swan)\",\"(2,duck)\",\"(3,turkey)\"} ``` Here's a helper function to show the primitive values that the \"details&#95;t[]\" array encodes without the clutter of the array literal syntax: ```plpgsql create function prettydetails(arr in detailst[]) returns text language plpgsql as $body$ declare arrtype constant text := pgtypeof(arr); ndims constant int := array_ndims(arr); lb constant int := array_lower(arr, 1); ub constant int := array_upper(arr, 1); begin assert arrtype = 'detailst[]', 'assert failed: ndims = %', arr_type; assert ndims = 1, 'assert failed: ndims = %', ndims; declare line text not null := rpad(arr[lb].seq::text||': '||arr[lb].detail_name::text, 12)|| ' | '; begin for j in (lb + 1)..ub loop line := line|| rpad(arr[j].seq::text||': '||arr[j].detail_name::text, 12)|| ' | '; end loop; return line; end; end; $body$; ``` Notice that this is not a general purpose function. Rather, it expects that the input is a _\"details&#95;t; ; and . Invoke it like this: ```plpgsql select masterpk, mastername, pretty_details(details) from masterswithdetails order by 1; ``` It produces this result: ```output masterpk | mastername | pretty_details --+-+-- 1 | John | 1: cat | 2: dog | 2 | Mary | 1: rabbit | 2: hare | 3: squirrel | 4: horse | 3 | Joze | 1: swan | 2: duck | 3: turkey | ``` Next, create a view that uses `unnest()` to re-create the effect of the fully projected, unrestricted inner join of the original data, and inspect the result set that it represents: ```plpgsql create or replace view new_data as with v as ( select master_pk, master_name, unnest(details) as details from masterswithdetails) select master_pk, master_name, (details).seq, (details).detail_name from v; select master_pk, master_name, seq, detail_name from new_data order by master_pk, seq; ``` The result is identical to what the \"original&#95;data\" view represents. 
But rather than relying on visual inspection, can check that the \"new&#95;data\" view and the \"original&#95;data\" view represent the identical result by using SQL thus: ```plpgsql with originalexceptnew as ( select masterpk, mastername, seq, detail_name from original_data except select masterpk, mastername, seq, detail_name from new_data), newexceptoriginal as ( select masterpk, mastername, seq, detail_name from new_data except select masterpk, mastername, seq, detail_name from original_data), originalexceptnewunionnewexceptoriginal as ( select masterpk, mastername, seq, detail_name from originalexceptnew union select masterpk, mastername, seq, detail_name from newexceptoriginal) select case count(*) when 0 then '\"newdata\" is identical to \"originaldata.\"' else '\"newdata\" differs from \"originaldata\".' end as result from originalexceptnewunionnewexceptoriginal; ``` This is the result: ```output result \"newdata\" is identical to \"originaldata.\" ``` Notice that if you choose the \"masters&#95;with&#95;details\" approach (either as a migration from a two-table approach in an extant application, or as an initial choice in a new application) you must appreciate the" }, { "data": "Prerequisite: You must be confident that the \"details\" rows are genuinely private each to its own master and do not implement a many-to-many relationship in the way that the \"order&#95;items\" table does between the \"customers\" table and the \"items\" table in the classic sales order entry model that is frequently used to teach table design according to the relational model. Pros: You can enforce the mandatory one-to-many requirement declaratively and effortlessly. Changing and querying the data will be faster because you use single table, single-row access rather than two-table, multi-row access. You can trivially recapture the query functionality of the two-table approach by implementing a \"new&#95;data\" unnesting view as has been shown. So you can still find, for example, rows in the \"masters&#95;with&#95;details\" table where the \"details\" array has the specified values like this: ```plpgsql with v as ( select masterpk, mastername, seq, detail_name from new_data where detail_name in ('rabbit', 'horse', 'duck', 'turkey')) select master_pk, master_name, arrayagg((seq, detailname)::details_t order by seq) as agg from v group by masterpk, mastername order by 1; ``` This is the result: ```output masterpk | mastername | agg --+-+- 2 | Mary | {\"(1,rabbit)\",\"(4,horse)\"} 3 | Joze | {\"(2,duck)\",\"(3,turkey)\"} ``` Cons: Changing the data in the \"details\" array is rather difficult. Try this (in the two-table regime): ```plpgsql update details set detail_name = 'bobcat' where master_pk = 2 and detail_name = 'squirrel'; select master_pk, master_name, seq, detail_name from original_data where master_pk = 2 order by master_pk, seq; ``` This is the result: ```output masterpk | mastername | seq | detail_name --+-+--+- 2 | Mary | 1 | rabbit 2 | Mary | 2 | hare 2 | Mary | 3 | bobcat 2 | Mary | 4 | horse ``` Here's how you achieve the same effect, and check that it worked as intended, in the new regime. Notice that you need to know the value of \"seq\" for the \"rt\" object that has the \"detail&#95;name\" value of interest. This can be done by implementing a dedicated PL/pgSQL function that encapsulates `array_replace()` or that replaces a value directly by addressing it using its index. But it's hard to do without that. (These methods are described in .) 
```plpgsql update masterswithdetails set details = array_replace(details, '(3,squirrel)', '(3,bobcat)') where master_pk = 2; select master_pk, master_name, seq, detail_name from new_data where master_pk = 2 order by master_pk, seq; ``` The result is identical to the result shown for querying \"original&#95;data\" above. Implementing the requirement that the values of \"detail&#95;name\" must be unique for a given \"masters\" row is trivial in the old regime: ```plpgsql create unique index on details(masterpk, detailname); ``` To achieve the effect in the new regime, you'd need to write a PL/pgSQL function, with return type `boolean` that scans the values in the \"details\" array and returns `TRUE` when there are no duplicates among the values of the \"detail&#95;name\" field and that otherwise returns `FALSE`. Then you'd use this function as the basis for a check constraint in the definition of the \"details&#95;with&#95;masters\" table. This is a straightforward programming task, but it does take more effort than the declarative implementation of the business rule that the two-table regime allows. Purpose: Return the index values, along the specified dimension, of an array as a SQL table (that is, a `SETOF`) these `int`" }, { "data": "Signature: ```output input value: anyarray, integer, boolean return value: SETOF integer ``` The second input parameter specifies the dimension along which the index values should be generated. The third, optional, input parameter controls the ordering of the values. The default value `FALSE` means generate the index values in ascending order from the lower index bound to the upper index bound; and the value `TRUE` means generate the index values in descending order from the upper index bound to the lower index bound. It's useful to use the same array in each of several examples. Make it available thus: ```plpgsql drop function if exists arr() cascade; create function arr() returns int[] language sql as $body$ select array[17, 42, 53, 67]::int[]; $body$; ``` Now demonstrate the basic behavior generatesubscripts():_ ```plpgsql select generate_subscripts(arr(), 1) as subscripts; ``` This is the result: ```output subscripts 1 2 3 4 ``` Asks for the subscripts to be generated in reverse order. ```plpgsql select generate_subscripts(arr(), 1, true) as subscripts; ``` This is the result: ```output subscripts 4 3 2 1 ``` `generateseries()` can be use to produce the same result as `generatesubscripts()`. Notice that `generateseries()` doesn't have a \"reverse\"_ option. This means that, especially when you want the results in reverse order, the syntax is significantly more cumbersome, as this example shows: ```plpgsql select arrayupper(arr(), 1) + 1 - generateseries( array_lower(arr(), 1), array_upper(arr(), 1) ) as subscripts; ``` The following example creates a procedure that compares the results of `generatesubscripts()` and `generateseries()`, when the latter is invoked in a way that will produce the same results as the former. The procedure's input parameter lets you specify along which dimension you want to generate the index values. To emphasize how much easier it is to write the `generate_subscripts()` invocation, the test uses the reverse index order option. The array is constructed using the array literal notation (see ) that explicitly sets the lower index bound along each of the array's three dimensions. is used to aggregate the results from each approach so that they can be compared simply by using the . 
```plpgsql create or replace procedure p(dim in int) language plpgsql as $body$ declare arr constant int[] not null := ' [7:10]={ { { 1, 2, 3, 4},{ 5, 6, 7, 8},{ 9,10,11,12} }, { {13,14,15,16},{17,18,19,20},{21,22,23,24} } }'::int[]; subscripts_1 constant int[] := ( with v as ( select generate_subscripts(arr, dim) as s) select array_agg(s) from v ); lb constant int := array_lower(arr, dim); ub constant int := array_upper(arr, dim); subscripts_2 constant int[] := ( with v as ( select generate_series(lb, ub) as s) select array_agg(s) from v ); begin assert subscripts1 = subscripts2, 'assert failed'; end; $body$; do $body$ begin call p(1); call p(2); call p(3); end; $body$; ``` Each of the calls finishes silently, showing that the asserts hold. Both of the built-ins, `generateseries()` and `generatesubscripts()` are table functions. For this reason, they are amenable to this aliasing locution: ```plpgsql select mytablealias.mycolumnalias from generateseries(1, 3) as mytablealias(mycolumn_alias); ``` This is the result: ```output mycolumnalias -- 1 2 3 ``` The convention among PostgreSQL users is to use `g(i)` with these two built-ins, where \"g\" stands for \"generate\" and \"i\" is the common favorite for a loop iterand in procedural programming. You are very likely, therefore, to see something like this: ```plpgsql select" }, { "data": "from generate_subscripts('[5:7]={17, 42, 53}'::int[], 1) as g(i); ``` with this result: ```output i 5 6 7 ``` This is useful because without the locution, the result of each of these table functions is anonymous. The more verbose alternative is to define the aliases in a `WITH` clause, as was done above: ```plpgsql with g(i) as ( select generate_subscripts('[5:7]={17, 42, 53}'::int[], 1)) select g.i from g; ``` The most obvious use is to tabulate the array values along side of the index values, using the immediately preceding example: ```plpgsql drop table if exists t cascade; create table t(k text primary key, arr int[]); insert into t(k, arr) values ('Array One', '{17, 42, 53, 67}'), ('Array Two', '[5:7]={19, 47, 59}'); select i, (select arr from t where k = 'Array One')[i] from generate_subscripts((select arr from t where k = 'Array One'), 1) as g(i); ``` It produces this result: ```output i | arr +-- 1 | 17 2 | 42 3 | 53 4 | 67 ``` Notice that this: ```output (select arr from t where k = 1)[i] ``` has the same effect as this: ```output (select arr[i] from t where k = 1) ``` It was written the first way to emphasize the annoying textual repetition of \"(select arr from t where k = 1)\". This highlights a critical difference between SQL and a procedural language like PL/pgSQL. The latter allows you so initialize a variable with an arbitrarily complex and verbose expression and then just to use the variable's name thereafter. But SQL has no such notion. Notice that the table t has two rows. You can't generalize the SQL shown immediately above to list the indexes with their array values for both rows. This is where the cross join lateral syntax comes to the rescue: ```plpgsql with c(k, a, idx) as ( select k, arr, indexes.idx from t cross join lateral generate_subscripts(t.arr, 1) as indexes(idx)) select k, idx, a[idx] from c order by k; ``` It produces this result: ```output k | idx | a --+--+- Array One | 1 | 17 Array One | 2 | 42 Array One | 3 | 53 Array One | 4 | 67 Array Two | 5 | 19 Array Two | 6 | 47 Array Two | 7 | 59 ``` Here is the PL/pgSQL re-write. 
```plpgsql do $body$ <<b>>declare arr constant int[] := (select arr from t where k = 'Array Two'); i int; begin for b.i in ( select g.i from generate_subscripts(arr, 1) as g(i)) loop raise info '% | % ', i, arr[i]; end loop; end b; $body$; ``` The result (after manually stripping the \"INFO:\" prompts), is the same as the SQL approach that uses `generatesubscripts()` with cross join lateral_, shown above, produces: ```output 5 | 19 6 | 47 7 | 59 ``` Notice that having made the transition to a procedural approach, there is no longer any need to use `generate_subscripts()`. Rather, and can be used in the ordinary way to set the bounds of the integer variant of a `FOR` loop: ```plpgsql do $body$ declare arr constant int[] := (select arr from t where k = 1); begin for i in reverse arrayupper(arr," }, { "data": "1) loop raise info '% | % ', i, arr[i]; end loop; end; $body$; ``` It produces the same result. Try these two examples: ```plpgsql with v as ( select array[17, 42, 53]::int[] as arr) select (select arr[idx] from v) as val from generate_subscripts((select arr from v), 1) as subscripts(idx); ``` and: ```plpgsql with v as ( select array[17, 42, 53]::int[] as arr) select unnest((select arr from v)) as val; ``` Each uses the same array, \"array[1, 2, 3]::int[]\", and each produces the same result, thus: ```output val -- 17 42 53 ``` One-dimensional arrays are by far the most common use of the array data type. This is probably because a one-dimensional array of \"row\" type values naturally models a schema-level tablealbeit that an array brings an unavoidable ordering of elements while the rows in a schema-level table have no intrinsic order. In the same way, an array of scalar elements models the values in a column of a schema-level table. Certainly, almost all the array examples in the use one-dimensional arrays. Further, it is common to want to present an array's elements as a `SETOF` these values. For this use case, and as the two code examples above show, `unnest()` is simpler to use than `generatesubscripts()`. It is far less common to care about the actual dense sequence of index values that address an array's elementsfor which purpose you would need `generatesubscripts()`. Moreover, `unnest()` (as has already been shown in this section) \"flattens\" an array of any dimensionality into the sequence of its elements in row-major order but `generate_subscripts()` brings no intrinsic functionality to do this. You can certainly achieve the result, as these two examples show for a two-dimensional array. Compare this: ```plpgsql select unnest('{{17, 42, 53},{57, 67, 73}}'::int[]) as element; ``` with this: ```plpgsql with a as ( select '{{17, 42, 53},{57, 67, 73}}'::int[] as arr), s1 as ( select generate_subscripts((select arr from a), 1) as i), s2 as ( select generate_subscripts((select arr from a), 2) as j) select (select arr from a) element from s1,s2 order by s1.i, s2.j; ``` Again, each uses the same array (this time '{{17, 42, 53},{57, 67, 73}}'::int[]) and each produces the same result, thus: ```output element 17 42 53 57 67 73 ``` You could generalize this approach for an array of any dimensionality. However, the `generatesubscripts()` approach is more verbose, and therefore more error-prone, than the `unnest()` approach. However, because \"order by s1.i, s2.j\"_ makes your ordering rule explicit, you could define any ordering that suited your purpose. See . The `FOREACH` loop brings dedicated syntax for looping over the contents of an array. 
The loop construct uses the `SLICE` keyword to specify the subset of the array's elements over which you want to iterate. Typically you specify that the iterand is an array with fewer dimensions than the array over which you iterate. Because this functionality is intrinsic to the `FOREACH` loop, and because it would be very hard to write the SQL statement that produces this kind of slicing, you should use the `FOREACH` loop when you have this kind of requirement. If you want to consume the output in a surrounding SQL statement, you can use `FOREACH` in a PL/pgSQL table function that returns a `SETOF` the sub-array that you need. You specify the `RETURNS` clause of such a table function using the `TABLE` keyword." } ]
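As a minimal sketch of the `FOREACH ... SLICE` looping just described (not part of the original section), the following anonymous block iterates over the same two-dimensional array used earlier and reports each one-dimensional sub-array:

```plpgsql
do $body$
declare
  arr constant int[] not null := '{{17, 42, 53},{57, 67, 73}}'::int[];
  sub_arr int[];
begin
  -- SLICE 1 makes the iterand a one-dimensional sub-array rather than a scalar.
  foreach sub_arr slice 1 in array arr loop
    raise info 'sub-array: %', sub_arr::text;
  end loop;
end;
$body$;
```

With `SLICE 0` (or no `SLICE` clause at all), the same loop visits the six scalar elements in row-major order, mimicking the behavior of `unnest()`.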
{ "category": "App Definition and Development", "file_name": "LICENSING.md", "project_name": "KubeBlocks by ApeCloud", "subcategory": "Database" }
[ { "data": "License names used in this document are as per . The default license for this project is . The following directories and their subdirectories are licensed under Apache-2.0: ``` apis/ deploy/ docker/ externalapis/ hack/ pkg/client/ test/ tools/ ``` The following directories and their subdirectories are licensed under their original upstream licenses: ``` cmd/probe/pkg/component/ pkg/cli/cmd/plugin/download/ vendor/ ```" } ]
{ "category": "App Definition and Development", "file_name": "v22.8.9.24-lts.md", "project_name": "ClickHouse", "subcategory": "Database" }
[ { "data": "sidebar_position: 1 sidebar_label: 2022 Backported in : Keeper performance improvement: improve commit performance for cases when many different nodes have uncommitted states. This should help with cases when a follower node can't sync fast enough. (). Backported in : Update tzdata to 2022f. Mexico will no longer observe DST except near the US border: https://www.timeanddate.com/news/time/mexico-abolishes-dst-2022.html. Chihuahua moves to year-round UTC-6 on 2022-10-30. Fiji no longer observes DST. See https://github.com/google/cctz/pull/235 and https://bugs.launchpad.net/ubuntu/+source/tzdata/+bug/1995209. (). Backported in : Before the fix, the user-defined config was preserved by RPM in `$file.rpmsave`. The PR fixes it and won't replace the user's files from packages. (). Backported in : Add a CI step to mark commits as ready for release; soft-forbid launching a release script from branches other than master. (). Backported in : Fixed `Unknown identifier (aggregate-function)` exception which appears when a user tries to calculate WINDOW ORDER BY/PARTITION BY expressions over aggregate functions: ``` CREATE TABLE default.tenk1 ( `unique1` Int32, `unique2` Int32, `ten` Int32 ) ENGINE = MergeTree ORDER BY tuple() SETTINGS index_granularity = 8192; SELECT ten, sum(unique1) + sum(unique2) AS res, rank() OVER (ORDER BY sum(unique1) + sum(unique2) ASC) AS rank FROM complex GROUP BY ten ORDER BY ten ASC; ``` which gives: ``` Code: 47. DB::Exception: Received from localhost:9000. DB::Exception: Unknown identifier: sum(unique1); there are columns: unique1, unique2, ten: While processing sum(unique1) + sum(unique2) ASC. (UNKNOWN_IDENTIFIER) ```. (). Backported in : A segmentation fault related to DNS & c-ares has been reported. The below error occurred in multiple threads: ``` 2022-09-28 15:41:19.008,2022.09.28 15:41:19.008088 (). Backported in : Fix rare NOT_FOUND_COLUMN_IN_BLOCK error when projection is possible to use but there is no projection available. This fixes . The bug was introduced in https://github.com/ClickHouse/ClickHouse/pull/25563. (). Do not warn about kvm-clock (). Revert revert 41268 disable s3 parallel write for part moves to disk s3 (). Always run `BuilderReport` and `BuilderSpecialReport` in all CI types ()." } ]
{ "category": "App Definition and Development", "file_name": "cloud-security-features.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: Security architecture drilldown linkTitle: Security architecture drilldown description: Explore the security architecture of YugabyteDB Managed data, our fully managed YugabyteDB-as-a-Service. menu: preview_yugabyte-cloud: parent: cloud-security identifier: cloud-security-features weight: 10 type: docs YugabyteDB Managed is a fully managed YugabyteDB-as-a-Service that allows you to run YugabyteDB clusters on public cloud providers such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). YugabyteDB Managed runs on top of . It is responsible for creating and managing customer YugabyteDB clusters deployed on cloud provider infrastructure. All customer clusters are firewalled from each other. Outside connections are also firewalled according to the rules that you assign to your clusters. You can also connect Dedicated (that is, not Sandbox) clusters to virtual private clouds (VPCs) on the public cloud provider of your choice (subject to the IP allow list rules). YugabyteDB Managed uses both encryption in transit and encryption at rest to protect clusters and cloud infrastructure. All communication between YugabyteDB Managed architecture domains is encrypted in transit using TLS. Likewise, all communication between clients or applications and clusters is encrypted in transit. Every cluster has its own certificates, generated when the cluster is created and signed by the Yugabyte internal PKI. Root and intermediate certificates are not extractable from the hardware security appliances. Data at rest, including clusters and backups, is AES-256 encrypted using native cloud provider technologies - S3 and EBS volume encryption for AWS, Azure disk encryption, and server-side and persistent disk encryption for GCP. Encryption keys are managed by the cloud provider and anchored by hardware security appliances. Customers can enable YugabyteDB and as additional security controls. YugabyteDB Managed provides DDoS and application layer protection, and automatically blocks network protocol and volumetric DDoS attacks. Yugabyte secures YugabyteDB Managed databases using the same default to secure their own YugabyteDB installations, including: authentication role-based access control dedicated users limited network exposure encryption in transit encryption at rest For information on data privacy and compliance, refer to the . Yugabyte supports audit logging at the account and database level. For information on database audit logging, refer to ." } ]
{ "category": "App Definition and Development", "file_name": "ycsb-jdbc.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: Benchmark YSQL performance using YCSB and JDBC headerTitle: YCSB linkTitle: YCSB description: Learn how to test the YSQL API using the YCSB benchmark. headcontent: Benchmark YSQL performance using YCSB with standard JDBC binding menu: v2.18: identifier: ycsb-1-ysql parent: benchmark weight: 5 type: docs <ul class=\"nav nav-tabs-alt nav-tabs-yb\"> <li > <a href=\"../ycsb-jdbc/\" class=\"nav-link active\"> <i class=\"icon-postgres\" aria-hidden=\"true\"></i> JDBC Binding </a> </li> <li > <a href=\"../ycsb-ysql/\" class=\"nav-link\"> <i class=\"icon-postgres\" aria-hidden=\"true\"></i> YSQL Binding </a> </li> <li > <a href=\"../ycsb-ycql/\" class=\"nav-link\"> <i class=\"icon-cassandra\" aria-hidden=\"true\"></i> YCQL Binding </a> </li> </ul> This document describes how to use the standard JDBC binding to run the YCSB benchmark. For additional information about YCSB, refer to the following: To run the benchmark, ensure that you meet the prerequisites and complete steps such as starting YugabyteDB and configuring its properties. The binaries are compiled with Java 13 and it is recommended to run these binaries with that version. Run the following commands to download the YCSB binaries: ```sh $ cd $HOME $ wget https://github.com/yugabyte/YCSB/releases/download/1.0/ycsb.tar.gz $ tar -xzf ycsb.tar.gz $ cd YCSB ``` Ensure that you have the YSQL shell and that its location is included in the `PATH` variable, as follows: ```sh $ export PATH=$PATH:/path/to/ysqlsh ``` You can find `ysqlsh` in your YugabyteDB installation's `bin` directory. For example: ```sh $ export PATH=$PATH:/Users/yugabyte/code/bin ``` Start your YugabyteDB cluster by following the procedure described in . Note the IP addresses of the nodes in the cluster, as these addresses are required when configuring the properties file. Update the file `db.properties` in the YCSB directory with the following contents, replacing values for the IP addresses in the `db.url` field with the correct values for all the nodes that are part of the cluster: ```properties db.driver=org.postgresql.Driver db.url=jdbc:postgresql://<ip1>:5433/ycsb;jdbc:postgresql://<ip2>:5433/ycsb;jdbc:postgresql://<ip3>:5433/ycsb; db.user=yugabyte db.passwd= ``` The other configuration parameters are described in . Use the following script `run_jdbc.sh` to load and run all the workloads: ```sh $ ./run_jdbc.sh --ip <ip> ``` The preceding command runs the workload on a table with a million rows. To run the benchmark on a table with a different row count, use the following command: ```sh $ ./run_jdbc.sh --ip <ip> --recordcount <number of rows> ``` To obtain the maximum performance out of the system, you can tune the `threadcount` parameter in the script. As a reference, for a" }, { "data": "instance with 16 cores and 32GB RAM, you use a threadcount of 32 for the loading phase and 256 for the execution phase. The `run_jdbc.sh` script creates two result files per workload: one for the loading, and one for the execution phase with the details of throughput and latency. For example, for a workload it creates, inspect the `workloada-ysql-load.dat` and `workloada-ysql-transaction.dat` files. 
Optionally, you can run workloads individually using the following steps: Start the YSQL shell using the following command: ```sh $ ./bin/ysqlsh -h <ip> ``` Create the `ycsb` database as follows: ```sql yugabyte=# CREATE DATABASE ycsb; ``` Connect to the database as follows: ```sql yugabyte=# \\c ycsb ``` Create the table as follows: ```sql ycsb=# CREATE TABLE usertable ( YCSB_KEY TEXT, FIELD0 TEXT, FIELD1 TEXT, FIELD2 TEXT, FIELD3 TEXT, FIELD4 TEXT, FIELD5 TEXT, FIELD6 TEXT, FIELD7 TEXT, FIELD8 TEXT, FIELD9 TEXT, PRIMARY KEY (YCSB_KEY ASC)) SPLIT AT VALUES (('user10'),('user14'),('user18'), ('user22'),('user26'),('user30'),('user34'),('user38'), ('user42'),('user46'),('user50'),('user54'),('user58'), ('user62'),('user66'),('user70'),('user74'),('user78'), ('user82'),('user86'),('user90'),('user94'),('user98')); ``` Load the data before you start the `jdbc` workload: ```sh $ ./bin/ycsb load jdbc -s \\ -P db.properties \\ -P workloads/workloada \\ -p recordcount=1000000 \\ -p operationcount=10000000 \\ -p threadcount=32 ``` Run the workload as follows: {{< note title=\"Note\" >}} The `recordcount` parameter in the following `ycsb` commands should match the number of rows in the table. {{< /note >}} ```sh $ ./bin/ycsb run jdbc -s \\ -P db.properties \\ -P workloads/workloada \\ -p recordcount=1000000 \\ -p operationcount=10000000 \\ -p threadcount=256 ``` Run other workloads (for example, `workloadb`) by changing the corresponding argument in the preceding command, as follows: ```sh $ ./bin/ycsb run jdbc -s \\ -P db.properties \\ -P workloads/workloadb \\ -p recordcount=1000000 \\ -p operationcount=10000000 \\ -p threadcount=256 ``` When run on a 3-node cluster of `c5.4xlarge` AWS instances (16 cores, 32GB of RAM, and 2 EBS volumes) all belonging to the same availability zone with the client VM running in the same availability zone, expect the following results for 1 million rows: | Workload | Throughput (ops/sec) | Read Latency | Write Latency | | :- | :- | :-- | : | | Workload A | 37,443 | 1.5ms | 12 ms update | | Workload B | 66,875 | 4ms | 7.6ms update | | Workload C | 77,068 | 3.5ms | Not applicable | | Workload D | 63,676 | 4ms | 7ms insert | | Workload E | 16,642 | 15ms scan | Not applicable | | Workload F | 29,500 | 2ms | 15ms read-modify-write | For an additional example, refer to ." } ]
{ "category": "App Definition and Development", "file_name": "windowinto.md", "project_name": "Beam", "subcategory": "Streaming & Messaging" }
[ { "data": "title: \"WindowInto\" <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <table align=\"left\"> <a target=\"_blank\" class=\"button\" href=\"https://beam.apache.org/releases/pydoc/current/apachebeam.transforms.window.html?highlight=window#module-apachebeam.transforms.window\"> <img src=\"/images/logos/sdks/python.png\" width=\"20px\" height=\"20px\" alt=\"Pydoc\"> Pydoc </a> </table> <br><br> Logically divides up or groups the elements of a collection into finite windows according to a function. {{< playground height=\"700px\" >}} {{< playgroundsnippet language=\"py\" path=\"SDKPYTHON_Window\" show=\"window\" >}} {{< /playground >}} produces a collection where each element consists of a key and all values associated with that key. applies a function to determine a timestamp to each element in the output collection." } ]
{ "category": "App Definition and Development", "file_name": "v1.21.md", "project_name": "EDB", "subcategory": "Database" }
[ { "data": "History of user-visible changes in the 1.21 minor release of CloudNativePG. For a complete list of changes, please refer to the on the release branch in GitHub. Release date: Apr 24, 2024 !!! Warning Version 1.21 is approaching its End-of-Life (EOL) on May 24, 2024. If you haven't already, please begin planning for an upgrade promptly to ensure continued support and security. Enhancements: Users can now configure the `walloghints` PostgreSQL parameter (#4218) (#4218) Fully Qualified Domain Names (FQDN) in URIs for automatically generated secrets (#4095) Cleanup of instance Pods not owned by the Cluster during Cluster restore (#4141) Error detection when invoking `barman-cloud-wal-restore` in `recovery` bootstrap (#4101) Fixes: Ensured that before a switchover, the elected replica is in streaming replication (#4288) Correctly handle parsing errors of instances' LSN when sorting them (#4283) Recreate the primary Pod if there are no healthy standbys available to promote (#4132) Cleanup `PGDATA` in case of failure of the restore job (#4151) Reload certificates on configuration update (#3705) `cnpg` plugin for `kubectl`: Improve the arguments handling of `destroy`, `fencing`, and `promote` plugin commands (#4280) Correctly handle the percentage of the backup progress in `cnpg status` (#4131) Gracefully handle databases with no sequences in `sync-sequences` command (#4346) Changes: The Grafana dashboard now resides at https://github.com/cloudnative-pg/grafana-dashboards (#4154) Release date: Mar 14, 2024 Allow customization of the `wal_level` GUC in PostgreSQL (#4020) Add the `cnpg.io/skipWalArchiving` annotation to disable WAL archiving when set to `enabled` (#4055) Enrich the `cnpg` plugin for `kubectl` with the `publication` and `subscription` command groups to imperatively set up PostgreSQL native logical replication (#4052) Allow customization of `CERTIFICATEDURATION` and `EXPIRINGCHECK_THRESHOLD` for automated management of TLS certificates handled by the operator (#3686) Introduce initial support for tab-completion with the `cnpg` plugin for `kubectl` (#3875) Retrieve the correct architecture's binary from the corresponding catalog in the running operator image during in-place updates, enabling the operator to inject the correct binary into any Pod with a supported architecture (#3840) Properly synchronize PVC group labels with those on the pods, a critical aspect when all pods are deleted and the operator needs to decide which Pod to recreate first (#3930) Disable `walsendertimeout` when cloning a replica to prevent timeout errors due to slow connections (#4080) Ensure that volume snapshots are ready before initiating recovery bootstrap procedures, preventing an error condition where recovery with incomplete backups could enter an error loop (#3663) Prevent an error loop when unsetting connection limits in managed roles (#3832) Resolve a corner case in hibernation where the instance pod has been deleted, but the cluster status still has the hibernation condition set to false (#3970) Correctly detect Google Cloud capabilities for Barman Cloud (#3931) Use `Role` instead of `ClusterRole` for operator permissions in OLM, requiring fewer privileges when installed on a per-namespace basis (#3855, Enforce fully-qualified object names in SQL queries for the PgBouncer pooler (#4080) Follow Kubernetes recommendations to switch from client-side to server-side application of manifests, requiring the `--server-side` option by default when installing the operator (#3729). 
Set the default operand image to PostgreSQL 16.2 (#3823). Release date: Feb 2, 2024 Enhancements: Tailor ephemeral volume storage in a Postgres cluster using a claim template through the `ephemeralVolumeSource`" }, { "data": "(#3678) Introduce the `pgadmin4` command in the `cnpg` plugin for `kubectl`, providing a straightforward method to demonstrate connecting to a given database cluster and navigate its content in a local environment such as kind - for evaluation purposes only (#3701) Allow customization of PostgreSQL's ident map file via the `.spec.postgresql.pg_ident` stanza, through a list of user name maps (#3534) Fixes: Prevent an unrecoverable issue with `pg_rewind` failing due to `postgresql.auto.conf` being read-only on clusters where the `ALTER SYSTEM` SQL command is disabled - the default (#3728) Reduce the risk of disk space shortage when using the import facility of the `initdb` bootstrap method, by disabling the durability settings in the PostgreSQL instance for the duration of the import process (#3743) Avoid pod restart due to erroneous resource quantity comparisons, e.g. \"1 != 1000m\" (#3706) Properly escape reserved characters in `pgpass` connection fields (#3713) Prevent systematic rollout of pods due to considering zero and nil different values in `.spec.projectedVolumeTemplate.sources` (#3647) Ensure configuration coherence by pruning from `postgresql.auto.conf` any options now incorporated into `override.conf` (#3773) Release date: Dec 21, 2023 Security: By default, TLSv1.3 is now enforced on all PostgreSQL 12 or higher installations. Additionally, users can configure the `ssl_ciphers`, `sslminprotocolversion`, and `sslmaxprotocolversion` GUCs (#3408). Integration of Docker image scanning with Dockle and Snyk to enhance security measures (#3300). Enhancements: Improved reconciliation of external clusters (#3533). Introduction of the ability to enable/disable the `ALTER SYSTEM` command (#3535). Support for Prometheus' dynamic relabeling through the `podMonitorMetricRelabelings` and `podMonitorRelabelings` options in the `.spec.monitoring` stanza of the `Cluster` and `Pooler` resources (#3075). Enhanced computation of the first recoverability point and last successful backup by considering volume snapshots alongside object-store backups (#2940). <!-- NO 1.20 --> Elimination of the use of the `PGPASSFILE` environment variable when establishing a network connection to PostgreSQL (#3522). Improved `cnpg report` plugin command by collecting a cluster's PVCs (#3357). Enhancement of the `cnpg status` plugin command, providing information about managed roles, including alerts (#3310). Introduction of Red Hat UBI 8 container images for the operator, suitable for OLM deployments. <!-- NO 1.20 --> Connection pooler: Scaling down instances of a `Pooler` resource to 0 is now possible (#3517). Addition of the `cnpg.io/podRole` label with a value of 'pooler' to every pooler deployment, differentiating them from instance pods (#3396). Fixes: Reconciliation of metadata, annotations, and labels of `PodDisruptionBudget` resources (#3312 and #3434). Reconciliation of the metadata of the managed credential secrets (#3316). Resolution of a bug in the backup snapshot code where an error reading the body would be handled as an overall error, leaving the backup process indefinitely stuck (#3321). Implicit setting of online backup with the `cnpg backup` plugin command when either `immediate-checkpoint` or `wait-for-archive` options are requested (#3449). 
Disabling of walsendertimeout when joining through pg_basebackup (#3586) Reloading of secrets used by external clusters (#3565) Connection pooler: Ensuring the controller watches all secrets owned by a `Pooler` resource (#3428). Reconciliation of `RoleBinding` for `Pooler` resources (#3391). Reconciliation of `imagePullSecret` for `Pooler` resources (#3389). Reconciliation of the service of a `Pooler` and addition of the required labels (#3349). Extension of `Pooler` labels to the deployment as well, not just the pods (#3350). Changes: Default operand image set to PostgreSQL" }, { "data": "(#3270). Release date: Nov 3, 2023 Enhancements: Introduce support for online/hot backups with volume snapshots by using the PostgreSQL API for physical online base backups. Default configuration for hot/cold backups on a given Postgres cluster can be controlled through the `online` option and the `onlineConfiguration` stanza in `.spec.backup.volumeSnapshot`. Unless explicitly set, backups on volume snapshots are now taken online by default (#3102) Introduce the possibility to override the above default settings on volume snapshot backup using the `ScheduledBackup` and `Backup` resources (#3208, #3226) Enhance cold backup on volume snapshots by reducing the time window in which the target instance (standby or primary) is fenced, by lifting it as soon as the volume snapshot have been cut and provisioned (#3210) During a recovery from volume snapshots, ensure that the provided volume snapshots are coherent by validating the existing labels and annotations The `backup` command of the `cnpg` plugin for `kubectl` improves the volume snapshot backup experience through the `--online`, `--immediate-checkpoint`, and `--wait-for-archive` runtime options Enhance the `status` command of the `cnpg` plugin for `kubectl` with progress information on active streaming base backups (#3101) Allow the configuration of `maxpreparedstatements` with the pgBouncer `Pooler` resource (#3174) Fixes: Suspend WAL archiving during a switchover and resume it when it is completed (#3227) Ensure that the instance manager always uses `synchronous_commit = local` when managing the PostgreSQL cluster (#3143) Custom certificates for streaming replication user through `.spec.certificates.replicationTLSSecret` are now working (#3209) Set the `cnpg.io/cluster` label to the `Pooler` pods (#3153) Reduce the number of labels in `VolumeSnapshots` resources and render them into more appropriate annotations (#3151) Changes: Volume snapshot backups, introduced in 1.21.0, are now online/hot by default; in order to restore offline/cold backups set `.spec.backup.volumeSnapshot` to `false` Stop using the `postgresql.auto.conf` file inside PGDATA to control Postgres replication settings, and replace it with a file named `override.conf` (#2812) Technical enhancements: Use extended query protocol for PostgreSQL in the instance manager (#3152) Release date: Oct 12, 2023 !!! Important \"Important changes from previous versions\" This release contains a few changes to the default settings of CloudNativePG with the goal to improve general stability and security through predefined values. If you are upgrading from a previous version, please carefully read the \"Important Changes\" section below, as well as the . Features: Volume Snapshot support for backup and recovery: leverage the standard Kubernetes API on Volume Snapshots to take advantage of capabilities like incremental and differential copy for both backup and recovery operations. 
This first step, covering cold backups from a standby, will continue in 1.22 with support for hot backups using the PostgreSQL API and tablespaces. OLM installation method: introduce support for Operator Lifecycle Manager via OperatorHub.io for the latest patch version of the latest minor release through the stable channel. Many thanks to EDB for donating the bundle of their \"EDB Postgres for Kubernetes\" operator and adapting it for CloudNativePG. Important Changes: Change the default value of `stopDelay` to 1800 seconds instead of 30 seconds (#2848) Introduce a new parameter, called `smartShutdownTimeout`, to control the window of time reserved for the smart shutdown of Postgres to complete; the general formula to compute the overall timeout to stop Postgres is `max(stopDelay - smartShutdownTimeout, 30)` (#2848) Change the default value of `startDelay` to 3600, instead of 30" }, { "data": "(#2847) Replace the livenessProbe initial delay with a more proper Kubernetes startup probe to deal with the start of a Postgres server (#2847) Change the default value of `switchoverDelay` to 3600 seconds instead of 40000000 seconds (#2846) Disable superuser access by default for security (#2905) Enable replication slots for HA by default (#2903) Stop supporting the `postgresql` label - replaced by `cnpg.io/cluster` in 1.18 (#2744) Security: Add a default `seccompProfile` to the operator deployment (#2926) Enhancements: Enable bootstrap of a replica cluster from a consistent set of volume snapshots (#2647) Enable full and Point In Time recovery from a consistent set of volume snapshots (#2390) Introduce the `cnpg.io/coredumpFilter` annotation to control the content of a core dump generated in the unlikely event of a PostgreSQL crash, by default set to exclude shared memory segments from the dump (#2733) Allow to configure ephemeral-storage limits for the shared memory and temporary data ephemeral volumes (#2830) Validate resource limits and requests through the webhook (#2663) Ensure that PostgreSQL's `shared_buffers` are coherent with the pods' allocated memory resources (#2840) Add `uri` and `jdbc-uri` fields in the credential secrets to facilitate developers when connecting their applications to the database (#2186) Add a new phase `Waiting for the instances to become active` for finer control of a cluster's state waiting for the replicas to be ready (#2612) Improve detection of Pod rollout conditions through the `podSpec` annotation (#2243) Add primary timestamp and uptime to the kubectl plugin's `status` command (#2953) Fixes: Ensure that the primary instance is always recreated first by prioritizing ready PVCs with a primary role (#2544) Honor the `cnpg.io/skipEmptyWalArchiveCheck` annotation during recovery to bypass the check for an empty WAL archive (#2731) Prevent a cluster from being stuck when the PostgreSQL server is down but the pod is up on the primary (#2966) Avoid treating the designated primary in a replica cluster as a regular HA replica when replication slots are enabled (#2960) Reconcile services every time the selectors change or when labels/annotations need to be changed (#2918) Defaults to `app` both the owner and database during recovery bootstrap (#2957) Avoid write-read concurrency on cached cluster (#2884) Remove empty items, make them unique and sort in the `ResourceName` sections of the generated roles (#2875) Ensure that the `ContinuousArchiving` condition is properly set to 'failed' in case of errors (#2625) Make the `Backup` resource reconciliation cycle more 
resilient on interruptions by stopping only if the backup is completed or failed (#2591) Reconcile PodMonitor `labels` and `annotations` (#2583) Fix backup failure due to missing RBAC `resourceNames` on the `Role` object (#2956) Observability: Add TCP port label to default `pgstatreplication` metric (#2961) Fix the `pgwalstat` default metric for Prometheus (#2569) Improve the `pg_replication` default metric for Prometheus (#2744 and Use `alertInstanceLabelFilter` instead of `alertName` in the provided Grafana dashboard Enforce `standardconformingstrings` in metric collection (#2888) Changes: Set the default operand image to PostgreSQL 16.0 Fencing now uses PostgreSQL's fast shutdown instead of smart shutdown to halt an instance (#3051) Rename webhooks from kb.io to cnpg.io group (#2851) Replace the `cnpg snapshot` command with `cnpg backup -m volumeSnapshot` for the `kubectl` plugin Let the `cnpg hibernate` plugin command use the `ClusterManifestAnnotationName` and `PgControldataAnnotationName` annotations on PVCs (#2657) Add the `cnpg.io/instanceRole` label while deprecating the existing `role` label (#2915) Technical enhancements: Replace `k8s-api-docgen` with `gen-crd-api-reference-docs` to automatically build the API reference documentation (#2606)" } ]
{ "category": "App Definition and Development", "file_name": "sql-operators.md", "project_name": "Druid", "subcategory": "Database" }
[ { "data": "id: sql-operators title: \"Druid SQL Operators\" sidebar_label: \"Operators\" <!-- ~ Licensed to the Apache Software Foundation (ASF) under one ~ or more contributor license agreements. See the NOTICE file ~ distributed with this work for additional information ~ regarding copyright ownership. The ASF licenses this file ~ to you under the Apache License, Version 2.0 (the ~ \"License\"); you may not use this file except in compliance ~ with the License. You may obtain a copy of the License at ~ ~ http://www.apache.org/licenses/LICENSE-2.0 ~ ~ Unless required by applicable law or agreed to in writing, ~ software distributed under the License is distributed on an ~ \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY ~ KIND, either express or implied. See the License for the ~ specific language governing permissions and limitations ~ under the License. --> <!-- The format of the tables that describe the functions and operators should not be changed without updating the script create-sql-docs in web-console/script/create-sql-docs, because the script detects patterns in this markdown file and parse it to TypeScript file for web console --> :::info Apache Druid supports two query languages: Druid SQL and . This document describes the SQL language. ::: Operators in typically operate on one or two values and return a result based on the values. Types of operators in Druid SQL include arithmetic, comparison, logical, and more, as described here. When performing math operations, Druid uses 64-bit integer (long) data type unless there are double or float values. If an operation uses float or double values, then the result is a double, which is a 64-bit float. The precision of float and double values is defined by and . Keep the following guidelines in mind to help you manage precision issues: Long values can store up to 2^63 accurately with an additional bit used for the sign. Float values use 32 bits, and doubles use 64 bits. Both types are impacted by floating point precision. If you need exact decimal values, consider storing the number in a non-decimal format as a long value (up to the limit for longs). For example, if you need three decimal places, store the number multiplied by 1000 and then divide by 1000 when" }, { "data": "|Operator|Description| |--|--| |`x + y` |Add| |`x - y` |Subtract| |`x * y` |Multiply| |`x / y` |Divide| For the datetime arithmetic operators, `interval_expr` can include interval literals like `INTERVAL '2' HOUR`. This operator treats days as uniformly 86400 seconds long, and does not take into account daylight savings time. To account for daylight savings time, use the . Also see for datetime arithmetic. |Operator|Description| |--|--| |`timestampexpr + intervalexpr`|Add an amount of time to a timestamp.| |`timestampexpr - intervalexpr`|Subtract an amount of time from a timestamp.| Also see the . |Operator|Description| |--|--| |<code>x &#124;&#124; y</code>|Concatenate strings `x` and `y`.| |Operator|Description| |--|--| |`x = y` |Equal to| |`x IS NOT DISTINCT FROM y`|Equal to, considering `NULL` as a value. Never returns `NULL`.| |`x <> y`|Not equal to| |`x IS DISTINCT FROM y`|Not equal to, considering `NULL` as a value. 
Never returns `NULL`.| |`x > y` |Greater than| |`x >= y`|Greater than or equal to| |`x < y` |Less than| |`x <= y`|Less than or equal to| |Operator|Description| |--|--| |`x AND y`|Boolean AND| |`x OR y`|Boolean OR| |`NOT x`|Boolean NOT| |`x IS NULL`|True if x is NULL or empty string| |`x IS NOT NULL`|True if x is neither NULL nor empty string| |`x IS TRUE`|True if x is true| |`x IS NOT TRUE`|True if x is not true| |`x IS FALSE`|True if x is false| |`x IS NOT FALSE`|True if x is not false| |`x BETWEEN y AND z`|Equivalent to `x >= y AND x <= z`| |`x NOT BETWEEN y AND z`|Equivalent to `x < y OR x > z`| |`x LIKE pattern [ESCAPE esc]`|True if x matches a SQL LIKE pattern (with an optional escape)| |`x NOT LIKE pattern [ESCAPE esc]`|True if x does not match a SQL LIKE pattern (with an optional escape)| |`x IN (values)`|True if x is one of the listed values| |`x NOT IN (values)`|True if x is not one of the listed values| |`x IN (subquery)`|True if x is returned by the subquery. This will be translated into a join; see for details.| |`x NOT IN (subquery)`|True if x is not returned by the subquery. This will be translated into a join; see for details.| |Operator|Description| |--|--| |`PIVOT (aggregationfunction(columntoaggregate) FOR columnwithvaluestopivot IN (pivotedcolumn1 [, pivoted_column2 ...]))`|Carries out an aggregation and transforms rows into columns in the output.| |`UNPIVOT (valuescolumn FOR namescolumn IN (unpivotedcolumn1 [, unpivotedcolumn2 ... ]))`|Transforms existing column values into rows.|" } ]
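As a concrete illustration of the fixed-point workaround mentioned in the precision guidelines above (store a decimal multiplied by 1000 as a long, divide by 1000 only when needed), the following hedged sketch assumes a hypothetical datasource `transactions` with columns `order_id` and `amount_millis`; only the names and data are assumptions, the technique itself follows the guideline of comparing in the scaled long domain and rescaling only for presentation.

```sql
-- Hypothetical datasource "transactions"; "amount_millis" holds the amount
-- multiplied by 1000 so that three decimal places are stored exactly as a long.
SELECT
  order_id,
  amount_millis / 1000.0 AS amount   -- rescale only when presenting the value
FROM transactions
WHERE amount_millis > 19 * 1000      -- compare as longs: amount > 19.000
LIMIT 10;
```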
{ "category": "App Definition and Development", "file_name": "README.md", "project_name": "OpenMessaging", "subcategory": "Streaming & Messaging" }
[ { "data": "](https://travis-ci.org/openmessaging/openmessaging-java) ](http://search.maven.org/#search%7Cga%7C1%7Copenmessaging) ](https://openmessaging.herokuapp.com/) ](https://www.apache.org/licenses/LICENSE-2.0.html) OpenMessaging, which includes the establishment of industry guidelines and messaging, streaming specifications to provide a common framework for finance, e-commerce, IoT and big-data area. The design principles are the cloud-oriented, simplicity, flexibility, and language independent in distributed heterogeneous environments. Conformance to these specifications will make it possible to develop a heterogeneous messaging applications across all major platforms and operating systems. ." } ]
{ "category": "App Definition and Development", "file_name": "CHANGELOG.0.23.4.md", "project_name": "Apache Hadoop", "subcategory": "Database" }
[ { "data": "<! --> | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | HA: Simple HealthMonitor class to watch an HAService | Major | ha | Todd Lipcon | Todd Lipcon | | | Benchmarking random reads with DFSIO | Major | benchmarks, test | Konstantin Shvachko | Konstantin Shvachko | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Stop using \"mapred.used.genericoptionsparser\" to avoid unnecessary warnings | Minor | util | Harsh J | Harsh J | | | hadoop jar command should respect HADOOP\\_OPTS | Minor | scripts | Steven Willis | Steven Willis | | | allow jobs to set a JAR that is in the distributed cached | Major | mrv1, mrv2 | Alejandro Abdelnur | Robert Kanter | | | TestDFSIO should also test compression reading/writing from command-line. | Minor | benchmarks | Plamen Jeliazkov | Plamen Jeliazkov | | | Plugable process tree | Major | nodemanager | Radim Kolar | Radim Kolar | | | Providing a random seed to Slive should make the sequence of filenames completely deterministic | Major | performance, test | Ravi Prakash | Ravi Prakash | | | Change the default scheduler to the CapacityScheduler | Major | scheduler | Siddharth Seth | Siddharth Seth | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | broken doc link for yarn-default.xml in site.xml | Major | documentation | Patrick Hunt | Patrick Hunt | | | FileContext#checkPath should handle URIs with no port | Major | fs | Aaron T. Myers | Aaron T. Myers | | | HeartbeatManager#Monitor may wrongly hold the writelock of namesystem | Major | . | Jing Zhao | Jing Zhao | | | Deadlock between WritableComparator and WritableComparable | Minor | io | Hiroshi Ikeda | Jing Zhao | | | Node Manager throws NPE on startup | Major | nodemanager | Devaraj K | Devaraj K | | | RMContainer should handle a RELEASE event while RUNNING | Major |" }, { "data": "| Siddharth Seth | Siddharth Seth | | | client does not receive job diagnostics for failed jobs | Major | mrv2 | Jason Lowe | Jason Lowe | | | Diagnostics missing from applications that have finished but failed | Major | resourcemanager | Jason Lowe | Jason Lowe | | | FSDownload can create cache directories with the wrong permissions | Critical | nodemanager | Jason Lowe | Jason Lowe | | | DefaultContainerExecutor can fail to set proper permissions | Major | nodemanager | Jason Lowe | Jason Lowe | | | relnotes.py was deleted post mavenization | Major | . | Robert Joseph Evans | Robert Joseph Evans | | | We should only unjar jobjar if there is a lib directory in it. | Major | mrv2 | Robert Joseph Evans | Robert Joseph Evans | | | Old trash directories are never deleted on upgrade from 1.x | Critical | . 
| Robert Joseph Evans | Jason Lowe | | | Failure to renew tokens due to test-sources left in classpath | Critical | security | Jason Lowe | Jason Lowe | | | 0.22 and 0.23 namenode throws away blocks under construction on restart | Critical | namenode | Kihwal Lee | Kihwal Lee | | | 2.0 release upgrade must handle blocks being written from 1.0 | Blocker | datanode | Suresh Srinivas | Kihwal Lee | | | FileContext HDFS implementation can leak socket caches | Major | hdfs-client | Todd Lipcon | John George | | | Nodemanager needs to set permissions of local directories | Major | nodemanager | Jason Lowe | Jason Lowe | | | Historyserver can report \"Unknown job\" after RM says job has completed | Critical | jobhistoryserver, mrv2 | Jason Lowe | Robert Joseph Evans | | | JobClient.getMapTaskReports on failed job results in NPE | Major | client | Jason Lowe | Jason Lowe | | | Improve default config values for YARN | Major | resourcemanager, scheduler | Arun C Murthy | Harsh J | | | Creating file with invalid path can corrupt edit log | Blocker | namenode | Todd Lipcon | Todd Lipcon | | | Hftp proxy tokens are broken | Blocker | . | Daryn Sharp | Daryn Sharp |" } ]
{ "category": "App Definition and Development", "file_name": "managed-endpoint-azure.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: Connect VPCs using Azure Private Link headerTitle: Set up private link linkTitle: Set up private link description: Connect to a VNet in Azure using Private Link. headcontent: Connect your endpoints using Private Link menu: preview_yugabyte-cloud: identifier: managed-endpoint-1-azure parent: cloud-add-endpoint weight: 50 type: docs <ul class=\"nav nav-tabs-alt nav-tabs-yb\"> <li> <a href=\"../managed-endpoint-aws/\" class=\"nav-link\"> <i class=\"fa-brands fa-aws\" aria-hidden=\"true\"></i> AWS </a> </li> <li> <a href=\"../managed-endpoint-azure/\" class=\"nav-link active\"> <i class=\"fa-brands fa-microsoft\" aria-hidden=\"true\"></i> Azure </a> </li> </ul> Connect your cluster to an application VPC via Azure Private Link. To use Azure Private Link, you need the following: An Azure user account with an active subscription. The subscription ID of the service to which to grant access to the cluster endpoint. To find your subscription, follow the instructions in . Make sure that default security group in your application Azure Virtual Network (VNet) allows internal connectivity. Otherwise, your application may not be able to reach the endpoint. To use Azure Private Link to connect your cluster to an Azure VNet that hosts your application, first create a private service endpoint (PSE) on your cluster, then create a corresponding private endpoint in Azure. You create the PSEs (one for each region) for your cluster on the cluster Settings tab, or using . To create or edit a PSE, do the following: Select your cluster. Navigate to Settings > Network Access > Private Service Endpoint. Click Create Private Service Endpoint or, to edit an existing PSE, Edit Private Service Endpoint to display the Edit Private Service Endpoint sheet. For each region in your cluster, provide an Azure Subscription ID you want to grant access. Click Save. The endpoints are displayed with the following values: Host - The host name of the PSE. You will use this to . The host name of a PSE for Azure always ends in `azure.ybdb.io`. Service Name - The Service Name is also referred to as an alias in Azure. You will use this name when creating the private endpoint in Azure. To create a PSE, do the following: Enter the following command: ```sh ybm cluster network endpoint create \\ --cluster-name <yugabytedb_cluster> \\ --region <cluster_region> \\ --accessibility-type PRIVATESERVICEENDPOINT \\ --security-principals <azuresubscriptionids> ``` Replace values as follows: `yugabytedb_cluster` - name of your cluster. `cluster_region` - cluster region where you want to place the PSE. Must match one of the regions where your cluster is deployed (for example, `westus3`), and preferably match the region where your application is deployed. `azuresubscriptionids` - comma-separated list of the subscription IDs of Azure subscriptions that you want to grant access. Note the endpoint ID in the response. You can also display the PSE endpoint ID by entering the following command: ```sh ybm cluster network endpoint list --cluster-name <yugabytedb_cluster> ``` This outputs the IDs of all the cluster endpoints. After the endpoint becomes ACTIVE, display the service name of the PSE by entering the following command: ```sh ybm cluster network endpoint describe --cluster-name <yugabytedbcluster> --endpoint-id <endpointid> ``` Note the following values: Host - The host name of the PSE. You will use this to . The host name of a PSE for Azure always ends in `azure.ybdb.io`. 
Service Name - The Service Name is also referred to as an alias in Azure. You will use this service name when creating the private endpoint in Azure. To delete a PSE, enter the following command: ```sh ybm cluster network endpoint delete \\ --cluster-name <yugabytedb_cluster> \\ --endpoint-id <endpoint_id> \\ ``` You can create the private endpoint using the or from the command line using the" }, { "data": "To create a private endpoint to connect to your cluster PSE, do the following: In the , under the Azure services heading, select Private endpoints. If you don't see Private endpoints, use the search box to find it. On the Private endpoints page, click + Create to display the Create a private endpoint wizard. On the Basics page, provide the following details: Subscription - select your subscription. Resource group - select the resource group in which the private endpoint was created. Name - enter a name for the endpoint. Network interface name - enter a network interface name for the endpoint. Region - select the region for the endpoint. This should be the same region where your application resides. Click Next: Resource and set the following values: Connection method - select Connect to an Azure resource by resource ID or alias. Resource ID or alias - enter the Service Name (noted when you ) of the PSE you created for your cluster. Click Next: Virtual Network and set the following values: Select the Azure virtual network and subnet where your application resides. Click Next: DNS, Next: Tags, and Next Review + create >. You don't need to provide any values on the DNS page; Tags are optional. Review the details and click Create to create the private endpoint. The endpoint will take a minute or two to deploy. When complete, the Connection State will be Approved. To verify the status of the endpoint, navigate to Private endpoints under the Azure services heading. Note the Private IP address of the endpoint for use in the following steps. To be able to connect to your cluster using DNS (rather than the bare IP address), you must create a private DNS zone in the same resource group, link the private DNS zone to the VNet containing the private endpoint, and add an A record pointing to the private IP address of the private endpoint. To create a private DNS zone: In the , under the Azure services heading, select Private DNS zones. On the Private DNS zones page, click + Create to display the Create Private DNS zone wizard. Provide the following details: Subscription - select your subscription. Resource group - select the resource group in which the private endpoint was created. Instance details - enter a DNS zone name of `azure.ybdb.io`. Click Next: Tags, and Next Review create >. Review the details and click Create to create the private DNS zone. The DNS zone will take a minute or two to deploy. To view the private DNS zone, navigate to Private DNS zones under the Azure services heading. Navigate to Private DNS zones under the Azure services heading and select the azure.ybdb.io private DNS zone you created. Under Settings, select Virtual network links and click + Add. On the Add virtual network link page, provide the following details: Link name - enter a name for the link. Subscription - select your subscription. Virtual network - select the virtual network where you created the private endpoint. Click OK. The link is listed in the Virtual network links list. Navigate to Private DNS zones under the Azure services heading and select the azure.ybdb.io private DNS zone you created. 
Select Overview and click + Record set. Under Add record set, set the following values: Name - enter the first part only of the Host name of the cluster PSE (noted when you ). This consists of the text before .azure.ybdb.io. For example, for the host ```sh pse-westus3.65f14618-f86a-41c2-a8c6-7004edbb965a.azure.ybdb.io ``` you would enter only ```sh" }, { "data": "``` The PSE Host is also displayed in YugabyteDB Managed under Connection Parameters on the cluster Settings > Infrastructure tab. Type - select the A - Address record option (this is the default). IP address - enter the private IP address of your Azure private endpoint (noted earlier). Click OK. You can now connect to your cluster from your application in Azure using your cluster PSE host address (for example, `pse-westus3.65f14618-f86a-41c2-a8c6-7004edbb965a.azure.ybdb.io`). To create the private endpoint and connect it to your YBM PSE (called a private link service in Azure) using the , enter the following command: ```sh az network private-endpoint create \\ --connection-name <privatelinkserviceconnectionname> \\ --name <privateendpointname> \\ --private-connection-resource-id <pseservicename> \\ --resource-group <resourcegroupname> \\ --subnet <subnet_name> \\ --vnet-name <privateendpointvnet_name> \\ -location <privateendpointregion_name> ``` Replace values as follows: `privatelinkserviceconnectionname` - provide a name for the private link connection from the private endpoint to the private link service. `privateendpointname` - provide a name for the private endpoint. `pseservicename` - the Service Name of the PSE, which you noted down when creating the PSE. `resourcegroupname` - the resource group in which the private endpoint will be created. `subnet_name` - the name of the subnet in the resource group in which the private endpoint will be created. `privateendpointvnet_name` - the name of the VNet where the private endpoint will be created. `privateendpointregion_name` - the Azure region in which the private endpoint and VNet are present. To be able to connect to your cluster using DNS (rather than the bare IP address), create a private DNS zone in the same resource group, link the private DNS zone to the VNet containing the private endpoint, and add an A record pointing to the private IP address of the private endpoint. To create a private DNS zone, enter the following command: ```sh az network private-dns zone create \\ --name azure.ybdb.io \\ --resource-group <resourcegroupname> ``` Replace values as follows: `resourcegroupname` - the resource group in which the private endpoint was created. All private DNS zones for endpoints that are used with YugabyteDB Managed are named `azure.ybdb.io`. To link the private DNS zone to the VNet containing the private endpoint, enter the following command: ```sh az network private-dns link vnet create --name <privatednszone_name> --registration-enabled true --resource-group <resourcegroupname> --virtual-network <privateendpointvnet_name> --zone-name azure.ybdb.io --tags yugabyte ``` Replace values as follows: `privatednszone_name` - provide a name for the private DNS zone. `resourcegroupname` - the resource group in which the private endpoint was created. `privateendpointvnet_name` - the name of VNet in which the private endpoint was created. 
To obtain the Network Interface (NIC) resource ID for the private endpoint, enter the following command: ```sh az network private-endpoint show \\ --name <privateendpointname> \\ --resource-group <resourcegroupname> ``` Replace values as follows: `privateendpointname` - the name of the private endpoint. `resourcegroupname` - the resource group in which the private endpoint was created. This command returns the NIC resource ID of the private endpoint (`privateendpointnicresourceid` in the following command). To obtain the ipv4 address of the private endpoint, enter the following command: ```sh az network nic show \\ --ids <privateendpointnicresourceid> \\ --query \"ipConfigurations[0].privateIPAddress\" ``` This command returns the ipv4 address of the private endpoint (`privateendpointipv4_address` in the following command). To create an A record in the private DNS zone, enter the following command: ```sh az network private-dns record-set a add-record \\ --ipv4-address <privateendpointipv4_address> \\ --record-set-name <recordsetname> \\ --resource-group <resourcegroupname> \\ --zone-name azure.ybdb.io ``` Replace values as follows: `privateendpointipv4_address` - the IP address of the private endpoint. `recordsetname` - provide a name for the record. `resourcegroupname` - the resource group in which the private endpoint was created." } ]
{ "category": "App Definition and Development", "file_name": "s3.md", "project_name": "StarRocks", "subcategory": "Database" }
[ { "data": "displayed_sidebar: \"English\" tocmaxheading_level: 4 keywords: ['Broker Load'] import LoadMethodIntro from '../assets/commonMarkdown/loadMethodIntro.md' import InsertPrivNote from '../assets/commonMarkdown/insertPrivNote.md' import PipeAdvantages from '../assets/commonMarkdown/pipeAdvantages.md' StarRocks provides the following options for loading data from AWS S3: <LoadMethodIntro /> Make sure the source data you want to load into StarRocks is properly stored in an S3 bucket. You may also consider where the data and the database are located, because data transfer costs are much lower when your bucket and your StarRocks cluster are located in the same region. In this topic, we provide you with a sample dataset in an S3 bucket, `s3://starrocks-examples/user-behavior-10-million-rows.parquet`. You can access that dataset with any valid credentials as the object is readable by any AWS authenticated user. <InsertPrivNote /> The examples in this topic use IAM user-based authentication. To ensure that you have permission to read data from AWS S3, we recommend that you read and follow the instructions to create an IAM user with proper configured. In a nutshell, if you practice IAM user-based authentication, you need to gather information about the following AWS resources: The S3 bucket that stores your data. The S3 object key (object name) if accessing a specific object in the bucket. Note that the object key can include a prefix if your S3 objects are stored in sub-folders. The AWS region to which the S3 bucket belongs. The access key and secret key used as access credentials. For information about all the authentication methods available, see . This method is available from v3.1 onwards and currently supports only the Parquet, ORC, and CSV (from v3.3.0 onwards) file formats. can read the file stored in cloud storage based on the path-related properties you specify, infer the table schema of the data in the file, and then return the data from the file as data rows. With `FILES()`, you can: Query the data directly from S3 using . Create and load a table using (CTAS). Load the data into an existing table using . Querying directly from S3 using SELECT+`FILES()` can give a good preview of the content of a dataset before you create a table. For example: Get a preview of the dataset without storing the data. Query for the min and max values and decide what data types to use. Check for `NULL` values. The following example queries the sample dataset `s3://starrocks-examples/user-behavior-10-million-rows.parquet`: ```SQL SELECT * FROM FILES ( \"path\" = \"s3://starrocks-examples/user-behavior-10-million-rows.parquet\", \"format\" = \"parquet\", \"aws.s3.region\" = \"us-east-1\", \"aws.s3.access_key\" = \"AAAAAAAAAAAAAAAAAAAA\", \"aws.s3.secret_key\" = \"BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB\" ) LIMIT 3; ``` NOTE Substitute your credentials for `AAA` and `BBB` in the above command. Any valid `aws.s3.accesskey` and `aws.s3.secretkey` can be used, as the object is readable by any AWS authenticated user. The system returns the following query result: ```Plaintext +--+++--++ | UserID | ItemID | CategoryID | BehaviorType | Timestamp | +--+++--++ | 1 | 2576651 | 149192 | pv | 2017-11-25 01:21:25 | | 1 | 3830808 | 4181361 | pv | 2017-11-25 07:04:53 | | 1 | 4365585 | 2520377 | pv | 2017-11-25 07:49:06 | +--+++--++ ``` NOTE Notice that the column names as returned above are provided by the Parquet file. This is a continuation of the previous example. 
The previous query is wrapped in CREATE TABLE AS SELECT (CTAS) to automate the table creation using schema inference. This means StarRocks will infer the table schema, create the table you want, and then load the data into the" }, { "data": "The column names and types are not required to create a table when using the `FILES()` table function with Parquet files as the Parquet format includes the column names. NOTE The syntax of CREATE TABLE when using schema inference does not allow setting the number of replicas, so set it before creating the table. The example below is for a system with one replica: ```SQL ADMIN SET FRONTEND CONFIG ('defaultreplicationnum' = \"1\"); ``` Create a database and switch to it: ```SQL CREATE DATABASE IF NOT EXISTS mydatabase; USE mydatabase; ``` Use CTAS to create a table and load the data of the sample dataset `s3://starrocks-examples/user-behavior-10-million-rows.parquet` into the table: ```SQL CREATE TABLE userbehaviorinferred AS SELECT * FROM FILES ( \"path\" = \"s3://starrocks-examples/user-behavior-10-million-rows.parquet\", \"format\" = \"parquet\", \"aws.s3.region\" = \"us-east-1\", \"aws.s3.access_key\" = \"AAAAAAAAAAAAAAAAAAAA\", \"aws.s3.secret_key\" = \"BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB\" ); ``` NOTE Substitute your credentials for `AAA` and `BBB` in the above command. Any valid `aws.s3.accesskey` and `aws.s3.secretkey` can be used, as the object is readable by any AWS authenticated user. After creating the table, you can view its schema by using : ```SQL DESCRIBE userbehaviorinferred; ``` The system returns the following query result: ```Plain +--+++-++-+ | Field | Type | Null | Key | Default | Extra | +--+++-++-+ | UserID | bigint | YES | true | NULL | | | ItemID | bigint | YES | true | NULL | | | CategoryID | bigint | YES | true | NULL | | | BehaviorType | varchar(1048576) | YES | false | NULL | | | Timestamp | varchar(1048576) | YES | false | NULL | | +--+++-++-+ ``` Query the table to verify that the data has been loaded into it. Example: ```SQL SELECT * from userbehaviorinferred LIMIT 3; ``` The following query result is returned, indicating that the data has been successfully loaded: ```Plaintext +--+++--++ | UserID | ItemID | CategoryID | BehaviorType | Timestamp | +--+++--++ | 225586 | 3694958 | 1040727 | pv | 2017-12-01 00:58:40 | | 225586 | 3726324 | 965809 | pv | 2017-12-01 02:16:02 | | 225586 | 3732495 | 1488813 | pv | 2017-12-01 00:59:46 | +--+++--++ ``` You may want to customize the table that you are inserting into, for example, the: column data type, nullable setting, or default values key types and columns data partitioning and bucketing NOTE Creating the most efficient table structure requires knowledge of how the data will be used and the content of the columns. This topic does not cover table design. For information about table design, see . In this example, we are creating a table based on knowledge of how the table will be queried and the data in the Parquet file. The knowledge of the data in the Parquet file can be gained by querying the file directly in S3. Since a query of the dataset in S3 indicates that the `Timestamp` column contains data that matches a VARCHAR data type, and StarRocks can cast from VARCHAR to DATETIME, the data type is changed to DATETIME in the following DDL. By querying the data in S3, you can find that there are no `NULL` values in the dataset, so the DDL could also set all columns as non-nullable. 
Based on knowledge of the expected query types, the sort key and bucketing column are set to the column" }, { "data": "Your use case might be different for this data, so you might decide to use `ItemID` in addition to, or instead of, `UserID` for the sort key. Create a database and switch to it: ```SQL CREATE DATABASE IF NOT EXISTS mydatabase; USE mydatabase; ``` Create a table by hand: ```SQL CREATE TABLE userbehaviordeclared ( UserID int(11), ItemID int(11), CategoryID int(11), BehaviorType varchar(65533), Timestamp datetime ) ENGINE = OLAP DUPLICATE KEY(UserID) DISTRIBUTED BY HASH(UserID); ``` Display the schema so that you can compare it with the inferred schema produced by the `FILES()` table function: ```sql DESCRIBE userbehaviordeclared; ``` ```plaintext +--+-++-++-+ | Field | Type | Null | Key | Default | Extra | +--+-++-++-+ | UserID | int | YES | true | NULL | | | ItemID | int | YES | false | NULL | | | CategoryID | int | YES | false | NULL | | | BehaviorType | varchar(65533) | YES | false | NULL | | | Timestamp | datetime | YES | false | NULL | | +--+-++-++-+ ``` :::tip Compare the schema you just created with the schema inferred earlier using the `FILES()` table function. Look at: data types nullable key fields To better control the schema of the destination table and for better query performance, we recommend that you specify the table schema by hand in production environments. ::: After creating the table, you can load it with INSERT INTO SELECT FROM FILES(): ```SQL INSERT INTO userbehaviordeclared SELECT * FROM FILES ( \"path\" = \"s3://starrocks-examples/user-behavior-10-million-rows.parquet\", \"format\" = \"parquet\", \"aws.s3.region\" = \"us-east-1\", \"aws.s3.access_key\" = \"AAAAAAAAAAAAAAAAAAAA\", \"aws.s3.secret_key\" = \"BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB\" ); ``` NOTE Substitute your credentials for `AAA` and `BBB` in the above command. Any valid `aws.s3.accesskey` and `aws.s3.secretkey` can be used, as the object is readable by any AWS authenticated user. After the load is complete, you can query the table to verify that the data has been loaded into it. Example: ```SQL SELECT * from userbehaviordeclared LIMIT 3; ``` The following query result is returned, indicating that the data has been successfully loaded: ```Plaintext +--+++--++ | UserID | ItemID | CategoryID | BehaviorType | Timestamp | +--+++--++ | 393529 | 3715112 | 883960 | pv | 2017-12-02 02:45:44 | | 393529 | 2650583 | 883960 | pv | 2017-12-02 02:45:59 | | 393529 | 3715112 | 883960 | pv | 2017-12-02 03:00:56 | +--+++--++ ``` You can query the progress of INSERT jobs from the view in the StarRocks Information Schema. This feature is supported from v3.1 onwards. Example: ```SQL SELECT * FROM informationschema.loads ORDER BY JOBID DESC; ``` For information about the fields provided in the `loads` view, see . If you have submitted multiple load jobs, you can filter on the `LABEL` associated with the job. 
Example: ```SQL SELECT * FROM informationschema.loads WHERE LABEL = 'inserte3b882f5-7eb3-11ee-ae77-00163e267b60' \\G row * JOB_ID: 10243 LABEL: insert_e3b882f5-7eb3-11ee-ae77-00163e267b60 DATABASE_NAME: mydatabase STATE: FINISHED PROGRESS: ETL:100%; LOAD:100% TYPE: INSERT PRIORITY: NORMAL SCAN_ROWS: 10000000 FILTERED_ROWS: 0 UNSELECTED_ROWS: 0 SINK_ROWS: 10000000 ETL_INFO: TASKINFO: resource:N/A; timeout(s):300; maxfilter_ratio:0.0 CREATE_TIME: 2023-11-09 11:56:01 ETLSTARTTIME: 2023-11-09 11:56:01 ETLFINISHTIME: 2023-11-09 11:56:01 LOADSTARTTIME: 2023-11-09 11:56:01 LOADFINISHTIME: 2023-11-09 11:56:44 JOB_DETAILS: {\"All backends\":{\"e3b882f5-7eb3-11ee-ae77-00163e267b60\":[10142]},\"FileNumber\":0,\"FileSize\":0,\"InternalTableLoadBytes\":311710786,\"InternalTableLoadRows\":10000000,\"ScanBytes\":581574034,\"ScanRows\":10000000,\"TaskNumber\":1,\"Unfinished backends\":{\"e3b882f5-7eb3-11ee-ae77-00163e267b60\":[]}} ERROR_MSG: NULL TRACKING_URL: NULL TRACKING_SQL: NULL REJECTEDRECORDPATH: NULL ``` NOTE INSERT is a synchronous command. If an INSERT job is still running, you need to open another session to check its execution status. An asynchronous Broker Load process handles making the connection to S3, pulling the data, and storing the data in" }, { "data": "This method supports the following file formats: Parquet ORC CSV JSON (supported from v3.2.3 onwards) Broker Load runs in the background and clients do not need to stay connected for the job to continue. Broker Load is preferred for long-running jobs, with the default timeout spanning 4 hours. In addition to Parquet and ORC file format, Broker Load supports CSV file format and JSON file format (JSON file format is supported from v3.2.3 onwards). The user creates a load job. The frontend (FE) creates a query plan and distributes the plan to the backend nodes (BEs) or compute nodes (CNs). The BEs or CNs pull the data from the source and load the data into StarRocks. Create a table, start a load process that pulls the sample dataset `s3://starrocks-examples/user-behavior-10-million-rows.parquet` from S3, and verify the progress and success of the data loading. Create a database and switch to it: ```SQL CREATE DATABASE IF NOT EXISTS mydatabase; USE mydatabase; ``` Create a table by hand (we recommend that the table has the same schema as the Parquet file that you want to load from AWS S3): ```SQL CREATE TABLE user_behavior ( UserID int(11), ItemID int(11), CategoryID int(11), BehaviorType varchar(65533), Timestamp datetime ) ENGINE = OLAP DUPLICATE KEY(UserID) DISTRIBUTED BY HASH(UserID); ``` Run the following command to start a Broker Load job that loads data from the sample dataset `s3://starrocks-examples/user-behavior-10-million-rows.parquet` to the `user_behavior` table: ```SQL LOAD LABEL user_behavior ( DATA INFILE(\"s3://starrocks-examples/user-behavior-10-million-rows.parquet\") INTO TABLE user_behavior FORMAT AS \"parquet\" ) WITH BROKER ( \"aws.s3.enable_ssl\" = \"true\", \"aws.s3.useinstanceprofile\" = \"false\", \"aws.s3.region\" = \"us-east-1\", \"aws.s3.access_key\" = \"AAAAAAAAAAAAAAAAAAAA\", \"aws.s3.secret_key\" = \"BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB\" ) PROPERTIES ( \"timeout\" = \"72000\" ); ``` NOTE Substitute your credentials for `AAA` and `BBB` in the above command. Any valid `aws.s3.accesskey` and `aws.s3.secretkey` can be used, as the object is readable by any AWS authenticated user. This job has four main sections: `LABEL`: A string used when querying the state of the load job. 
`LOAD` declaration: The source URI, source data format, and destination table name. `BROKER`: The connection details for the source. `PROPERTIES`: The timeout value and any other properties to apply to the load job. For detailed syntax and parameter descriptions, see . You can query the progress of the Broker Load job from the view in the StarRocks Information Schema. This feature is supported from v3.1 onwards. ```SQL SELECT * FROM informationschema.loads WHERE LABEL = 'userbehavior'; ``` For information about the fields provided in the `loads` view, see . This record shows a state of `LOADING`, and the progress is 39%. If you see something similar, then run the command again until you see a state of `FINISHED`. ```Plaintext JOB_ID: 10466 LABEL: user_behavior DATABASE_NAME: mydatabase STATE: LOADING PROGRESS: ETL:100%; LOAD:39% TYPE: BROKER PRIORITY: NORMAL SCAN_ROWS: 4620288 FILTERED_ROWS: 0 UNSELECTED_ROWS: 0 SINK_ROWS: 4620288 ETL_INFO: TASKINFO: resource:N/A; timeout(s):72000; maxfilter_ratio:0.0 CREATE_TIME: 2024-02-28 22:11:36 ETLSTARTTIME: 2024-02-28 22:11:41 ETLFINISHTIME: 2024-02-28 22:11:41 LOADSTARTTIME: 2024-02-28 22:11:41 LOADFINISHTIME: NULL JOB_DETAILS: {\"All backends\":{\"2fb97223-b14c-404b-9be1-83aa9b3a7715\":[10004]},\"FileNumber\":1,\"FileSize\":136901706,\"InternalTableLoadBytes\":144032784,\"InternalTableLoadRows\":4620288,\"ScanBytes\":143969616,\"ScanRows\":4620288,\"TaskNumber\":1,\"Unfinished backends\":{\"2fb97223-b14c-404b-9be1-83aa9b3a7715\":[10004]}} ERROR_MSG: NULL TRACKING_URL: NULL TRACKING_SQL: NULL REJECTEDRECORDPATH: NULL ``` After you confirm that the load job has finished, you can check a subset of the destination table to see if the data has been successfully" }, { "data": "Example: ```SQL SELECT * from user_behavior LIMIT 3; ``` The following query result is returned, indicating that the data has been successfully loaded: ```Plaintext +--+++--++ | UserID | ItemID | CategoryID | BehaviorType | Timestamp | +--+++--++ | 34 | 856384 | 1029459 | pv | 2017-11-27 14:43:27 | | 34 | 5079705 | 1029459 | pv | 2017-11-27 14:44:13 | | 34 | 4451615 | 1029459 | pv | 2017-11-27 14:45:52 | +--+++--++ ``` Starting from v3.2, StarRocks provides the Pipe loading method, which currently supports only the Parquet and ORC file formats. <PipeAdvantages menu=\" object storage like AWS S3 uses ETag \"/> Pipe is ideal for continuous data loading and large-scale data loading: Large-scale data loading in micro-batches helps reduce the cost of retries caused by data errors. With the help of Pipe, StarRocks enables the efficient loading of a large number of data files with a significant data volume in total. Pipe automatically splits the files based on their number or size, breaking down the load job into smaller, sequential tasks. This approach ensures that errors in one file do not impact the entire load job. The load status of each file is recorded by Pipe, allowing you to easily identify and fix files that contain errors. By minimizing the need for retries due to data errors, this approach helps to reduce costs. Continuous data loading helps reduce manpower. Pipe helps you write new or updated data files to a specific location and continuously load the new data from these files into StarRocks. After you create a Pipe job with `\"AUTO_INGEST\" = \"TRUE\"` specified, it will constantly monitor changes to the data files stored in the specified path and automatically load new or updated data from the data files into the destination StarRocks table. 
Additionally, Pipe performs file uniqueness checks to help prevent duplicate data loading.During the loading process, Pipe checks the uniqueness of each data file based on the file name and digest. If a file with a specific file name and digest has already been processed by a Pipe job, the Pipe job will skip all subsequent files with the same file name and digest. Note that object storage like AWS S3 uses `ETag` as file digest. The load status of each data file is recorded and saved to the `informationschema.pipefiles` view. After a Pipe job associated with the view is deleted, the records about the files loaded in that job will also be deleted. A Pipe job is split into one or more transactions based on the size and number of rows in each data file. Users can query the intermediate results during the loading process. In contrast, an INSERT+`FILES()` job is processed as a single transaction, and users are unable to view the data during the loading process. For each Pipe job, StarRocks maintains a file queue, from which it fetches and loads data files as micro-batches. Pipe does not ensure that the data files are loaded in the same order as they are uploaded. Therefore, newer data may be loaded prior to older data. Create a database and switch to it: ```SQL CREATE DATABASE IF NOT EXISTS mydatabase; USE mydatabase; ``` Create a table by hand (we recommend that the table have the same schema as the Parquet file you want to load from AWS S3): ```SQL CREATE TABLE userbehaviorfrom_pipe ( UserID int(11), ItemID int(11), CategoryID int(11), BehaviorType varchar(65533), Timestamp datetime ) ENGINE = OLAP DUPLICATE KEY(UserID) DISTRIBUTED BY HASH(UserID); ``` Run the following command to start a Pipe job that loads data from the sample dataset `s3://starrocks-examples/user-behavior-10-million-rows/` to the `userbehaviorfrom_pipe` table. This pipe job uses both micro batches, and continuous loading (described above) pipe-specific" }, { "data": "The other examples in this guide load a single Parquet file with 10 million rows. For the pipe example, the same dataset is split into 57 separate files, and these are all stored in one S3 folder. Note in the `CREATE PIPE` command below the `path` is the URI for an S3 folder and rather than providing a filename the URI ends in `/*`. By setting `AUTO_INGEST` and specifying a folder rather than an individual file the pipe job will poll the S3 folder for new files and ingest them as they are added to the folder. ```SQL CREATE PIPE userbehaviorpipe PROPERTIES ( -- highlight-start \"AUTO_INGEST\" = \"TRUE\" -- highlight-end ) AS INSERT INTO userbehaviorfrom_pipe SELECT * FROM FILES ( -- highlight-start \"path\" = \"s3://starrocks-examples/user-behavior-10-million-rows/*\", -- highlight-end \"format\" = \"parquet\", \"aws.s3.region\" = \"us-east-1\", \"aws.s3.access_key\" = \"AAAAAAAAAAAAAAAAAAAA\", \"aws.s3.secret_key\" = \"BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB\" ); ``` NOTE Substitute your credentials for `AAA` and `BBB` in the above command. Any valid `aws.s3.accesskey` and `aws.s3.secretkey` can be used, as the object is readable by any AWS authenticated user. This job has four main sections: `pipe_name`: The name of the pipe. The pipe name must be unique within the database to which the pipe belongs. `INSERT_SQL`: The INSERT INTO SELECT FROM FILES statement that is used to load data from the specified source data file to the destination table. `PROPERTIES`: A set of optional parameters that specify how to execute the pipe. 
These include `AUTOINGEST`, `POLLINTERVAL`, `BATCHSIZE`, and `BATCHFILES`. Specify these properties in the `\"key\" = \"value\"` format. For detailed syntax and parameter descriptions, see . Query the progress of the Pipe job by using in the current database to which the Pipe job belongs. ```SQL SHOW PIPES WHERE NAME = 'userbehaviorpipe' \\G ``` The following result is returned: :::tip In the output shown below the pipe is in the `RUNNING` state. A pipe will stay in the `RUNNING` state until you manually stop it. The output also shows the number of files loaded (57) and the last time that a file was loaded. ::: ```SQL row * DATABASE_NAME: mydatabase PIPE_ID: 10476 PIPENAME: userbehavior_pipe -- highlight-start STATE: RUNNING TABLENAME: mydatabase.userbehaviorfrompipe LOAD_STATUS: {\"loadedFiles\":57,\"loadedBytes\":295345637,\"loadingFiles\":0,\"lastLoadedTime\":\"2024-02-28 22:14:19\"} -- highlight-end LAST_ERROR: NULL CREATED_TIME: 2024-02-28 22:13:41 1 row in set (0.02 sec) ``` Query the progress of the Pipe job from the view in the StarRocks Information Schema. ```SQL SELECT * FROM informationschema.pipes WHERE pipename = 'userbehaviorreplica' \\G ``` The following result is returned: :::tip Some of the queries in this guide end in `\\G` instead of a semicolon (`;`). This causes the MySQL client to output the results in vertical format. If you are using DBeaver or another client you may need to use a semicolon (`;`) rather than `\\G`. ::: ```SQL row * DATABASE_NAME: mydatabase PIPE_ID: 10217 PIPENAME: userbehavior_replica STATE: RUNNING TABLENAME: mydatabase.userbehavior_replica LOAD_STATUS: {\"loadedFiles\":1,\"loadedBytes\":132251298,\"loadingFiles\":0,\"lastLoadedTime\":\"2023-11-09 15:35:42\"} LAST_ERROR: CREATED_TIME: 9891-01-15 07:51:45 1 row in set (0.01 sec) ``` You can query the load status of the files loaded from the view in the StarRocks Information Schema. ```SQL SELECT * FROM informationschema.pipefiles WHERE pipename = 'userbehavior_replica' \\G ``` The following result is returned: ```SQL row * DATABASE_NAME: mydatabase PIPE_ID: 10217 PIPENAME: userbehavior_replica FILE_NAME: s3://starrocks-examples/user-behavior-10-million-rows.parquet FILE_VERSION: e29daa86b1120fea58ad0d047e671787-8 FILE_SIZE: 132251298 LAST_MODIFIED: 2023-11-06 13:25:17 LOAD_STATE: FINISHED STAGED_TIME: 2023-11-09 15:35:02 STARTLOADTIME: 2023-11-09 15:35:03 FINISHLOADTIME: 2023-11-09 15:35:42 ERROR_MSG: 1 row in set (0.03 sec) ``` You can alter, suspend or resume, drop, or query the pipes you have created and retry to load specific data files. For more information, see , , , , and ." } ]
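As a sketch of that cleanup step (assuming the pipe was created with the name `user_behavior_pipe` as above, and that your StarRocks version provides the pipe management statements referenced here), you could remove the demo pipe once you no longer need continuous ingestion; the data already loaded into the destination table is not affected:

```SQL
DROP PIPE mydatabase.user_behavior_pipe;
```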
{ "category": "App Definition and Development", "file_name": "CHANGELOG-v2022.05.24.md", "project_name": "KubeDB by AppsCode", "subcategory": "Database" }
[ { "data": "title: Changelog | KubeDB description: Changelog menu: docs_{{.version}}: identifier: changelog-kubedb-v2022.05.24 name: Changelog-v2022.05.24 parent: welcome weight: 20220524 product_name: kubedb menuname: docs{{.version}} sectionmenuid: welcome url: /docs/{{.version}}/welcome/changelog-v2022.05.24/ aliases: /docs/{{.version}}/CHANGELOG-v2022.05.24/ Add `HealthCheckPaused` condition and `Unknown` phase (#898) Add Raft metrics port as constants (#896) Add support for MySQL semi-sync cluster (#890) Add constants for Kibana 8 (#894) Add method and constants for proxysql (#893) Add doubleOptIn funcs & shortnames for schema-manager (#889) Add constants and helpers for ES Internal Users (#886) Fix typo (#888) Update ProxySQL types and helpers (#883) Fix pgbouncer Version Spec Add support for mariadbdatabase with webhook (#858) Add spec for MongoDB arbiter support (#862) Add TopologySpreadConstraints (#885) Add SyncStatefulSetPodDisruptionBudget helper method (#884) Make ClusterHealth inline in ES insight (#881) fix: update Postgres shared buffer func (#880) Add Support for Opensearch Dashboards (#878) Use Go 1.18 (#879) Prepare for release v0.12.0 (#84) Update dependencies (#83) Update dependencies(nats client, mongo-driver) (#81) Prepare for release v0.27.0 (#664) Update dependencies (#663) Update dependencies(nats client, mongo-driver) (#662) Prepare for release v0.3.0 (#27) Update dependencies (#26) Add support for Kibana 8 (#25) Update dependencies(nats client, mongo-driver) (#24) Add support for Opensearch_Dashboards (#17) Prepare for release v0.27.0 (#577) Update dependencies (#576) Add support for ElasticStack Built-In Users (#574) Update dependencies(nats client, mongo-driver) (#575) Add support for Elasticsearch 8 (#573) Prepare for release v0.11.0 (#147) Update dependencies (#146) Update MariaDB conditions on health check (#138) Update dependencies(nats client, mongo-driver) (#145) Cleanup PodDisruptionBudget when the replica count is one or less (#144) Prepare for release v0.7.0 (#43) Update dependencies (#42) Update dependencies(nats client, mongo-driver) (#41) Prepare for release v0.20.0 (#355) Update dependencies (#354) Update dependencies(nats client, mongo-driver) (#353) Cleanup PodDisruptionBudget when the replica count is one or less (#352) Prepare for release v0.20.0 (#477) Update dependencies (#476) Fix shard database write check (#475) Use updated commit-hash (#474) Add arbiter support (#470) Update dependencies(nats client, mongo-driver) (#472) Cleanup PodDisruptionBudget when the replica count is one or less (#471) Refactor statefulset-related files (#449) Prepare for release v0.20.0 (#470) Update dependencies (#469) Pass `--set-gtid-purged=OFF` to app binding for stash (#468) Add Raft Server ports for MySQL Semi-sync (#467) Add Support for Semi-sync cluster (#464) Update dependencies(nats client, mongo-driver) (#466) Cleanup PodDisruptionBudget when the replica count is one or less (#462) Patch existing Auth secret to db ojbect (#463) Update dependencies (#20) Prepare for release" }, { "data": "(#310) Fix: Redis shard node deletion issue for Horizontal scaling (#304) Fix product name Rename to ops-manager package (#309) Update dependencies (#307) (#308) Update dependencies (#307) Fix mongodb shard scale down (#306) update replication user updating condition (#305) Update Replication User Password (#300) Use updated commit-hash (#303) Ensure right master count when scaling down Redis Shard Cluster Add ProxySQL TLS support (#302) Add arbiter-support for 
mongodb (#291) Update dependencies(nats client, mongo-driver) (#298) Fix horizontal scaling to support Redis Shard Dynamic Failover (#297) Add PgBouncer TLS Support (#295) Prepare for release v0.14.0 (#258) Update dependencies (#257) Update dependencies(nats client, mongo-driver) (#256) Prepare for release v0.11.0 (#80) Update dependencies (#79) Add Raft Metrics And graceful shutdown of Postgres (#74) Update dependencies(nats client, mongo-driver) (#78) Fix: Fast Shut-down Postgres server to avoid single-user mode shutdown failure (#73) Prepare for release v0.14.0 (#221) Update dependencies (#220) Update dependencies(nats client, mongo-driver) (#218) Update exporter container to support TLS enabled PgBouncer (#217) Fix TLS and Config Related Issues, Add health Check (#210) Cleanup PodDisruptionBudget when the replica count is one or less (#216) Prepare for release v0.27.0 (#573) Update dependencies (#572) Add Raft Metrics exporter Port for Monitoring (#569) Update dependencies(nats client, mongo-driver) (#571) Cleanup podDiscruptionBudget when the replica count is one or less (#570) Fix: Fast Shut-down Postgres server to avoid single-user mode shutdown failure (#568) Prepare for release v0.27.0 (#2) Rename to provisioner module (#1) Update dependencies(nats client, mongo-driver) (#465) Prepare for release v0.14.0 (#235) Update dependencies (#234) Fix phase and condition update for ProxySQL (#233) Add support for ProxySQL clustering and TLS (#231) Update dependencies(nats client, mongo-driver) (#232) Prepare for release v0.20.0 (#397) Update dependencies (#396) Update dependencies(nats client, mongo-driver) (#395) Redis Shard Cluster Dynamic Failover (#393) Refactor StatefulSet ENVs for Redis (#394) Cleanup PodDisruptionBudget when the replica count is one or less (#392) Prepare for release v0.6.0 (#33) Update dependencies (#32) Update dependencies(nats client, mongo-driver) (#31) Update Env Variables (#30) Prepare for release v0.14.0 (#194) Update dependencies (#193) Update dependencies(nats client, mongo-driver) (#192) Prepare for release v0.3.0 (#29) Fix sharded-mongo restore issue; Use typed doubleOptIn funcs (#28) Add support for MariaDB database schema manager (#24) Prepare for release v0.12.0 (#178) Update dependencies (#177) Update dependencies(nats client, mongo-driver) (#176) Prepare for release v0.3.0 (#34) Prepare for release v0.3.0 (#19) Update dependencies (#18) Update dependencies(nats client, mongo-driver) (#17)" } ]
{ "category": "App Definition and Development", "file_name": "column_privileges.md", "project_name": "StarRocks", "subcategory": "Database" }
[ { "data": "displayed_sidebar: \"English\" `column_privileges` identifies all privileges granted on columns to a currently enabled role or by a currently enabled role. The following fields are provided in `column_privileges`: | Field | Description | | --- | --- | | GRANTEE | The name of the user to which the privilege is granted. | | TABLE_CATALOG | The name of the catalog to which the table containing the column belongs. This value is always `def`. | | TABLE_SCHEMA | The name of the database to which the table containing the column belongs. | | TABLE_NAME | The name of the table containing the column. | | COLUMN_NAME | The name of the column. | | PRIVILEGE_TYPE | The privilege granted. The value can be any privilege that can be granted at the column level. Each row lists a single privilege, so there is one row per column privilege held by the grantee. | | IS_GRANTABLE | `YES` if the user has the `GRANT OPTION` privilege, `NO` otherwise. The output does not list `GRANT OPTION` as a separate row with `PRIVILEGE_TYPE='GRANT OPTION'`. |" } ]
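For example, to list the column-level privileges held by a particular user, you might run something along the following lines (assuming the view is exposed under `information_schema` like the other system views; the grantee pattern is hypothetical):

```SQL
SELECT GRANTEE, TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, PRIVILEGE_TYPE, IS_GRANTABLE
FROM information_schema.column_privileges
WHERE GRANTEE LIKE '%jack%'
ORDER BY TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME;
```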
{ "category": "App Definition and Development", "file_name": "client-side.md", "project_name": "Vald", "subcategory": "Database" }
[ { "data": "This page introduces common troubleshooting steps for the client side. The helps you find the root reason for your problem. Additionally, if you encounter some errors when using API, the helps you, too. First, please check the memory limit of your container. Vald Agent requires memory to keep its index in memory. ```bash kubectl describe statefulset vald-agent-ngt ``` If a memory limit is set, please remove it or increase it to a larger value. There are two possible reasons. Indexing has not finished in Vald Agent: Vald searches for the nearest vectors to the query in the index held by Vald Agent. If the indexing process is still running, Vald Agent returns no search result. This resolves once indexing operations, such as `CreateIndex`, have completed. Too short a timeout for searching: When the search timeout configuration is too short, Vald LB Gateway stops the search process before it gets the search result from Vald Agent. For search operations, you can modify the search timeout by . <div class=\"notice\"> It is easy to find out which problem is occurring by inspecting the logs of each Pod, for example with <a href=\"https://github.com/stern/stern\">stern</a>. </div> Vald Agent NGT requires a CPU with AVX2 support to run. Please check your CPU information." } ]
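For example, on a Linux node you can verify AVX2 support and inspect the Vald Agent logs with standard tooling (these commands are generic sketches, not Vald-specific APIs):

```bash
# Check whether the CPU advertises the AVX2 instruction set.
grep -o 'avx2' /proc/cpuinfo | sort -u

# Tail the logs of the Vald Agent Pods to look for indexing or search errors.
kubectl logs statefulset/vald-agent-ngt --tail=100
```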
{ "category": "App Definition and Development", "file_name": "Query_management.md", "project_name": "StarRocks", "subcategory": "Database" }
[ { "data": "displayed_sidebar: \"English\" `Property` is set for user granularity. To set the maximum number of connections between Client and FE, use the following command. ```sql SET PROPERTY [FOR 'user'] 'key' = 'value' [, 'key' = 'value'] ``` User properties include the resources assigned to the user. The properties set here are for the user, not `user_identity`. That is, if two users `jack'@'%` and `jack'@'192.%` are created by the `CREATE USER` statement, then the `SET PROPERTY` statement can work on the user `jack`, not `jack'@'%` or `jack'@'192.%`. Example 1: ```sql For the user `jack`, change the maximum number of connections to 1000 SET PROPERTY FOR 'jack' 'maxuserconnections' = '1000'; Check the connection limit for the root user SHOW PROPERTY FOR 'root'; ``` The session variables can be set by 'key' = 'value', which can limit the concurrency, memory and other query parameters in the current session. For example: parallelfragmentexecinstancenum The parallelism of the query with a default value of 1. It indicates the number of fragment instances on each BE. You can set this to half the number of CPU cores of the BE to improve query performance. querymemlimit Memory limit of a query on each BE node, can be adjusted when a query reports insufficient memory. loadmemlimit Memory limit for import, can be adjusted when an import job reports insufficient memory. Example 2: ```sql set parallelfragmentexecinstancenum = 8; set querymemlimit = 137438953472; ``` The capacity quota of database storage is unlimited by default. And you can change quota value by using `alter database`. ```sql ALTER DATABASE db_name SET DATA QUOTA quota; ``` The quota units are: B/K/KB/M/MB/G/GB/T/TB/P/PB Example 3: ```sql ALTER DATABASE example_db SET DATA QUOTA 10T; ``` To terminate a query on a particular connection with the following command: ```sql kill connection_id; ``` The `connectionid` can be seen by `show processlist;` or `select connectionid();`. ```plain text show processlist; ++++--++++-++ | Id | User | Host | Cluster | Db | Command | Time | State | Info | ++++--++++-++ | 1 | starrocksmgr | 172.26.34.147:56208 | defaultcluster | starrocksmonitor | Sleep | 8 | | | | 129 | root | 172.26.92.139:54818 | default_cluster | | Query | 0 | | | | 114 | test | 172.26.34.147:57974 | defaultcluster | ssb100g | Query | 3 | | | | 3 | starrocksmgr | 172.26.34.147:57268 | defaultcluster | starrocksmonitor | Sleep | 8 | | | | 100 | root | 172.26.34.147:58472 | defaultcluster | ssb100 | Sleep | 637 | | | | 117 | starrocksmgr | 172.26.34.147:33790 | defaultcluster | starrocksmonitor | Sleep | 8 | | | | 6 | starrocksmgr | 172.26.34.147:57632 | defaultcluster | starrocksmonitor | Sleep | 8 | | | | 119 | starrocksmgr | 172.26.34.147:33804 | defaultcluster | starrocksmonitor | Sleep | 8 | | | | 111 | root | 172.26.92.139:55472 | default_cluster | | Sleep | 2758 | | | ++++--++++-++ 9 rows in set (0.00 sec) mysql> select connection_id(); +--+ | CONNECTION_ID() | +--+ | 98 | +--+ mysql> kill 114; Query OK, 0 rows affected (0.02 sec) ```" } ]
{ "category": "App Definition and Development", "file_name": "hllcount.md", "project_name": "Beam", "subcategory": "Streaming & Messaging" }
[ { "data": "title: \"HllCount\" <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <table align=\"left\"> <a target=\"_blank\" class=\"button\" href=\"https://beam.apache.org/releases/javadoc/current/index.html?org/apache/beam/sdk/extensions/zetasketch/HllCount.html\"> <img src=\"/images/logos/sdks/java.png\" width=\"20px\" height=\"20px\" alt=\"Javadoc\" /> Javadoc </a> </table> <br><br> Estimates the number of distinct elements in a data stream using the . The respective transforms to create and merge sketches, and to extract from them, are: `HllCount.Init` aggregates inputs into HLL++ sketches. `HllCount.MergePartial` merges HLL++ sketches into a new sketch. `HllCount.Extract` extracts the estimated count of distinct elements from HLL++ sketches. You can read more about what a sketch is at https://github.com/google/zetasketch. Example 1: creates a long-type sketch for a `PCollection<Long>` with a custom precision: {{< highlight java >}} PCollection<Long> input = ...; int p = ...; PCollection<byte[]> sketch = input.apply(HllCount.Init.forLongs().withPrecision(p).globally()); {{< /highlight >}} Example 2: creates a bytes-type sketch for a `PCollection<KV<String, byte[]>>`: {{< highlight java >}} PCollection<KV<String, byte[]>> input = ...; PCollection<KV<String, byte[]>> sketch = input.apply(HllCount.Init.forBytes().perKey()); {{< /highlight >}} Example 3: merges existing sketches in a `PCollection<byte[]>` into a new sketch, which summarizes the union of the inputs that were aggregated in the merged sketches: {{< highlight java >}} PCollection<byte[]> sketches = ...; PCollection<byte[]> mergedSketch = sketches.apply(HllCount.MergePartial.globally()); {{< /highlight >}} Example 4: estimates the count of distinct elements in a `PCollection<String>`: {{< highlight java >}} PCollection<String> input = ...; PCollection<Long> countDistinct = input.apply(HllCount.Init.forStrings().globally()).apply(HllCount.Extract.globally()); {{< /highlight >}} Example 5: extracts the count distinct estimate from an existing sketch: {{< highlight java >}} PCollection<byte[]> sketch = ...; PCollection<Long> countDistinct = sketch.apply(HllCount.Extract.globally()); {{< /highlight >}} estimates the number of distinct elements or values in key-value pairs (but does not expose sketches; also less accurate than `HllCount`)." } ]
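Building on examples 2 and 4 above, a per-key variant can be sketched as follows (assuming `Extract.perKey()` mirrors the global form shown in example 4):

{{< highlight java >}}
// Estimate the number of distinct values per key.
PCollection<KV<String, String>> input = ...;
PCollection<KV<String, Long>> countDistinctPerKey =
    input
        .apply(HllCount.Init.forStrings().perKey())
        .apply(HllCount.Extract.perKey());
{{< /highlight >}}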
{ "category": "App Definition and Development", "file_name": "submit-job.md", "project_name": "Hazelcast Jet", "subcategory": "Streaming & Messaging" }
[ { "data": "title: Submit a Job to the Cluster description: How to submit a Jet job to a standalone cluster. id: version-4.2-submit-job original_id: submit-job At this point you saw how to create a simple Jet program (a job) that starts its own Jet instance to run on, and how to create a standalone Jet cluster. Now it's time to combine the two and run the program on the cluster. Originally we used `Jet.newJetInstance()` to create an embedded Jet node and noted that this line will change once you have an outside cluster. Jet uses the concept of a bootstrapped instance which acts differently depending on context. Change the line ```java JetInstance jet = Jet.newJetInstance(); ``` to ```java JetInstance jet = Jet.bootstrappedInstance(); ``` If you run the application again, it will have the same behavior as before and create an embedded Jet instance. However, if you package your code in a JAR and pass it to `jet submit`, it will instead return a client proxy that talks to the cluster. If you're using Maven, `mvn package` will generate a JAR file ready to be submitted to the cluster. `gradle build` does the same. You should also set the `Main-Class` attribute in `MANIFEST.MF` to avoid the need to specify the main class in `jet submit`. Both Maven and Gradle can be configured to do this, refer to their docs. <!--DOCUSAURUSCODETABS--> <!--Gradle--> ```groovy application { mainClassName = 'org.example.JetJob' } ``` <!--Maven--> ```xml <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <configuration> <archive> <manifest> <mainClass>org.example.JetJob</mainClass> </manifest> </archive> </configuration> </plugin> ``` <!--ENDDOCUSAURUSCODE_TABS--> From the Jet home folder execute the command: <!--DOCUSAURUSCODETABS--> <!--Standalone--> ```bash bin/jet submit <pathtoJAR_file> ``` <!--Docker--> ```bash docker run -it -v <pathtoJARfile>:/jars hazelcast/hazelcast-jet jet -t 172.17.0.2 submit /jars/<nameoftheJAR_file> ``` <!--ENDDOCUSAURUSCODE_TABS--> If you didn't specify the main class in the JAR, you must use `-c`: <!--DOCUSAURUSCODETABS--> <!--Standalone--> ```bash bin/jet submit -c <mainclassname> <pathtoJAR_file> ``` <!--Docker--> ```bash docker run -it -v <pathtoJARfile>:/jars hazelcast/hazelcast-jet jet -t 172.17.0.2 submit -c <mainclassname> /jars/<nameoftheJAR_file> ``` <!--ENDDOCUSAURUSCODE_TABS--> You can notice in the server logs that a new job has been submitted and it's running on the cluster. Since we're using the logger as the sink (`Sinks.logger()`), the output of the job appears in the server logs. You can also see a list of running jobs as follows: <!--DOCUSAURUSCODETABS--> <!--Standalone--> ```bash $ bin/jet list-jobs ID STATUS SUBMISSION TIME NAME 03de-e38d-3480-0001 RUNNING 2020-02-09T16:30:26.843 N/A ``` <!--Docker--> ```bash $ docker run -it hazelcast/hazelcast-jet jet -t 172.17.0.2 list-jobs ID STATUS SUBMISSION TIME NAME 03e3-b8f6-5340-0001 RUNNING 2020-02-13T09:36:46.898 N/A ``` <!--ENDDOCUSAURUSCODE_TABS--> As we noted earlier, whether or not you kill the client application, the job keeps running on the server. A job with a streaming source will run indefinitely until explicitly cancelled (`jet cancel <job-id>`) or the cluster is shut down. Keep the job running for now, in the next steps we'll be scaling it up and down." } ]
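Later, once you have finished with the tutorial, you can cancel the job using the ID reported by `list-jobs`; for example (the IDs below are the ones from the sample output and will differ on your cluster):

```bash
# Standalone
bin/jet cancel 03de-e38d-3480-0001

# Docker
docker run -it hazelcast/hazelcast-jet jet -t 172.17.0.2 cancel 03e3-b8f6-5340-0001
```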
{ "category": "App Definition and Development", "file_name": "user_defined_functions.md", "project_name": "Flink", "subcategory": "Streaming & Messaging" }
[ { "data": "title: User-Defined Functions weight: 5 type: docs aliases: /dev/userdefinedfunctions.html <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> Most operations require a user-defined function. This section lists different ways of how they can be specified. We also cover `Accumulators`, which can be used to gain insights into your Flink application. {{< tabs \"ff746fe9-c77d-443f-b7ec-41716f968349\" >}} {{< tab \"Java\" >}} The most basic way is to implement one of the provided interfaces: ```java class MyMapFunction implements MapFunction<String, Integer> { public Integer map(String value) { return Integer.parseInt(value); } } data.map(new MyMapFunction()); ``` You can pass a function as an anonymous class: ```java data.map(new MapFunction<String, Integer> () { public Integer map(String value) { return Integer.parseInt(value); } }); ``` Flink also supports Java 8 Lambdas in the Java API. ```java data.filter(s -> s.startsWith(\"http://\")); ``` ```java data.reduce((i1,i2) -> i1 + i2); ``` All transformations that require a user-defined function can instead take as argument a rich function. For example, instead of ```java class MyMapFunction implements MapFunction<String, Integer> { public Integer map(String value) { return Integer.parseInt(value); } } ``` you can write ```java class MyMapFunction extends RichMapFunction<String, Integer> { public Integer map(String value) { return Integer.parseInt(value); } } ``` and pass the function as usual to a `map` transformation: ```java data.map(new MyMapFunction()); ``` Rich functions can also be defined as an anonymous class: ```java data.map (new RichMapFunction<String, Integer>() { public Integer map(String value) { return Integer.parseInt(value); } }); ``` {{< /tab >}} {{< tab \"Scala\" >}} As already seen in previous examples all operations accept lambda functions for describing the operation: ```scala val data: DataStream[String] = // [...] data.filter { _.startsWith(\"http://\") } ``` ```scala val data: DataStream[Int] = // [...] data.reduce { (i1,i2) => i1 + i2 } // or data.reduce { + } ``` All transformations that take as argument a lambda function can instead take as argument a rich function. For example, instead of ```scala data.map { x => x.toInt } ``` you can write ```scala class MyMapFunction extends RichMapFunction[String, Int] { def map(in: String): Int = in.toInt } ``` and pass the function to a `map` transformation: ```scala data.map(new MyMapFunction()) ``` Rich functions can also be defined as an anonymous class: ```scala data.map (new RichMapFunction[String, Int] { def map(in: String): Int =" }, { "data": "}) ``` {{< /tab >}} {{< /tabs >}} {{< top >}} Accumulators are simple constructs with an add operation and a final accumulated result, which is available after the job ended. 
The most straightforward accumulator is a counter: You can increment it using the ```Accumulator.add(V value)``` method. At the end of the job Flink will sum up (merge) all partial results and send the result to the client. Accumulators are useful during debugging or if you quickly want to find out more about your data. Flink currently has the following built-in accumulators. Each of them implements the {{< gh_link file=\"/flink-core/src/main/java/org/apache/flink/api/common/accumulators/Accumulator.java\" name=\"Accumulator\" >}} interface. {{< ghlink file=\"/flink-core/src/main/java/org/apache/flink/api/common/accumulators/IntCounter.java\" name=\"IntCounter_\" >}}, {{< ghlink file=\"/flink-core/src/main/java/org/apache/flink/api/common/accumulators/LongCounter.java\" name=\"LongCounter_\" >}} and {{< ghlink file=\"/flink-core/src/main/java/org/apache/flink/api/common/accumulators/DoubleCounter.java\" name=\"DoubleCounter_\" >}}: See below for an example using a counter. {{< ghlink file=\"/flink-core/src/main/java/org/apache/flink/api/common/accumulators/Histogram.java\" name=\"Histogram_\" >}}: A histogram implementation for a discrete number of bins. Internally it is just a map from Integer to Integer. You can use this to compute distributions of values, e.g. the distribution of words-per-line for a word count program. How to use accumulators: First you have to create an accumulator object (here a counter) in the user-defined transformation function where you want to use it. ```java private IntCounter numLines = new IntCounter(); ``` Second you have to register the accumulator object, typically in the ```open()``` method of the rich function. Here you also define the name. ```java getRuntimeContext().addAccumulator(\"num-lines\", this.numLines); ``` You can now use the accumulator anywhere in the operator function, including in the ```open()``` and ```close()``` methods. ```java this.numLines.add(1); ``` The overall result will be stored in the ```JobExecutionResult``` object which is returned from the `execute()` method of the execution environment (currently this only works if the execution waits for the completion of the job). ```java myJobExecutionResult.getAccumulatorResult(\"num-lines\"); ``` All accumulators share a single namespace per job. Thus you can use the same accumulator in different operator functions of your job. Flink will internally merge all accumulators with the same name. A note on accumulators and iterations: Currently the result of accumulators is only available after the overall job has ended. We plan to also make the result of the previous iteration available in the next iteration. You can use {{< gh_link file=\"/flink-java/src/main/java/org/apache/flink/api/java/operators/IterativeDataSet.java#L98\" name=\"Aggregators\" >}} to compute per-iteration statistics and base the termination of iterations on such statistics. Custom accumulators: To implement your own accumulator you simply have to write your implementation of the Accumulator interface. Feel free to create a pull request if you think your custom accumulator should be shipped with Flink. You have the choice to implement either {{< gh_link file=\"/flink-core/src/main/java/org/apache/flink/api/common/accumulators/Accumulator.java\" name=\"Accumulator\" >}} or {{< gh_link file=\"/flink-core/src/main/java/org/apache/flink/api/common/accumulators/SimpleAccumulator.java\" name=\"SimpleAccumulator\" >}}. 
```Accumulator<V,R>``` is most flexible: It defines a type ```V``` for the value to add, and a result type ```R``` for the final result. E.g. for a histogram, ```V``` is a number and ```R``` is a histogram. ```SimpleAccumulator``` is for the cases where both types are the same, e.g. for counters. {{< top >}}" } ]
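Putting the accumulator steps above together, a minimal end-to-end sketch (class name, source data, and accumulator name are illustrative) could look like this:

```java
import org.apache.flink.api.common.JobExecutionResult;
import org.apache.flink.api.common.accumulators.IntCounter;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class AccumulatorExample {

    // Rich function that counts how many records it has processed via an accumulator.
    public static class CountingMapper extends RichMapFunction<String, Integer> {
        private final IntCounter numLines = new IntCounter();

        @Override
        public void open(Configuration parameters) {
            // Register the accumulator under a job-wide name.
            getRuntimeContext().addAccumulator("num-lines", this.numLines);
        }

        @Override
        public Integer map(String value) {
            this.numLines.add(1);
            return Integer.parseInt(value);
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements("1", "2", "3")
           .map(new CountingMapper())
           .print();

        // The merged accumulator result is available once the job has finished.
        JobExecutionResult result = env.execute("accumulator example");
        Integer numLines = result.getAccumulatorResult("num-lines");
        System.out.println("num-lines = " + numLines);
    }
}
```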
{ "category": "App Definition and Development", "file_name": "20150729_range_replica_naming.md", "project_name": "CockroachDB", "subcategory": "Database" }
[ { "data": "Feature Name: rangereplicanaming Status: completed Start Date: 2015-07-29 RFC PR: Cockroach Issue: Consistently use the word range to refer to a portion of the global sorted map (i.e. an entire raft consensus group), and replica to refer to a single copy of the range's data. We currently use the word \"range\" informally to refer to both the consensus group and the data owned by one member of the group, while in code the `storage.Range` type refers to the latter. This was a deliberate choice at the time because (at least in code) the latter is more common than the former. However, resolving certain issues related to replication and splits requires us to be precise about the difference, so I propose separating the two usages to improve clarity. Rename the type `storage.Range` to `storage.Replica`, and `range_*.go` to `replica_*.go`. Rename `RaftID` to `RangeID` (the name `RaftID` was chosen to avoid the ambiguity inherent in our old use of \"range\"). A new `Range` type may be created to house pieces of `Replica` that do in fact belong at the level of the range (for example, the `AdminSplit`, `AdminMerge`, and `ChangeReplicas` methods). This reverses a previously-made decision and moves/renames a lot of code. Keep `Range` for a single replica and use `Raft` or `Group` when naming things that relate to the whole group." } ]
{ "category": "App Definition and Development", "file_name": "cr-dp-views.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: crdpviews.sql linkTitle: crdpviews.sql headerTitle: crdpviews.sql description: crdpviews.sql - Part of the code kit for the \"Analyzing a normal distribution\" section within the YSQL window functions documentation. menu: v2.18: identifier: cr-dp-views parent: analyzing-a-normal-distribution weight: 40 type: docs Save this script as `crdpviews.sql`. ```plpgsql create or replace view t4_view as select k, dp_score as score from t4; -- This very simple view allows updates. create or replace view results as select method, bucket, n, mins, maxs from dp_results; ```" } ]
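As a quick sanity check that the views behave as intended (the key and score values below are purely illustrative), you could write through `t4_view` and then read the bucketed results:

```plpgsql
-- t4_view is a simple view over t4, so this update writes through to t4.dp_score.
update t4_view set score = 50.0 where k = 1;

-- Inspect the per-bucket results recorded in dp_results.
select method, bucket, n, mins, maxs from results order by method, bucket;
```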
{ "category": "App Definition and Development", "file_name": "built_in.md", "project_name": "Flink", "subcategory": "Streaming & Messaging" }
[ { "data": "title: \"Builtin Watermark Generators\" weight: 3 type: docs aliases: /dev/eventtimestampextractors.html /apis/streaming/eventtimestampextractors.html <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> As described in , Flink provides abstractions that allow the programmer to assign their own timestamps and emit their own watermarks. More specifically, one can do so by implementing the `WatermarkGenerator` interface. In order to further ease the programming effort for such tasks, Flink comes with some pre-implemented timestamp assigners. This section provides a list of them. Apart from their out-of-the-box functionality, their implementation can serve as an example for custom implementations. The simplest special case for periodic watermark generation is the when timestamps seen by a given source task occur in ascending order. In that case, the current timestamp can always act as a watermark, because no earlier timestamps will arrive. Note that it is only necessary that timestamps are ascending *per parallel data source task*. For example, if in a specific setup one Kafka partition is read by one parallel data source instance, then it is only necessary that timestamps are ascending within each Kafka partition. Flink's watermark merging mechanism will generate correct watermarks whenever parallel streams are shuffled, unioned, connected, or merged. {{< tabs \"3c316a55-c596-49fd-9f80-3d3b4329415a\" >}} {{< tab \"Java\" >}} ```java WatermarkStrategy.forMonotonousTimestamps(); ``` {{< /tab >}} {{< tab \"Scala\" >}} ```scala WatermarkStrategy.forMonotonousTimestamps() ``` {{< /tab >}} {{< tab \"Python\" >}} ```python WatermarkStrategy.formonotonoustimestamps() ``` {{< /tab >}} {{< /tabs >}} Another example of periodic watermark generation is when the watermark lags behind the maximum (event-time) timestamp seen in the stream by a fixed amount of time. This case covers scenarios where the maximum lateness that can be encountered in a stream is known in advance, e.g. when creating a custom source containing elements with timestamps spread within a fixed period of time for testing. For these cases, Flink provides the `BoundedOutOfOrdernessWatermarks` generator which takes as an argument the `maxOutOfOrderness`, i.e. the maximum amount of time an element is allowed to be late before being ignored when computing the final result for the given window. Lateness corresponds to the result of `t - t_w`, where `t` is the (event-time) timestamp of an element, and `t_w` that of the previous watermark. If `lateness > 0` then the element is considered late and is, by default, ignored when computing the result of the job for its corresponding window. 
See the documentation about [allowed lateness]({{< ref \"docs/dev/datastream/operators/windows\" >}}#allowed-lateness) for more information about working with late elements. {{< tabs \"678f404c-d241-4e45-8e2e-846e34736d6f\" >}} {{< tab \"Java\" >}} ```java WatermarkStrategy.forBoundedOutOfOrderness(Duration.ofSeconds(10)); ``` {{< /tab >}} {{< tab \"Scala\" >}} ```scala WatermarkStrategy.forBoundedOutOfOrderness(Duration.ofSeconds(10)) ``` {{< /tab >}} {{< tab \"Python\" >}} ```python WatermarkStrategy.forboundedoutoforderness(Duration.of_seconds(10)) ``` {{< /tab >}} {{< /tabs >}} {{< top >}}" } ]
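To use either strategy, you attach it to a stream together with a timestamp assigner; the following sketch assumes a hypothetical `MyEvent` type with a `getTimestamp()` accessor returning epoch milliseconds:

```java
DataStream<MyEvent> events = ...;

DataStream<MyEvent> withTimestampsAndWatermarks = events
    .assignTimestampsAndWatermarks(
        WatermarkStrategy
            .<MyEvent>forBoundedOutOfOrderness(Duration.ofSeconds(10))
            // Tell Flink where the event-time timestamp lives in each record.
            .withTimestampAssigner((event, recordTimestamp) -> event.getTimestamp()));
```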
{ "category": "App Definition and Development", "file_name": "CHANGELOG.0.23.11.md", "project_name": "Apache Hadoop", "subcategory": "Database" }
[ { "data": "<! --> | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | When a servlet filter throws an exception in init(..), the Jetty server failed silently. | Major | . | Tsz Wo Nicholas Sze | Uma Maheswara Rao G | | | DirectoryScanner: volume path prefix takes up memory for every block that is scanned | Minor | . | Colin P. McCabe | Colin P. McCabe | | | try to refeatchToken while local read InvalidToken occurred | Major | hdfs-client, security | Liang Xie | Liang Xie | | | Allow UGI to login with a known Subject | Major | . | Robert Joseph Evans | Robert Joseph Evans | | | Provide FileContext version of har file system | Major | . | Kihwal Lee | Kihwal Lee | | | Disable quota checks when replaying edit log. | Major | namenode | Kihwal Lee | Kihwal Lee | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | TestUniformSizeInputFormat fails intermittently | Major | test | Karthik Kambatla | Karthik Kambatla | | | ResourceManager webapp is using next port if configured port is already in use | Major | resourcemanager | Nishan Shetty | Kenji Kikushima | | | maximum-am-resource-percent doesn't work after refreshQueues command | Major | capacityscheduler | Devaraj K | Devaraj K | | | TestDFSIO fails intermittently on JDK7 | Major | test | Karthik Kambatla | Karthik Kambatla | | | Diagnostic message from ContainerExitEvent is ignored in ContainerImpl | Blocker | . | Omkar Vinit Joshi | Omkar Vinit Joshi | | | Distcp may succeed when it fails | Critical | tools/distcp | Daryn Sharp | Daryn Sharp | | | Client.setupIOStreams can leak socket resources on exception or error | Critical | ipc | Jason Lowe | Tsuyoshi Ozawa | | | TestJobCleanup fails because of RejectedExecutionException and NPE. | Major | . | Tsuyoshi Ozawa | Jason Lowe | | | Potential file handle leak in aggregated logs web ui | Major | . | Rohith Sharma K S | Rohith Sharma K S | | | Update capacity scheduler docs to include types on the configs | Trivial | capacityscheduler | Thomas Graves | Chen He | | | MRAppMaster does not preempt reducers when scheduled maps cannot be fulfilled | Critical | . | Lohit Vijayarenu | Lohit Vijayarenu | | | Workaround JDK7 Process fd close bug | Critical | util | Daryn Sharp | Daryn Sharp | | | CapacityScheduler tries to reserve more than a node's total memory on" }, { "data": "| Major | capacityscheduler | Thomas Graves | Thomas Graves | | | hadoop-auth has a build break due to missing dependency | Blocker | build | Chuan Liu | Chuan Liu | | | balancer should set SoTimeout to avoid indefinite hangs | Major | balancer & mover | Nathan Roberts | Nathan Roberts | | | [Diskfull] Block recovery will fail if the metafile does not have crc for all chunks of the block | Critical | datanode | Vinayakumar B | Vinayakumar B | | | Fix skip() of the short-circuit local reader (legacy). | Critical | . | Kihwal Lee | Kihwal Lee | | | har file listing doesn't work with wild card | Major | tools | Brandon Li | Brandon Li | | | Job hangs because RMContainerAllocator$AssignedRequests.preemptReduce() violates the comparator contract | Blocker | . 
| Sangjin Lee | Gera Shegalov | | | Job diagnostics can implicate wrong task for a failed job | Major | jobhistoryserver | Jason Lowe | Jason Lowe | | | ConcurrentModificationException in JobControl.toList | Major | client | Jason Lowe | Jason Lowe | | | JobSummary does not escape newlines in the job name | Major | jobhistoryserver | Jason Lowe | Akira Ajisaka | | | Average Reduce time is incorrect on Job Overview page | Major | jobhistoryserver, webapps | Rushabh S Shah | Rushabh S Shah | | | HttpServer's jetty audit log always logs 200 OK | Major | . | Daryn Sharp | Jonathan Eagles | | | aggregated log writer can write more log data then it says is the log length | Critical | . | Thomas Graves | Mit Desai | | | revisit balancer so\\_timeout | Blocker | balancer & mover | Nathan Roberts | Nathan Roberts | | | Docs still refer to 0.20.205 as stable line | Minor | . | Robert Joseph Evans | Mit Desai | | | Webhdfs authentication issues | Major | webhdfs | Daryn Sharp | Daryn Sharp | | | docs for map output compression incorrectly reference SequenceFile | Trivial | . | Todd Lipcon | Chen He | | | Javascript injection on the job status page | Blocker | . | Mit Desai | Mit Desai | | | Workaround for jetty6 acceptor startup issue | Major | . | Kihwal Lee | Kihwal Lee | | | Incorrect counting in ContentSummaryComputationContext in 0.23. | Critical | . | Kihwal Lee | Kihwal Lee | | | The History Tracking UI is broken for Tez application on ResourceManager WebUI | Critical | applications | Irina Easterling | | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | backport hadoop-10107 to branch-0.23 | Minor | ipc | Chen He | Chen He | | | Public localizer crashes with \"Localized unkown resource\" | Critical | nodemanager | Jason Lowe | Jason Lowe |" } ]
{ "category": "App Definition and Development", "file_name": "partitioning-by-time.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: Partition data by time headerTitle: Partition data by time linkTitle: Partition data by time description: Partition data for efficient data management headcontent: Partition data for efficient data management menu: v2.18: identifier: timeseries-partition-by-time parent: common-patterns-timeseries weight: 400 type: docs Partitioning refers to splitting what is logically one large table into smaller physical pieces. The key advantage of partitioning in YugabyteDB is that, because each partition is a separate table, it is efficient to keep the most significant (for example, most recent) data in one partition, and not-so-important data in other partitions so that they can be dropped easily. The following example describes the advantages of partitions in more detail. {{<note title=\"Note\">}} Partitioning is only available in . {{</note>}} {{<cluster-setup-tabs>}} Consider a scenario where you have a lot of data points from cars and you care about only the last month's data. Although you can execute a statement to delete data that is older than 30 days, because data is not immediately removed from the underlying storage (LSM-based DocDB), it could affect the performance of scans. Create a table with an example schema as follows: ```sql CREATE TABLE part_demo ( ts timestamp,/ time at which the event was generated / car varchar, / name of the car / speed int, / speed of your car / PRIMARY KEY(car HASH, ts ASC) ) PARTITION BY RANGE (ts); ``` Create partitions for each month. Also, create a `DEFAULT` partition for data that does not fall into any of the other" }, { "data": "```sql CREATE TABLE part723 PARTITION OF part_demo FOR VALUES FROM ('2023-07-01') TO ('2023-08-01'); CREATE TABLE part823 PARTITION OF part_demo FOR VALUES FROM ('2023-08-01') TO ('2023-09-01'); CREATE TABLE part923 PARTITION OF part_demo FOR VALUES FROM ('2023-09-01') TO ('2023-10-01'); CREATE TABLE defpartdemo PARTITION OF part_demo DEFAULT; ``` Insert some data into the main table `part_demo`: ```sql INSERT INTO part_demo (ts, car, speed) (SELECT '2023-07-01 00:00:00'::timestamp + make_interval(secs=>id, months=>((random()*2)::int)), 'car-' || ceil(random()2), ceil(random()60) FROM generate_series(1,100) AS id); ``` If you retrieve the rows from the respective partitions, notice that they have the rows for their respective date ranges. For example: ```sql SELECT * FROM part923 LIMIT 4; ``` ```output ts | car | speed +-+- 2023-09-01 00:00:04 | car-2 | 45 2023-09-01 00:00:05 | car-2 | 38 2023-09-01 00:00:08 | car-2 | 49 2023-09-01 00:00:23 | car-2 | 33 ``` Although data is stored in different tables as partitions, to access all the data, you just need to query the parent table. Take a look at the query plan for a select query as follows: ```sql EXPLAIN ANALYZE SELECT * FROM part_demo; ``` ```output QUERY PLAN Append (actual time=1.085..5.351 rows=100 loops=1) -> Seq Scan on public.part723 (actual time=1.079..2.431 rows=25 loops=1) Output: part723.ts, part723.car, part723.speed -> Seq Scan on public.part823 (actual time=0.665..1.555 rows=47 loops=1) Output: part823.ts, part823.car, part823.speed -> Seq Scan on public.part923 (actual time=0.648..1.342 rows=28 loops=1) Output: part923.ts, part923.car, part923.speed Planning Time: 0.105 ms Execution Time: 5.434 ms Peak Memory Usage: 19 kB ``` When querying the parent table, the child partitions are automatically queried. 
As the data is split based on time, when querying for a specific time range, the query executor fetches data only from the partition that the data is expected to be in. For example, see the query plan for fetching data for a specific month: ```sql EXPLAIN ANALYZE SELECT * FROM part_demo WHERE ts > '2023-07-01' AND ts < '2023-08-01'; ``` ```sql{.nocopy} QUERY PLAN -- Append (actual time=2.288..2.310 rows=25 loops=1) -> Seq Scan on public.part723 (actual time=2.285..2.301 rows=25 loops=1) Output: part723.ts, part723.car, part723.speed Remote Filter: ((part723.ts > '2023-07-01 00:00:00'::timestamp without time zone) AND (part723.ts < '2023-08-01 00:00:00'::timestamp without time zone)) Planning Time: 0.309 ms Execution Time: 2.411 ms Peak Memory Usage: 14 kB ``` You can see that the planner has chosen only one partition to fetch the data from. The key advantage of data partitioning is to drop older data easily. To drop the older data, all you need to do is drop that particular partition table. For example, when month `7`'s data is not needed, do the following: ```sql DROP TABLE part723; ``` ```output DROP TABLE Time: 103.214 ms ```" } ]
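To keep the rolling window going, you would typically also create the partition for the upcoming month ahead of time, following the same pattern as above (the next month in this example's timeline is shown; without it, new rows would land in the `DEFAULT` partition):

```sql
CREATE TABLE part1023 PARTITION OF part_demo
    FOR VALUES FROM ('2023-10-01') TO ('2023-11-01');
```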
{ "category": "App Definition and Development", "file_name": "key-features.md", "project_name": "Pravega", "subcategory": "Streaming & Messaging" }
[ { "data": "<!-- Copyright Pravega Authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> This document explains some of the key features of Pravega. It may be advantageous if you are already familiar with the core . Pravega was designed to support the new generation of streaming applications. Applications that deal with a large amount of data arriving continuously that needs to generate an accurate analysis of that data by considering the factors like: Delayed data, Data arriving out of order, Failure conditions. There are several Open Source tools to enable developers to build such applications, including , , , etc. These applications uses the following systems to ingest and store data: , , , , . Pravega focuses on both ingesting and storing Stream data. Pravega approaches streaming applications from a storage perspective. It enables applications to ingest Stream data continuously and stores it permanently. Such Stream data can be accessed with low latency (order of milliseconds) and also analyzes historical data. The design of Pravega incorporates lessons learned from using the to build streaming applications and the challenges to deploy streaming applications at scale that consistently deliver accurate results in a fault tolerant manner. The Pravega Architecture provides strong durability and consistency guarantees, delivering a rock solid foundation to build streaming applications upon. With the Lambda Architecture, the developer uses a complex combination of middleware tools that include batch style middleware mainly influenced by Hadoop and continuous processing tools like Storm, Samza, Kafka and others. In this architecture, batch processing is used to deliver accurate, but potentially out of date analysis of data. The second path processes data as it is ingested, and in principle the results are inaccurate, which justifies the first batch path. The programming models of the speed layer are different than those used in the batch layer. An implementation of the Lambda Architecture can be difficult to maintain and manage in production.This style of big data application design consequently has been losing traction. A different kind of architecture has been gaining traction recently that does not rely on a batch processing data path. This architecture is called . The Kappa Architecture style is a reaction to the complexity of the Lambda Architecture and relies on components that are designed for streaming, supporting stronger semantics and delivering both fast and accurate data analysis. The Kappa Architecture provides a simpler approach: There is only one data path to execute, and one implementation of the application logic to maintain.With the right tools, built for the demands of processing streaming data in a fast and accurate fashion, it becomes simpler to design and run applications in the space of IoT:(connected cars, finance, risk management, online services, etc.). Using the right tools, it is possible to build such pipelines and serve applications that present high volume and demand low latency. 
Applications often require more than one stage of" }, { "data": "Any practical system for stream analytics must be able to accommodate the composition of stages in the form of data pipelines: With data pipelines,it is important to think of guarantees end-to-end rather than on a per component basis. Our goal in Pravega is to enable the design and implementation of data pipelines with strong guarantees end-to-end. Pravega introduces a new storage primitive, a Stream, that matches continuous processing of unbounded data. In Pravega, a Stream is a named, durable, append-only and unbounded sequence of bytes. With this primitive, and the key features discussed in this document, Pravega is an ideal component to combine with Stream processing engines such as to build streaming applications. Because of Pravega's key features, we imagine that it will be the fundamental storage primitive for a new generation of streaming-oriented middleware. Let's examine the key features of Pravega: By exactly once semantics we mean that Pravega ensures that data is not duplicated and no event is missed despite failures. Of course, this statement comes with a number of caveats, like any other system that promises exactly-once semantics, but let's not dive into the gory details here. An important consideration is that exactly-once semantics is a natural part of Pravega and has been a goal and part of the design from day zero. To achieve exactly once semantics, Pravega are durable, ordered, consistent and .We discuss durable and transactional in separate sections below. By ordering, we mean that data is observed by Readers in the order it is written. In Pravega, data is written along with an application-defined Routing Key. Pravega makes in terms of Routing Keys. For example, two Events with the same Routing Key will always be read by a Reader in the order they were written. Pravega'sordering guarantees allow data reads to be replayed (e.g. when applications crash) and the results of replaying the reads will be the same. By consistency, we mean all Readers see the same ordered view of data for a given Routing Key, even in the face of failure. Systems that are \"mostly consistent\" are not sufficient for building accurate data processing. Systems that provide \"at least once\" semantics might present duplication. In such systems, a data producer might write the same data twice in some scenarios. In Pravega, writes are idempotent, rewrites done as a result of reconnection don't result in data duplication. Note that we make no guarantee when the data coming from the source already contains duplicates. Written data is opaque to Pravega and it makes no attempt to remove existing duplicates. Pravega has not limited the focus to exactly-once semantics for writing, however. We also provide, and are actively working on extending the features, that enable exactly-once end-to-end for a data pipeline. The strong consistency guarantees that the Pravega store provides along with the semantics of a data analytics engine like Flink enables such end-to-end guarantees. Unlike systems with static partitioning, Pravega can automatically scale individual data streams to accommodate changes in data ingestion rate. 
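As a purely conceptual illustration of the per-Routing-Key guarantee described above (this is not Pravega's actual hashing or segment-assignment code; the segment names and key ranges are made up), the following Python sketch shows why all Events with the same Routing Key land on the same Stream Segment and are therefore read back in the order they were written:

```python
import hashlib

# Illustrative only: a Stream split into Segments, each owning a key range in [0, 1).
SEGMENTS = {"segment-0": (0.0, 0.5), "segment-1": (0.5, 1.0)}

def segment_for(routing_key: str) -> str:
    """Map a Routing Key to the Segment whose key range contains its hash."""
    digest = hashlib.sha256(routing_key.encode()).digest()
    point = int.from_bytes(digest[:8], "big") / 2**64  # normalize to [0, 1)
    for name, (lo, hi) in SEGMENTS.items():
        if lo <= point < hi:
            return name
    return "segment-1"

# The same Routing Key always maps to the same Segment, so a Reader of that
# Segment observes those Events in write order.
for event_id in range(3):
    print("customer-42 ->", segment_for("customer-42"), "event", event_id)
```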
Imagine an IoT application with millions of devices feeding thousands of data streams with information about those devices.Imagine a pipelineof Flink jobs that process those Streams to derive business value from all that raw IoT data: Predicting device failures, Optimizing service deliverythrough those devices, Tailoring a customer's experience when interacting with those devices. Building such an application atscale is difficult without having the components be able to scale automatically as the rate of data increases and" }, { "data": "With Pravega, it is easy toelastically and independently scale data ingestion, storage and processing orchestrating the scaling of every component in a data pipeline. Pravega's support of starts with the idea that Streams are partitioned into Stream Segments.A Stream may have one or more Stream Segments; recall that a Stream Segment is a partition of the Stream associated with a range of Routing Keys. Any data written into the Stream is written to the Stream Segment associated with the data's Routing Key.Writers use domain specific meaningful Routing Keys (like customer ID, Timestamp, Machine ID, etc.) to group similar together. A Stream Segment is the fundamental unit of parallelism in Pravega Streams. Parallel Writes: A Stream with multiple Stream Segments can support more parallelism of data writes; multiple Writers writing data into the different Stream Segments potentially involving all the Pravega Servers in the cluster. Parallel reads: On the Reader side, the number of Stream Segments represents the maximum degree of read parallelism possible. If a Stream has N Stream Segments, then a Reader Group with N Readers can consume from the Stream in parallel. Increase the number of Stream Segments, you can increase the number of Readers in the Reader Group to increase the scale of processing the data from that Stream. And of course if the number of Stream Segments decreases, it would be a good idea to reduce the number of Readers. A Stream can be configured to grow the number of Stream Segments as more data is written to the Stream, and to shrink when data volume drops off. We refer to this configuration as the Stream's Service Level Objective or SLO. Pravega monitors the rate of data input to the Stream and uses the SLO to add or remove Stream Segments from a Stream. Segments are added by splitting a Segment. Segments are removed by merging two Segments.See for more detail on howPravega manages Stream Segments. It is possible to coordinate the Auto Scaling of Streams in Pravega with application scale out (in the works).Using metadata available from Pravega, applications can configurethescaling of their application components; for example, to drive the number of instances of a Flink job. Alternatively, you could use software such as , , or the to deploy new instances of an application to react to increased parallelism at the Pravega level, or to terminate instances as Pravega scales down in response to reduced rate of data ingestion. Pravega is great for distributed applications, such as microservices; it can be used as a data storage mechanism, for messaging between microservices and for other distributed computing services such as leader election. State Synchronizer, a part of the Pravega API, is the basis of sharing state across a cluster with consistency and optimistic concurrency.State Synchronizer is based on a fundamental conditional write operation in Pravega, so that data is written only if it would appear at a given position in the Stream. 
If a conditional write operation cannot meet the condition, it fails. State Synchronizer is therefore a strong synchronization primitive that can be used for shared state in a cluster, membership management, leader election and other distributed computing scenarios. For more information, refer to . Pravega write latency is of the order of milliseconds. It seamlessly scales to handle high throughput reads and writes from thousands of concurrent clients, making it ideal for IoT and other time sensitive" }, { "data": "Streams are light weight, Pravega can support millions of Streams, this frees the application from worrying about static configuration of Streams and preallocating a small fixed number of Streams and limiting Stream resource. Write operations in Pravega are low latency, under 10ms to return an acknowledgment is returned to a Writer. Furthermore, writes are optimized so that I/O throughput is limited by network bandwidth; the persistence mechanism is not the bottleneck.Pravega uses Apache BookKeeper to persist all write operations. BookKeeper persists and protects the data very efficiently. Because data is protected before the write operation is acknowledged to the Writer, data is always durable. As we discuss below, data durability is a fundamental characteristic of a storage primitive. To add further efficiency, writes to BookKeeper often involve data from multiple Stream Segments, so the cost of persisting data to disk can be amortized over several write operations. There is no durability performance trade-off with Pravega. A Reader can read from a Stream either at the tail of the Stream or at any part of the Stream's history. Unlike some log-based systems that use the same kind of storage for tail reads and writes as well as reads to historical data, Pravega uses two types of storage. The tail of the Stream is in so-called . The historical part of the Stream is in . Pravega uses efficient in-memory read ahead cache, taking advantage of the fact that Streams are usually read in large contiguous chunks and that HDFS is well suited forthosesort of large, high-throughput reads.It is also worth noting that tail reads do not impact the performance of writes. Data in Streams can be retained based on the application needs. It is constrained to the amount of data available, which is unbounded given the use of cloud storage in Tier 2. Pravega provides one convenient API to access both real-time and historical data. With Pravega, batch and real-time applications can both be handled efficiently; _yet another reason why Pravega is a great storage primitive for Kappa architectures_. If there is a value to retain old data, why not keep it around?For example, in a machine learning example, you may want to periodically change the model and train the new version of the model against as much historical data as possible to enhance and yield more accurate predictive power of the model.With Pravega auto-tiering, retaining lots of historical data does not affect the performance of tail reads and writes. Size of a stream is not limited by the storage capacity of a single server, but rather, it is limited only by the storage capacity of your storage cluster or cloud provider. As cost of storage decreases, the economic incentive to delete data goes away. Use Pravega to build pipelines of data processing, combining batch, real-time and other applications without duplicating data for every step of the pipeline. 
Consider the following data processing environment: Real time processing using Spark, Flink, and or Storm Batch processing using Hadoop Full text search can be performed using Lucene-based or Search mechanism like Elastic Search. Micro-services apps can be supported using one (or several) NoSQL databases. Using traditional approaches, one set of source data, for example, sensor data from an IoT app, would be ingested and replicated separately by each" }, { "data": "You would end up with three replicas of the data protected in the pub/sub system, three copies in HDFS, three copies in Lucene and three copies in the NoSQL database. When we consider the source data is measured in TB, the cost of data replication separated by middleware category becomes prohibitively expensive. Consider the same pipeline using Pravega and middleware adapted to use Pravega for its storage: With Pravega, the data is ingested and protected in one place; Pravega provides the single source of truth for the entire pipeline.Furthermore, with the bulk of the data being stored in Tier 2 enabled with erasure coding to efficiently protect the data, the storage cost of the data is substantially reduced. With Pravega, you don't face a compromise between performance, durability and consistency. Pravegaprovides durable storage of streaming data with strong consistency, ordering guarantees and great performance. Durability is a fundamental storage primitive requirement. Storage that could lose data is not reliable storage. Systems based on such storage are not production quality. Once a write operation is acknowledged, the data will never be lost, even when failures occur. This is because Pravega always saves data in protected, persistent storage before the write operation returns to the Writer. With Pravega, data in the Stream is protected. A Stream can be treated as a system of record, just as you would treat data stored in databases or files. A developer uses a Pravega Transaction to ensure that a set of events are written to a Stream atomically. A Pravega Transaction is part of Pravega's Writer API.Data can be written to a Stream directly through the API, or an application can write data through a Transaction. With Transactions, a Writer can persist data now, and later decide whether the data should be appended to a Stream or abandoned. Using a Transaction, data is written to the Stream only when the Transaction is committed. When the Transaction is committed, all data written to the Transaction is atomically appended to the Stream. Because Transactions are implemented in the same way as Stream Segments, data written to a Transaction is just as durable as data written directly to a Stream. If a Transaction is abandoned (e.g. if the Writer crashes) the Transaction is aborted and all data is discarded. Of course, an application can choose to abort the Transaction through the API if a condition occurs that suggests the Writer should discard the data. Transactions are key to chaining Flink jobs together. When a Flink job uses Pravega as a sink, itcan begin a Transaction, and if it successfully finishes processing, commit the Transaction, writing the data into its Pravega based sink. If the job fails for some reason, the Transaction times out and data is not written. When the job is restarted, there is no \"partial result\" in the sink that would need to be managed or cleaned up. 
Combining Transactions and other key features of Pravega, it is possible to chain Flink jobs together, having one job's Pravega-based sink serve as the source for a downstream Flink job. This allows an entire pipeline of Flink jobs to provide end-to-end exactly-once semantics and guaranteed ordering of data processing. It is also possible to coordinate Transactions across multiple Streams, so that a Flink job can use two or more Pravega-based sinks to provide source input to downstream Flink jobs. In addition, application logic can coordinate Pravega Transactions with external systems such as Flink's checkpoint store. For more information, see section." } ]
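To make the transactional write path easier to picture, here is a small Python-style sketch of the begin/write/commit-or-abort flow; the `client` object and its methods (`begin_txn`, `write_event`, `commit`, `abort`) are hypothetical stand-ins rather than the real Pravega client API, which is exposed through the Writer API mentioned earlier:

```python
# Hypothetical client wrapper -- illustrates the flow, not a real Pravega API.
def write_batch_atomically(client, stream, events):
    txn = client.begin_txn(stream)      # data is persisted durably, but not yet visible
    try:
        for routing_key, payload in events:
            txn.write_event(routing_key, payload)
        txn.commit()                    # all events appear in the Stream atomically
    except Exception:
        txn.abort()                     # nothing becomes visible; no partial result to clean up
        raise
```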
{ "category": "App Definition and Development", "file_name": "sql-ref-syntax-aux-set-var.md", "project_name": "Apache Spark", "subcategory": "Streaming & Messaging" }
[ { "data": "layout: global title: SET VAR displayTitle: SET VAR license: | Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. The `SET VAR` command sets a temporary variable which has been previously declared in the current session. To set a config variable or a hive variable use . ```sql SET { VAR | VARIABLE } { { variable_name = { expression | DEFAULT } } [, ...] | ( variable_name [, ...] ) = ( query ) } ``` variable_name* Specifies an existing variable. If you specify multiple variables, there must not be any duplicates. expression* Any expression, including scalar subqueries. DEFAULT* If you specify `DEFAULT`, the default expression of the variable is assigned, or `NULL` if there is none. query* A that returns at most one row and as many columns as the number of specified variables. Each column must be implicitly castable to the data type of the corresponding variable. If the query returns no row `NULL` values are assigned. ```sql -- DECLARE VARIABLE var1 INT DEFAULT 7; DECLARE VARIABLE var2 STRING; -- A simple assignment SET VAR var1 = 5; SELECT var1; 5 -- A complex expression assignment SET VARIABLE var1 = (SELECT max(c1) FROM VALUES(1), (2) AS t(c1)); SELECT var1; 2 -- resetting the variable to DEFAULT SET VAR var1 = DEFAULT; SELECT var1; 7 -- A multi variable assignment SET VAR (var1, var2) = (SELECT max(c1), CAST(min(c1) AS STRING) FROM VALUES(1), (2) AS t(c1)); SELECT var1, var2; 2 1 -- Too many rows SET VAR (var1, var2) = (SELECT c1, CAST(c1 AS STRING) FROM VALUES(1), (2) AS t(c1)); Error: ROWSUBQUERYTOOMANYROWS -- No rows SET VAR (var1, var2) = (SELECT c1, CAST(c1 AS STRING) FROM VALUES(1), (2) AS t(c1) WHERE 1=0); SELECT var1, var2; NULL NULL ```" } ]
{ "category": "App Definition and Development", "file_name": "smart-drivers-ycql.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: YugabyteDB YCQL drivers linkTitle: Smart drivers description: Use YugabyteDB drivers to improve performance with partition-aware load balancing and JSON support for YCQL headcontent: Manage partition-aware load balancing automatically using YCQL drivers menu: stable: identifier: smart-drivers-ycql parent: drivers-orms weight: 400 type: docs {{<tabs>}} {{<tabitem href=\"../smart-drivers/\" text=\"YSQL\" icon=\"postgres\" >}} {{<tabitem href=\"../smart-drivers-ycql/\" text=\"YCQL\" icon=\"cassandra\" active=\"true\" >}} {{</tabs>}} The standard/official drivers available for Cassandra work out-of-the-box with . Yugabyte has extended the upstream Cassandra drivers with specific changes to leverage YugabyteDB features, available as open source software under the Apache 2.0 license. | GitHub project | Based on | Learn more | | : | : | : | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | YugabyteDB YCQL drivers have the following key features. | Feature | Notes | | : | : | | | Like the upstream driver, the YugabyteDB YCQL driver retries certain operations in case of failures. | | | As with the upstream driver, you can specify multiple contact points for the initial connection, to avoid dropping connections in the case where the primary contact point is unavailable. | | | (YugabyteDB driver only) You can target a specific tablet or node for a particular query where the required data is hosted. | | | (YugabyteDB driver only) YugabyteDB YCQL drivers support the use of the JSONB data type for table columns. | YugabyteDB is a distributed, fault tolerant, and highly available database with low latencies for reads and writes. Data in YugabyteDB is automatically sharded, replicated, and balanced across multiple nodes that can potentially be in different availability zones and regions. For better performance and fault tolerance, you can also balance application traffic (database connections) across the nodes in the universe to avoid excessive CPU and memory load on any single node. As with the upstream Cassandra drivers, YugabyteDB YCQL drivers can retry certain operations if they fail the first time. This is governed by a retry-policy. The default number of retries is one for a failed operation for certain cases, depending on the language driver. The retry may happen on the same node or the next available node as per the query's plan. Some drivers allow you to provide a custom retry policy. Refer to in DataStax documentation for information on built-in retry polices. Usually, the contact point is the host address given in the connection configuration by the application. The driver creates a control connection to this host address. If this host address is unavailable at the time of application's initialization, the application itself fails. To avoid this scenario, you can specify multiple comma-separated host addresses for your applications as contact points. If the first host is not accessible, the driver tries to create a control connection to the next contact point, and so on, until it succeeds or all contact points are attempted. Yugabyte has extended the standard Cassandra drivers with the following additional features to further optimize the performance of YugabyteDB clusters. These features are enabled by" }, { "data": "The YugabyteDB drivers provide a few built-in load balancing policies which govern how requests from the client applications should be routed to a particular node in the cluster. 
Some policies prefer nodes in the local datacenter, whereas others select a node randomly from the entire cluster irrespective of the datacenter they are in. YugabyteDB partitions its table data across the nodes in the cluster as tablets, and labels one copy of each tablet as the leader tablet and other copies as follower tablets. The YCQL drivers have been modified to make use of this distinction, and optimize the read and write performance, by introducing a new load balancing policy. Whenever it is possible to calculate the hash of the partitioning key (token in Cassandra) from a query, the driver figures out the nodes hosting the leader/follower tablets and sends the query to the appropriate node. The decision on which node gets picked up is also determined by the values of Consistency Level (CL) and local datacenter specified by the application. YugabyteDB supports only two CLs: QUORUM and ONE. If none is specified, it considers QUORUM. <!-- <<table of how combination of CL and localDC affect node selection (for Java alone?)>> --> YugabyteDB supports JSONB data type, similar to PostgreSQL, and this data type is not supported by the upstream Cassandra drivers. For information on how to define and handle table columns with JSONB data type, see . YCQL drivers maintain a pool of connections for each node in a cluster. When a client application starts, the driver attempts to create a control connection to the node specified as . After the connection is established, the driver fetches the information about all the nodes in the cluster, and then creates a pool of connections for each of those nodes. You can configure the size of the connection pool. For example, in the Java driver, you can define different pool sizes for nodes in a local datacenter, and for those in other (remote) datacenters. If the number of connections in a pool go lower than the defined size for any reason, the driver immediately tries to fill the gap in the background. clusters automatically use load balancing provided by the cloud provider where the cluster is provisioned. The nodes are not directly accessible to the outside world. This is in contrast to the desired setup of YCQL drivers, which need direct access to all the nodes in a cluster. The drivers still work in this situation, but the client applications lose some of the benefits of the driver, including and the default . To take advantage of the driver's partition-aware load balancing feature when connecting to clusters in YugabyteDB Managed, the applications must be deployed in a VPC that has been peered with the cluster VPC; VPC peering enables the client application to access all nodes of the cluster. For information on VPC peering in YugabyteDB Managed, refer to . If VPC peering is not possible, you can attempt to restore the retry capability by increasing the size of the connection pool (the Java driver default size is 1) and providing a custom retry policy to retry the failed operation on the same node. Note that there is no workaround for partition-aware query routing. where YugabyteDB has forked a Cassandra Connector for Apache Spark." } ]
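As an illustration of the JSONB support described above, a minimal YCQL sketch is shown below; the keyspace, table, and document contents are made up for the example, and the full set of JSONB operators is covered in the YCQL documentation:

```cql
CREATE KEYSPACE IF NOT EXISTS store;

CREATE TABLE store.books (id INT PRIMARY KEY, details JSONB);

INSERT INTO store.books (id, details)
  VALUES (1, '{"name": "Macbeth", "author": {"first_name": "William", "last_name": "Shakespeare"}}');

-- '->' selects a JSONB sub-object, '->>' returns the value as text
SELECT details->'author'->>'last_name' FROM store.books WHERE id = 1;
```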
{ "category": "App Definition and Development", "file_name": "compiling-osx.md", "project_name": "Apache Heron", "subcategory": "Streaming & Messaging" }
[ { "data": "id: version-0.20.5-incubating-compiling-osx title: Compiling on OS X sidebar_label: Compiling on OS X original_id: compiling-osx <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> This is a step-by-step guide to building Heron on Mac OS X (versions 10.10 and 10.11). If isn't yet installed on your system, you can install it using this one-liner: ```bash $ /usr/bin/ruby -e \"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)\" ``` Bazelisk helps automate the management of Bazel versions ```bash brew install bazelisk ``` ```bash brew install automake brew install cmake brew install libtool brew install ant brew install pkg-config ``` ```bash $ export CC=/usr/bin/clang $ export CXX=/usr/bin/clang++ $ echo $CC $CXX ``` ```bash $ git clone https://github.com/apache/incubator-heron.git && cd incubator-heron ``` ```bash $ ./bazel_configure.py ``` If this configure script fails with missing dependencies, Homebrew can be used to install those dependencies. ```bash $ bazel build heron/... ``` This will build in the Bazel default `fastbuild` mode. Production release packages include additional performance optimizations not enabled by default. To enable production optimizations, include the `opt` flag. This defaults to optimization level `-O2`. The second option overrides the setting to bump it to `-CO3`. ```bash $ bazel build -c opt heron/... ``` ```bash $ bazel build -c opt --copt=-O3 heron/... ``` If you wish to add the code syntax style check, add `--config=stylecheck`. ```bash $ bazel build scripts/packages:binpkgs $ bazel build scripts/packages:tarpkgs ``` This will install Heron packages in the `bazel-bin/scripts/packages/` directory." } ]
{ "category": "App Definition and Development", "file_name": "point-in-time-recovery-ysql.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: Point-in-Time Recovery for YSQL headerTitle: Point-in-time recovery linkTitle: Point-in-time recovery description: Restore data from a specific point in time in YugabyteDB for YSQL menu: v2.18: identifier: cluster-management-point-in-time-recovery parent: explore-cluster-management weight: 704 type: docs <ul class=\"nav nav-tabs-alt nav-tabs-yb\"> <li > <a href=\"../point-in-time-recovery-ysql/\" class=\"nav-link active\"> <i class=\"icon-postgres\" aria-hidden=\"true\"></i> YSQL </a> </li> <li > <a href=\"../point-in-time-recovery-ycql/\" class=\"nav-link\"> <i class=\"icon-cassandra\" aria-hidden=\"true\"></i> YCQL </a> </li> </ul> Point-in-time recovery (PITR) allows you to restore the state of your cluster's data and of metadata from a specific point in time. This can be relative, such as \"three hours ago\", or an absolute timestamp. For more information, see . For details on the `yb-admin` commands, refer to the section of the yb-admin documentation. The following examples show how you can use the PITR feature by creating a database and populating it, creating a snapshot schedule, and restoring from a snapshot on the schedule. Note that the examples are deliberately simplified. In many of the scenarios, you could drop the index or table to recover. Consider the examples as part of an effort to undo a larger schema change, such as a database migration, which has performed several operations. The examples run on a local multi-node YugabyteDB universe. To create a universe, see . The process of undoing data changes involves creating and taking a snapshot of a table, and then performing a restore from either an absolute or relative time. Before attempting a restore, you need to confirm that there is no restore in progress for the subject keyspace or table; if multiple restore commands are issued, the data might enter an inconsistent state. For details, see . Start the YSQL shell and connect to your local instance: ```sh ./bin/ysqlsh -h 127.0.0.1 ``` Create a table and populate some sample data: ```sql CREATE TABLE employees ( employee_no integer PRIMARY KEY, name text, department text, salary integer ); INSERT INTO employees (employee_no, name, department, salary) VALUES (1221, 'John Smith', 'Marketing', 50000), (1222, 'Bette Davis', 'Sales', 55000), (1223, 'Lucille Ball', 'Operations', 70000), (1224, 'John Zimmerman', 'Sales', 60000); SELECT * from employees; ``` ```output employee_no | name | department | salary -+-++-- 1223 | Lucille Ball | Operations | 70000 1224 | John Zimmerman | Sales | 60000 1221 | John Smith | Marketing | 50000 1222 | Bette Davis | Sales | 55000 (4 rows) ``` Create a snapshot as follows: At a terminal prompt, create a snapshot schedule for the database from a shell prompt. 
In the following example, the schedule is one snapshot every minute, and each snapshot is retained for ten minutes: ```sh ./bin/yb-admin \\ -master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \\ createsnapshotschedule 1 10 ysql.yugabyte ``` ```output.json { \"schedule_id\": \"0e4ceb83-fe3d-43da-83c3-013a8ef592ca\" } ``` Verify that a snapshot has happened: ```sh ./bin/yb-admin \\ -master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \\ listsnapshotschedules ``` ```output.json { \"schedules\": [ { \"id\": \"0e4ceb83-fe3d-43da-83c3-013a8ef592ca\", \"options\": { \"interval\": \"60.000s\", \"retention\": \"600.000s\" }, \"snapshots\": [ { \"id\": \"8d588cb7-13f2-4bda-b584-e9be47a144c5\", \"snapshottimeutc\":" }, { "data": "} ] } ] } ``` From a command prompt, get a timestamp: ```sh python -c 'import datetime; print(datetime.datetime.now().strftime(\"%s%f\"))' ``` ```output 1620418817729963 ``` Add a row for employee 9999 to the table: ```sql INSERT INTO employees (employee_no, name, department, salary) VALUES (9999, 'Wrong Name', 'Marketing', 10000); SELECT * FROM employees; ``` ```output employee_no | name | department | salary -+-++-- 1223 | Lucille Ball | Operations | 70000 9999 | Wrong Name | Marketing | 10000 1224 | John Zimmerman | Sales | 60000 1221 | John Smith | Marketing | 50000 1222 | Bette Davis | Sales | 55000 (5 rows) ``` Restore the snapshot schedule to the timestamp you obtained before you added the data, at a terminal prompt: ```sh ./bin/yb-admin \\ -master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \\ restoresnapshotschedule 0e4ceb83-fe3d-43da-83c3-013a8ef592ca 1620418817729963 ``` ```output.json { \"snapshot_id\": \"2287921b-1cf9-4bbc-ad38-e309f86f72e9\", \"restoration_id\": \"1c5ef7c3-a33a-46b5-a64e-3fa0c72709eb\" } ``` Next, verify the restoration is in `RESTORED` state (you'll see more snapshots in the list, as well): ```sh ./bin/yb-admin \\ -master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \\ list_snapshots ``` ```output Snapshot UUID State Creation Time 8d588cb7-13f2-4bda-b584-e9be47a144c5 COMPLETE 2023-04-20 00:24:58.246932 1f4db0e2-0706-45db-b157-e577702a648a COMPLETE 2023-04-20 00:26:03.257519 b91c734b-5c57-4276-851e-f982bee73322 COMPLETE 2023-04-20 00:27:08.272905 04fc6f05-8775-4b43-afbd-7a11266da110 COMPLETE 2023-04-20 00:28:13.287202 e7bc7b48-351b-4713-b46b-dd3c9c028a79 COMPLETE 2023-04-20 00:29:18.294031 2287921b-1cf9-4bbc-ad38-e309f86f72e9 COMPLETE 2023-04-20 00:30:23.306355 97aa2968-6b56-40ce-b2c5-87d2e54e9786 COMPLETE 2023-04-20 00:31:28.319685 Restoration UUID State 1c5ef7c3-a33a-46b5-a64e-3fa0c72709eb RESTORED ``` In the YSQL shell, verify the data is restored, without a row for employee 9999: ```sql yugabyte=# select * from employees; ``` ```output employee_no | name | department | salary -+-++-- 1223 | Lucille Ball | Operations | 70000 1224 | John Zimmerman | Sales | 60000 1221 | John Smith | Marketing | 50000 1222 | Bette Davis | Sales | 55000 (4 rows) ``` In addition to restoring to a particular timestamp, you can also restore from a relative time, such as \"ten minutes ago\". When you specify a relative time, you can specify any or all of days, hours, minutes, and seconds. 
For example: `\"5m\"` to restore from five minutes ago `\"1h\"` to restore from one hour ago `\"3d\"` to restore from three days ago `\"1h 5m\"` to restore from one hour and five minutes ago Relative times can be in any of the following formats (again, note that you can specify any or all of days, hours, minutes, and seconds): ISO 8601: `3d 4h 5m 6s` Abbreviated PostgreSQL: `3 d 4 hrs 5 mins 6 secs` Traditional PostgreSQL: `3 days 4 hours 5 minutes 6 seconds` SQL standard: `D H:M:S` Refer to the yb-admin for more details. In addition to data changes, you can also use PITR to recover from metadata changes, such as creating, altering, and deleting tables and indexes. Before you begin, if a local universe is currently running, first , and create a local multi-node YugabyteDB universe as described in . At a terminal prompt, create a snapshot schedule for the database. In this example, the schedule is on the default `yugabyte` database, one snapshot every minute, and each snapshot is retained for ten minutes: ```sh" }, { "data": "\\ -master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \\ createsnapshotschedule 1 10 ysql.yugabyte ``` ```output.json { \"schedule_id\": \"1fb2d85a-3608-4cb1-af63-3e4062300dc1\" } ``` Verify that a snapshot has happened: ```sh ./bin/yb-admin \\ -master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \\ listsnapshotschedules ``` ```output.json { \"schedules\": [ { \"id\": \"1fb2d85a-3608-4cb1-af63-3e4062300dc1\", \"options\": { \"filter\": \"ysql.yugabyte\", \"interval\": \"1 min\", \"retention\": \"10 min\" }, \"snapshots\": [ { \"id\": \"34b44c96-c340-4648-a764-7965fdcbd9f1\", \"snapshot_time\": \"2023-04-20 00:20:38.214201\" } ] } ] } ``` To restore from an absolute time, get a timestamp from the command prompt. You'll create a table, then restore to this time to undo the table creation: ```sh python -c 'import datetime; print(datetime.datetime.now().strftime(\"%s%f\"))' ``` ```output 1681964544554620 ``` Start the YSQL shell and create a table as described in . Restore the snapshot schedule to the timestamp you obtained before you created the table, at a terminal prompt: ```sh ./bin/yb-admin \\ -master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \\ restoresnapshotschedule 1fb2d85a-3608-4cb1-af63-3e4062300dc1 1681964544554620 ``` ```output.json { \"snapshot_id\": \"0f1582ea-c10d-4ad9-9cbf-e2313156002c\", \"restoration_id\": \"a61046a2-8b77-4d6e-87e1-1dc44b5ebc69\" } ``` Verify the restoration is in `RESTORED` state (you'll see more snapshots in the list, as well): ```sh ./bin/yb-admin \\ -master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \\ list_snapshots ``` ```output Snapshot UUID State Creation Time 34b44c96-c340-4648-a764-7965fdcbd9f1 COMPLETE 2023-04-20 00:20:38.214201 bacd0b53-6a51-4628-b898-e35116860735 COMPLETE 2023-04-20 00:21:43.221612 0f1582ea-c10d-4ad9-9cbf-e2313156002c COMPLETE 2023-04-20 00:22:48.231456 617f9df8-3087-4b04-9187-399b52e738ee COMPLETE 2023-04-20 00:23:53.239147 489e6903-2848-478b-9519-577084e49adf COMPLETE 2023-04-20 00:24:58.246932 Restoration UUID State a61046a2-8b77-4d6e-87e1-1dc44b5ebc69 RESTORED ``` Verify that the table no longer exists: ```sh ./bin/ysqlsh -d yugabyte; ``` ```sql \\d employees; ``` ```output Did not find any relation named \"employees\". ``` Start the YSQL shell and create a table as described in . 
Verify that a snapshot has happened since table creation: ```sh ./bin/yb-admin \\ -master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \\ listsnapshotschedules ``` ```output.json { \"schedules\": [ { \"id\": \"1fb2d85a-3608-4cb1-af63-3e4062300dc1\", \"options\": { \"filter\": \"ysql.yugabyte\", \"interval\": \"1 min\", \"retention\": \"10 min\" }, \"snapshots\": [ { \"id\": \"34b44c96-c340-4648-a764-7965fdcbd9f1\", \"snapshot_time\": \"2023-04-20 00:20:38.214201\" }, { \"id\": \"bacd0b53-6a51-4628-b898-e35116860735\", \"snapshot_time\": \"2023-04-20 00:21:43.221612\", \"previoussnapshottime\": \"2023-04-20 00:20:38.214201\" }, [...] { \"id\": \"c98c890a-97ae-49f0-9c73-8d27c430874f\", \"snapshot_time\": \"2023-04-20 00:28:13.287202\", \"previoussnapshottime\": \"2023-04-20 00:27:08.272905\" } ] } ] } ``` To restore from an absolute time, get a timestamp from the command prompt. You'll delete the table, then restore to this time to undo the delete: ```sh python -c 'import datetime; print(datetime.datetime.now().strftime(\"%s%f\"))' ``` ```output 1681965106732671 ``` In ysqlsh, drop this table: ```sql drop table employees; ``` ```output DROP TABLE ``` Restore the snapshot schedule to the timestamp you obtained before you deleted the table, at a terminal prompt: ```sh ./bin/yb-admin \\ -master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \\ restoresnapshotschedule 1fb2d85a-3608-4cb1-af63-3e4062300dc1 1681965106732671 ``` ```output.json { \"snapshot_id\": \"fc95304a-b713-4468-a128-d5155c85333a\", \"restoration_id\": \"2bc005ca-c842-4c7c-9cc7-34e1f75ca467\" } ``` Verify the restoration is in `RESTORED` state (you'll see more snapshots in the list, as well): ```sh ./bin/yb-admin \\ -master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \\ list_snapshots ``` ```output Snapshot UUID State Creation Time 489e6903-2848-478b-9519-577084e49adf COMPLETE 2023-04-20 00:24:58.246932 e4c12e39-6b15-49f2-97d1-86f777650d6b COMPLETE 2023-04-20 00:26:03.257519 3d1176d0-f56d-44f3-bb29-2fcb9b08186b COMPLETE 2023-04-20 00:27:08.272905 c98c890a-97ae-49f0-9c73-8d27c430874f COMPLETE 2023-04-20 00:28:13.287202 17e9c8f7-2965-48d0-8459-c9dc90b8ed93 COMPLETE 2023-04-20 00:29:18.294031 e1900004-9a89-4c3a-b60b-4b570058c4da COMPLETE 2023-04-20 00:30:23.306355 15ac0ae6-8ac2-4248-af69-756bb0abf534 COMPLETE 2023-04-20 00:31:28.319685 fc95304a-b713-4468-a128-d5155c85333a COMPLETE 2023-04-20 00:32:33.332482 4a42a175-8065-4def-969a-b33ddc1bbdba COMPLETE 2023-04-20 00:33:38.345533 Restoration UUID State a61046a2-8b77-4d6e-87e1-1dc44b5ebc69 RESTORED 2bc005ca-c842-4c7c-9cc7-34e1f75ca467 RESTORED ``` Verify that the table exists with the data: ```sh" }, { "data": "-d yugabyte; ``` ```sql select * from employees; ``` ```output employee_no | name | department | salary -+-++-- 1223 | Lucille Ball | Operations | 70000 1224 | John Zimmerman | Sales | 60000 1221 | John Smith | Marketing | 50000 1222 | Bette Davis | Sales | 55000 (4 rows) ``` Verify that a snapshot has happened since table restoration: ```sh ./bin/yb-admin \\ -master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \\ listsnapshotschedules ``` ```output.json { \"schedules\": [ { \"id\": \"1fb2d85a-3608-4cb1-af63-3e4062300dc1\", \"options\": { \"filter\": \"ysql.yugabyte\", \"interval\": \"1 min\", \"retention\": \"10 min\" }, \"snapshots\": [ { \"id\": \"e4c12e39-6b15-49f2-97d1-86f777650d6b\", \"snapshot_time\": \"2023-04-20 00:26:03.257519\", \"previoussnapshottime\": \"2023-04-20 00:24:58.246932\" }, { \"id\": 
\"3d1176d0-f56d-44f3-bb29-2fcb9b08186b\", \"snapshot_time\": \"2023-04-20 00:27:08.272905\", \"previoussnapshottime\": \"2023-04-20 00:26:03.257519\" }, [...] { \"id\": \"d30fb638-6315-466a-a080-a6050e0dbb04\", \"snapshot_time\": \"2023-04-20 00:34:43.358691\", \"previoussnapshottime\": \"2023-04-20 00:33:38.345533\" } ] } ] } ``` To restore from an absolute time, get a timestamp from the command prompt. You'll add a column to the table, then restore to this time in order to undo the column addition: ```sh python -c 'import datetime; print(datetime.datetime.now().strftime(\"%s%f\"))' ``` ```output 1681965472490517 ``` Using the same database, alter your table by adding a column: ```sql alter table employees add column v2 int; select * from employees; ``` ```output employee_no | name | department | salary | v2 -+-++--+- 1223 | Lucille Ball | Operations | 70000 | 1224 | John Zimmerman | Sales | 60000 | 1221 | John Smith | Marketing | 50000 | 1222 | Bette Davis | Sales | 55000 | (4 rows) ``` At a terminal prompt, restore the snapshot schedule to the timestamp you obtained before you added the column: ```sh ./bin/yb-admin \\ -master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \\ restoresnapshotschedule 1fb2d85a-3608-4cb1-af63-3e4062300dc1 1681965472490517 ``` ```output.json { \"snapshot_id\": \"b3c12c51-e7a3-41a5-bf0d-77cde8520527\", \"restoration_id\": \"470a8e0b-9fe4-418f-a13a-773bdedca013\" } ``` Verify the restoration is in `RESTORED` state (you'll see more snapshots in the list, as well): ```sh ./bin/yb-admin \\ -master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \\ list_snapshots ``` ```output Snapshot UUID State Creation Time e1900004-9a89-4c3a-b60b-4b570058c4da COMPLETE 2023-04-20 00:30:23.306355 15ac0ae6-8ac2-4248-af69-756bb0abf534 COMPLETE 2023-04-20 00:31:28.319685 fc95304a-b713-4468-a128-d5155c85333a COMPLETE 2023-04-20 00:32:33.332482 4a42a175-8065-4def-969a-b33ddc1bbdba COMPLETE 2023-04-20 00:33:38.345533 d30fb638-6315-466a-a080-a6050e0dbb04 COMPLETE 2023-04-20 00:34:43.358691 d228210b-cd87-4a74-bff6-42108f73456f COMPLETE 2023-04-20 00:35:48.372783 390e4fec-8aa6-466d-827d-6bee435af5aa COMPLETE 2023-04-20 00:36:53.394833 b3c12c51-e7a3-41a5-bf0d-77cde8520527 COMPLETE 2023-04-20 00:37:58.408458 d99317fe-6d20-4c7f-b469-ffb16409fbcf COMPLETE 2023-04-20 00:39:03.419109 Restoration UUID State a61046a2-8b77-4d6e-87e1-1dc44b5ebc69 RESTORED 2bc005ca-c842-4c7c-9cc7-34e1f75ca467 RESTORED 470a8e0b-9fe4-418f-a13a-773bdedca013 RESTORED ``` Check that the v2 column is gone: ```sql select * from employees; ``` ```output employee_no | name | department | salary -+-++-- 1223 | Lucille Ball | Operations | 70000 1224 | John Zimmerman | Sales | 60000 1221 | John Smith | Marketing | 50000 1222 | Bette Davis | Sales | 55000 (4 rows) ``` To restore from an absolute time, get a timestamp from the command prompt. You'll remove a column from the table, then restore to this time to get the column back: ```sh python -c 'import datetime;" }, { "data": "``` ```output 1681965684502460 ``` Using the same database, alter your table by dropping a column: ```sql alter table employees drop salary; select * from employees; ``` ```output employee_no | name | department -+-+-- 1223 | Lucille Ball | Operations 1224 | John Zimmerman | Sales 1221 | John Smith | Marketing 1222 | Bette Davis | Sales (4 rows) ``` Restore the snapshot schedule to the timestamp you obtained before you dropped the column, at a terminal prompt. 
```sh ./bin/yb-admin \\ -master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \\ restoresnapshotschedule 1fb2d85a-3608-4cb1-af63-3e4062300dc1 1681965684502460 ``` ```output { \"snapshot_id\": \"49311e65-cc5b-4d41-9f87-e84d630016a9\", \"restoration_id\": \"fe08826b-9b1d-4621-99ca-505d1d58e184\" } ``` Verify the restoration is in `RESTORED` state (you'll see more snapshots in the list, as well): ```sh ./bin/yb-admin \\ -master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \\ list_snapshots ``` ```output Snapshot UUID State Creation Time 4a42a175-8065-4def-969a-b33ddc1bbdba COMPLETE 2023-04-20 00:33:38.345533 d30fb638-6315-466a-a080-a6050e0dbb04 COMPLETE 2023-04-20 00:34:43.358691 d228210b-cd87-4a74-bff6-42108f73456f COMPLETE 2023-04-20 00:35:48.372783 390e4fec-8aa6-466d-827d-6bee435af5aa COMPLETE 2023-04-20 00:36:53.394833 b3c12c51-e7a3-41a5-bf0d-77cde8520527 COMPLETE 2023-04-20 00:37:58.408458 d99317fe-6d20-4c7f-b469-ffb16409fbcf COMPLETE 2023-04-20 00:39:03.419109 3f6651a5-00b2-4a9d-99e2-63b8b8e75ccf COMPLETE 2023-04-20 00:40:08.432723 7aa1054a-1c96-4d33-bd37-02cdefaa5cad COMPLETE 2023-04-20 00:41:13.445282 49311e65-cc5b-4d41-9f87-e84d630016a9 COMPLETE 2023-04-20 00:42:18.454674 Restoration UUID State a61046a2-8b77-4d6e-87e1-1dc44b5ebc69 RESTORED 2bc005ca-c842-4c7c-9cc7-34e1f75ca467 RESTORED 470a8e0b-9fe4-418f-a13a-773bdedca013 RESTORED fe08826b-9b1d-4621-99ca-505d1d58e184 RESTORED ``` Verify that the salary column is back: ```sql select * from employees; ``` ```output employee_no | name | department | salary -+-++-- 1223 | Lucille Ball | Operations | 70000 1224 | John Zimmerman | Sales | 60000 1221 | John Smith | Marketing | 50000 1222 | Bette Davis | Sales | 55000 (4 rows) ``` To restore from an absolute time, get a timestamp from the command prompt. 
You'll create an index on the table, then restore to this time to undo the index creation: ```sh python -c 'import datetime; print(datetime.datetime.now().strftime(\"%s%f\"))' ``` ```output 1681965868912921 ``` Create an index on the table: ```sql create index t1index on employees (employeeno); \\d employees; ``` ```output Table \"public.employees\" Column | Type | Collation | Nullable | Default -++--+-+ employee_no | integer | | not null | name | text | | | department | text | | | salary | integer | | | Indexes: \"employeespkey\" PRIMARY KEY, lsm (employeeno HASH) \"t1index\" lsm (employeeno HASH) ``` Restore the snapshot schedule to the timestamp you obtained before you created the index, at a terminal prompt: ```sh ./bin/yb-admin \\ -master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \\ restoresnapshotschedule 1fb2d85a-3608-4cb1-af63-3e4062300dc1 1681965868912921 ``` ```output { \"snapshot_id\": \"6a014fd7-5aad-4da0-883b-0c59a9261ed6\", \"restoration_id\": \"6698a1c4-58f4-48cb-8ec7-fa7b31ecca72\" } ``` Verify the restoration is in `RESTORED` state (you'll see more snapshots in the list, as well): ```sh ./bin/yb-admin \\ -master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \\ list_snapshots ``` ```output Snapshot UUID State Creation Time 390e4fec-8aa6-466d-827d-6bee435af5aa COMPLETE 2023-04-20 00:36:53.394833 b3c12c51-e7a3-41a5-bf0d-77cde8520527 COMPLETE 2023-04-20 00:37:58.408458 d99317fe-6d20-4c7f-b469-ffb16409fbcf COMPLETE 2023-04-20 00:39:03.419109 3f6651a5-00b2-4a9d-99e2-63b8b8e75ccf COMPLETE 2023-04-20 00:40:08.432723 7aa1054a-1c96-4d33-bd37-02cdefaa5cad COMPLETE 2023-04-20 00:41:13.445282 49311e65-cc5b-4d41-9f87-e84d630016a9 COMPLETE 2023-04-20 00:42:18.454674 c6d37ea5-002e-4dff-b691-94d458f4b1f9 COMPLETE 2023-04-20 00:43:23.469233 98879e83-d507-496c-aa69-368fc2de8cf8 COMPLETE 2023-04-20 00:44:28.476244 6a014fd7-5aad-4da0-883b-0c59a9261ed6 COMPLETE 2023-04-20 00:45:33.467234 Restoration UUID State a61046a2-8b77-4d6e-87e1-1dc44b5ebc69 RESTORED 2bc005ca-c842-4c7c-9cc7-34e1f75ca467 RESTORED 470a8e0b-9fe4-418f-a13a-773bdedca013 RESTORED fe08826b-9b1d-4621-99ca-505d1d58e184 RESTORED 6698a1c4-58f4-48cb-8ec7-fa7b31ecca72 RESTORED ``` Verify that the index is gone: ```sql \\d employees; ``` ```output Table \"public.employees\" Column | Type | Collation | Nullable | Default -++--+-+ employee_no | integer | | not null | name | text | | | department | text | | | salary | integer | | | Indexes: \"employeespkey\" PRIMARY KEY, lsm (employeeno HASH) ``` Along similar lines, you can undo index deletions and alter table rename columns." } ]
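The walkthroughs above all restore to an absolute timestamp; a relative-time restore (described near the start of this page) would look roughly like the following, reusing the same schedule ID. Confirm the exact `minus` syntax against the yb-admin reference for your release:

```sh
./bin/yb-admin \\ -master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \\ restore_snapshot_schedule 1fb2d85a-3608-4cb1-af63-3e4062300dc1 minus 5m
```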
{ "category": "App Definition and Development", "file_name": "cluster_conf.md", "project_name": "EDB", "subcategory": "Database" }
[ { "data": "CloudNativePG supports mounting custom files inside the Postgres pods through `.spec.projectedVolumeTemplate`. This ability is useful for several Postgres features and extensions that require additional data files. In CloudNativePG, the `.spec.projectedVolumeTemplate` field is a template in Kubernetes that allows you to mount arbitrary data under the `/projected` folder in Postgres pods. This simple example shows how to mount an existing TLS secret (named `sample-secret`) as files into Postgres pods. The values for the secret keys `tls.crt` and `tls.key` in `sample-secret` are mounted as files into the paths `/projected/certificate/tls.crt` and `/projected/certificate/tls.key` in the Postgres pod. ```yaml apiVersion: postgresql.cnpg.io/v1 kind: Cluster metadata: name: cluster-example-projected-volumes spec: instances: 3 projectedVolumeTemplate: sources: secret: name: sample-secret items: key: tls.crt path: certificate/tls.crt key: tls.key path: certificate/tls.key storage: size: 1Gi ``` You can find a complete example that uses a projected volume template to mount the secret and ConfigMap in the deployment manifest. CloudNativePG relies on for part of the internal activities. Ephemeral volumes exist for the sole duration of a pod's life, without persisting across pod restarts. The operator uses by default an `emptyDir` volume, which can be customized by using the `.spec.ephemeralVolumesSizeLimit field`. This can be overridden by specifying a volume claim template in the `.spec.ephemeralVolumeSource` field. In the following example, a `1Gi` ephemeral volume is set. ```yaml apiVersion: postgresql.cnpg.io/v1 kind: Cluster metadata: name: cluster-example-ephemeral-volume-source spec: instances: 3 ephemeralVolumeSource: volumeClaimTemplate: spec: accessModes: [\"ReadWriteOnce\"] storageClassName: \"scratch-storage-class\" resources: requests: storage: 1Gi ``` Both `.spec.emphemeralVolumeSource` and `.spec.ephemeralVolumesSizeLimit.temporaryData` cannot be specified simultaneously. This volume is used as shared memory space for Postgres and as an ephemeral type but stored in memory. You can configure an upper bound on the size using the `.spec.ephemeralVolumesSizeLimit.shm` field in the cluster spec. Use this field only in case of . You can customize some system behavior using environment variables. One example is the `LDAPCONF` variable, which can point to a custom LDAP configuration file. Another example is the `TZ` environment variable, which represents the timezone used by the PostgreSQL container. CloudNativePG allows you to set custom environment variables using the `env` and the `envFrom` stanza of the cluster specification. This example defines a PostgreSQL cluster using the `Australia/Sydney` timezone as the default cluster-level timezone: ```yaml apiVersion: postgresql.cnpg.io/v1 kind: Cluster metadata: name: cluster-example spec: instances: 3 env: name: TZ value: Australia/Sydney storage: size: 1Gi ``` The `envFrom` stanza can refer to ConfigMaps or secrets to use their content as environment variables: ```yaml apiVersion: postgresql.cnpg.io/v1 kind: Cluster metadata: name: cluster-example spec: instances: 3 envFrom: configMapRef: name: config-map-name secretRef: name: secret-name storage: size: 1Gi ``` The operator doesn't allow setting the following environment variables: `POD_NAME` `NAMESPACE` Any environment variable whose name starts with `PG`. Any change in the `env` or in the `envFrom` section triggers a rolling update of the PostgreSQL pods. 
If the `env` or the `envFrom` section refers to a secret or a ConfigMap, the operator doesn't detect changes to their contents and doesn't trigger a rollout. The kubelet behaves the same way for pods, so you must trigger the pod rollout manually." } ]
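For reference, the two size limits mentioned above live under `.spec.ephemeralVolumesSizeLimit`; a minimal sketch follows, with the `temporaryData` and `shm` values chosen arbitrarily for illustration:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example-ephemeral-limits
spec:
  instances: 3
  ephemeralVolumesSizeLimit:
    temporaryData: 1Gi
    shm: 256Mi
  storage:
    size: 1Gi
```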
{ "category": "App Definition and Development", "file_name": "disabling_catchall.md", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "+++ title = \"`basic_result(Args...) = delete`\" description = \"Disabling catchall constructor used to give useful diagnostic error when trying to use non-inplace constructors when `predicate::constructors_enabled` is false.\" categories = [\"constructors\", \"disabling-constructors\"] weight = 160 +++ Disabling catchall constructor used to give useful diagnostic error when trying to use non-inplace constructors when `predicate::constructors_enabled` is false. Requires: `predicate::constructors_enabled` is false. Complexity: N/A." } ]
{ "category": "App Definition and Development", "file_name": "HdfsRollingUpgrade.md", "project_name": "Apache Hadoop", "subcategory": "Database" }
[ { "data": "<!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> HDFS Rolling Upgrade ==================== <!-- MACRO{toc|fromDepth=0|toDepth=3} --> Introduction HDFS rolling upgrade allows upgrading individual HDFS daemons. For examples, the datanodes can be upgraded independent of the namenodes. A namenode can be upgraded independent of the other namenodes. The namenodes can be upgraded independent of datanodes and journal nodes. Upgrade In Hadoop v2, HDFS supports highly-available (HA) namenode services and wire compatibility. These two capabilities make it feasible to upgrade HDFS without incurring HDFS downtime. In order to upgrade a HDFS cluster without downtime, the cluster must be setup with HA. If there is any new feature which is enabled in new software release, may not work with old software release after upgrade. In such cases upgrade should be done by following steps. Disable new feature. Upgrade the cluster. Enable the new feature. Note that rolling upgrade is supported only from Hadoop-2.4.0 onwards. In an HA cluster, there are two or more NameNodes (NNs), many DataNodes (DNs), a few JournalNodes (JNs) and a few ZooKeeperNodes (ZKNs). JNs is relatively stable and does not require upgrade when upgrading HDFS in most of the cases. In the rolling upgrade procedure described here, only NNs and DNs are considered but JNs and ZKNs are not. Upgrading JNs and ZKNs may incur cluster downtime. Suppose there are two namenodes NN1 and NN2, where NN1 and NN2 are respectively in active and standby states. The following are the steps for upgrading an HA cluster: Prepare Rolling Upgrade Run \"\" to create a fsimage for rollback. Run \"\" to check the status of the rollback image. Wait and re-run the command until the \"`Proceed with rolling upgrade`\" message is shown. Upgrade Active and Standby NNs Shutdown and upgrade NN2. Start NN2 as standby with the \"\" option. Failover from NN1 to NN2 so that NN2 becomes active and NN1 becomes standby. Shutdown and upgrade NN1. Start NN1 as standby with the \"\" option. Upgrade DNs Choose a small subset of datanodes (e.g. all datanodes under a particular rack). Run \"\" to shutdown one of the chosen datanodes. Run \"\" to check and wait for the datanode to shutdown. Upgrade and restart the datanode. Perform the above steps for all the chosen datanodes in the subset in parallel. Repeat the above steps until all datanodes in the cluster are" }, { "data": "Finalize Rolling Upgrade Run \"\" to finalize the rolling upgrade. In a federated cluster, there are multiple namespaces and a pair of active and standby NNs for each namespace. The procedure for upgrading a federated cluster is similar to upgrading a non-federated cluster except that Step 1 and Step 4 are performed on each namespace and Step 2 is performed on each pair of active and standby NNs, i.e. 
Prepare Rolling Upgrade for Each Namespace Upgrade Active and Standby NN pairs for Each Namespace Upgrade DNs Finalize Rolling Upgrade for Each Namespace For non-HA clusters, it is impossible to upgrade HDFS without downtime since it requires restarting the namenodes. However, datanodes can still be upgraded in a rolling manner. In a non-HA cluster, there are a NameNode (NN), a SecondaryNameNode (SNN) and many DataNodes (DNs). The procedure for upgrading a non-HA cluster is similar to upgrading an HA cluster except that Step 2 \"Upgrade Active and Standby NNs\" is changed to below: Upgrade NN and SNN* Shutdown SNN Shutdown and upgrade NN. Start NN with the \"\" option. Upgrade and restart SNN Downgrade and Rollback When the upgraded release is undesirable or, in some unlikely case, the upgrade fails (due to bugs in the newer release), administrators may choose to downgrade HDFS back to the pre-upgrade release, or rollback HDFS to the pre-upgrade release and the pre-upgrade state. Note that downgrade can be done in a rolling fashion but rollback cannot. Rollback requires cluster downtime. Note also that downgrade and rollback are possible only after a rolling upgrade is started and before the upgrade is terminated. An upgrade can be terminated by either finalize, downgrade or rollback. Therefore, it may not be possible to perform rollback after finalize or downgrade, or to perform downgrade after finalize. Downgrade Downgrade restores the software back to the pre-upgrade release and preserves the user data. Suppose time T is the rolling upgrade start time and the upgrade is terminated by downgrade. Then, the files created before or after T remain available in HDFS. The files deleted before or after T remain deleted in HDFS. A newer release is downgradable to the pre-upgrade release only if both the namenode layout version and the datanode layout version are not changed between these two releases. In an HA cluster, when a rolling upgrade from an old software release to a new software release is in progress, it is possible to downgrade, in a rolling fashion, the upgraded machines back to the old software release. Same as before, suppose NN1 and NN2 are respectively in active and standby states. Below are the steps for rolling downgrade without downtime: Downgrade DNs Choose a small subset of datanodes (e.g. all datanodes under a particular rack). Run \"\" to shutdown one of the chosen datanodes. Run \"\" to check and wait for the datanode to shutdown. Downgrade and restart the datanode. Perform the above steps for all the chosen datanodes in the subset in" }, { "data": "Repeat the above steps until all upgraded datanodes in the cluster are downgraded. Downgrade Active and Standby NNs Shutdown and downgrade NN2. Start NN2 as standby normally. Failover from NN1 to NN2 so that NN2 becomes active and NN1 becomes standby. Shutdown and downgrade NN1. Start NN1 as standby normally. Finalize Rolling Downgrade Run \"\" to finalize the rolling downgrade. Note that the datanodes must be downgraded before downgrading the namenodes since protocols may be changed in a backward compatible manner but not forward compatible, i.e. old datanodes can talk to the new namenodes but not vice versa. Rollback -- Rollback restores the software back to the pre-upgrade release but also reverts the user data back to the pre-upgrade state. Suppose time T is the rolling upgrade start time and the upgrade is terminated by rollback. 
The files created before T remain available in HDFS but the files created after T become unavailable. The files deleted before T remain deleted in HDFS but the files deleted after T are restored. Rollback from a newer release to the pre-upgrade release is always supported. However, it cannot be done in a rolling fashion. It requires cluster downtime. Suppose NN1 and NN2 are respectively in active and standby states. Below are the steps for rollback: Rollback HDFS Shutdown all NNs and DNs. Restore the pre-upgrade release in all machines. Start NN1 as Active with the \"\" option. Run `-bootstrapStandby' on NN2 and start it normally as standby. Start DNs with the \"`-rollback`\" option. Commands and Startup Options for Rolling Upgrade hdfs dfsadmin -rollingUpgrade <query|prepare|finalize> Execute a rolling upgrade action. Options: | | | | `query` | Query the current rolling upgrade status. | | `prepare` | Prepare a new rolling upgrade. | | `finalize` | Finalize the current rolling upgrade. | hdfs dfsadmin -getDatanodeInfo <DATANODEHOST:IPCPORT> Get the information about the given datanode. This command can be used for checking if a datanode is alive like the Unix `ping` command. hdfs dfsadmin -shutdownDatanode <DATANODEHOST:IPCPORT> [upgrade] Submit a shutdown request for the given datanode. If the optional `upgrade` argument is specified, clients accessing the datanode will be advised to wait for it to restart and the fast start-up mode will be enabled. When the restart does not happen in time, clients will timeout and ignore the datanode. In such case, the fast start-up mode will also be disabled. Note that the command does not wait for the datanode shutdown to complete. The \"\" command can be used for checking if the datanode shutdown is completed. hdfs namenode -rollingUpgrade <rollback|started> When a rolling upgrade is in progress, the `-rollingUpgrade` namenode startup option is used to specify various rolling upgrade options. Options: | | | | `rollback` | Restores the namenode back to the pre-upgrade release but also reverts the user data back to the pre-upgrade state. | | `started` | Specifies a rolling upgrade already started so that the namenode should allow image directories with different layout versions during startup. | WARN: downgrade options is obsolete. It is not necessary to start namenode with downgrade options explicitly." } ]
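As an illustration of the per-datanode upgrade loop described above, the `hdfs dfsadmin` commands documented in this page can be strung together in a small shell script. The hostnames, the IPC port, and the `upgrade_datanode_software` step below are placeholders for your own tooling, not part of the Hadoop distribution.

```bash
#!/usr/bin/env bash
# Sketch only: rolling-upgrade a chosen subset of datanodes, one at a time.
# Assumes "hdfs dfsadmin -rollingUpgrade prepare" was already run and
# "hdfs dfsadmin -rollingUpgrade query" reported "Proceed with rolling upgrade".

DATANODES=("dn1.example.com:9867" "dn2.example.com:9867")   # placeholder IPC endpoints

for DN in "${DATANODES[@]}"; do
  # Ask the datanode to shut down in upgrade (fast start-up) mode.
  hdfs dfsadmin -shutdownDatanode "$DN" upgrade

  # The command returns immediately, so poll until the datanode is really down.
  while hdfs dfsadmin -getDatanodeInfo "$DN" >/dev/null 2>&1; do
    sleep 5
  done

  # Placeholder for whatever installs and restarts the new software on that host.
  upgrade_datanode_software "$DN"   # hypothetical helper, not a Hadoop command
done

# After all datanodes and namenodes run the new release:
#   hdfs dfsadmin -rollingUpgrade finalize
```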
{ "category": "App Definition and Development", "file_name": "design_of_gae.md", "project_name": "GraphScope", "subcategory": "Database" }
[ { "data": "In GraphScope, Graph Analytics Engine (GAE) is responsible for handling various graph analytics algorithms. GAE in GraphScope derives from , a graph processing system proposed at SIGMOD 2017. GRAPE differs from prior systems in its ability to parallelize sequential graph algorithms as a whole. Different from other parallel graph processing systems, which need to recast the entire algorithm into a new model, in GRAPE sequential algorithms can be easily plugged in with only minor changes and get parallelized to handle large graphs efficiently. In addition to the ease of programming, GRAPE is designed to be highly efficient and flexible, to cope with the scale, variety and complexity of real-life graph applications. GAE has three main components: graph storage, execution framework and algorithm library. Next, we give an overview of each of them below. GAE, as well as the other execution engines of GraphScope, works on top of a unified graph storage. The graph storage consists of different graph formats with different features, e.g., some support real-time graph updates while others treat graph data as immutable once created; some store graph data in memory for better performance, while others support persistent storage. Although there exist diverse types of graph storage, GraphScope offers a unified interface for graph storage, and thus GAE does not care about how each type of graph storage is implemented. Please check out for more details. At the core of GAE is its execution framework, which has the following features in order to handle efficient graph analytics execution over distributed and large-scale graph data. There exist many programming models in various graph analytics systems, and GAE supports both the PIE model and the vertex-centric model (Pregel). Different from other programming models, which need to recast the entire algorithm into a new model, in PIE the execution of existing sequential algorithms can be automatically parallelized with only minor" }, { "data": "changes. To be more specific, as shown in the following figure, in the PIE model users only need to provide three functions: (1) PEval, a function that, given a query, computes the answer on a local graph; (2) IncEval, an incremental function that computes changes to the old output by treating incoming messages as updates; and (3) Assemble, which collects partial answers and combines them into a complete answer. After that, GAE auto-parallelizes the graph analytics tasks across a cluster of workers. For more details of the PIE model, refer to . Multi-language SDKs are provided by GAE. Users can choose to write their own algorithms in C++, Java or Python. With Python, users can still expect high performance. GAE integrates a compiler built with Cython. It can generate efficient native code from Python algorithms behind the scenes, and dispatch the code to the GraphScope cluster for execution. The SDKs further lower the total cost of ownership of graph analytics. GAE achieves high performance through a highly optimized analytical runtime based on libgrape-lite. Many optimization techniques, such as pull/push dynamic switching, cache-efficient memory layout, and pipelining, are employed in the runtime. It performs well in the LDBC Graph Analytics Benchmark, and outperforms other state-of-the-art graph systems. GAE is designed to be highly efficient and flexible, to cope with the scale, variety and complexity of real-life graph analytics applications. 
GAE of GraphScope provides 20 graph analytics algorithms as built-in algorithms, and users can directly invoke them. The full list of built-in algorithms is: `sssp(src)` `pagerank()` `lpa()` `k_core()` `k_shell()` `hits()` `dfs(src)` `bfs(src)` `voterank()` `clustering()` `all_pairs_shortest_path_length()` `attribute_assortativity()` `average_degree_assortativity()` `degree_assortativity()` `betweenness_centrality()` `closeness_centrality()` `degree_centrality()` `eigenvector_centrality()` `katz_centrality()` `sampling_path()` In addition, GraphScope is compatible with NetworkX APIs, and thus diverse kinds of NetworkX algorithms can also be directly invoked by users. In total, over 100 built-in graph analytical algorithms can be directly executed over GraphScope, without any development effort." } ]
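As a rough sketch of how these built-in algorithms are invoked from the GraphScope Python client: the session options, file paths, and column layout below are assumptions made for illustration and are not taken from the original document.

```python
# Minimal sketch, not a verbatim GraphScope tutorial: paths and labels are hypothetical.
import graphscope

sess = graphscope.session(cluster_type="hosts")   # local session; a k8s session also works

# Build a property graph from (hypothetical) vertex and edge CSV files.
g = sess.g()
g = g.add_vertices("/tmp/person.csv", label="person")
g = g.add_edges("/tmp/knows.csv", label="knows")

# Invoke two of the built-in analytical algorithms listed above.
pr = graphscope.pagerank(g)       # returns a context holding per-vertex results
sp = graphscope.sssp(g, src=6)    # single-source shortest paths from vertex 6

print(pr.to_dataframe({"id": "v.id", "rank": "r"}).head())
sess.close()
```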
{ "category": "App Definition and Development", "file_name": "kbcli_migration_templates.md", "project_name": "KubeBlocks by ApeCloud", "subcategory": "Database" }
[ { "data": "title: kbcli migration templates List migration templates. ``` kbcli migration templates [NAME] [flags] ``` ``` kbcli migration templates kbcli migration templates mytemplate kbcli migration templates mytemplate -o yaml kbcli migration templates mytemplate -o json kbcli migration templates mytemplate -o wide ``` ``` -A, --all-namespaces If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace. -h, --help help for templates -o, --output format prints the output in the specified format. Allowed values: table, json, yaml, wide (default table) -l, --selector string Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints. --show-labels When printing, show all labels as the last column (default hide labels column) ``` ``` --as string Username to impersonate for the operation. User could be a regular user or a service account in a namespace. --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation. --cache-dir string Default cache directory (default \"$HOME/.kube/cache\") --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --disable-compression If true, opt-out of response compression for all requests to the server --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to the kubeconfig file to use for CLI requests. --match-server-version Require server version to match client version -n, --namespace string If present, the namespace scope for this CLI request --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -s, --server string The address and port of the Kubernetes API server --tls-server-name string Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use ``` - Data migration between two data sources." } ]
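A couple of further invocations combining the flags above; the template name and label selector are made up for illustration only:

```bash
# List migration templates in all namespaces, filter by a (hypothetical) label,
# and show the labels column in the output.
kbcli migration templates -A -l app.kubernetes.io/managed-by=kubeblocks --show-labels

# Inspect a single (hypothetical) template in YAML form.
kbcli migration templates apecloud-mysql2mysql -o yaml
```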
{ "category": "App Definition and Development", "file_name": "syntax-diagrams.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: Edit syntax diagrams headerTitle: Edit syntax diagrams linkTitle: Syntax diagrams description: Edit syntax diagrams headcontent: How to edit syntax diagrams menu: preview: identifier: docs-syntax-diagrams parent: docs-edit weight: 3000 rightNav: hideH4: true type: docs This document describes how to make changes to YSQL API or add new ones. {{< note >}} The following describes how to create and edit the syntax grammar and diagrams for the . The YCQL documentation uses a different method. Take advice from colleagues if you need to work on YCQL diagrams. {{< /note >}} Before editing syntax diagrams, you should understand some basic terminology. A syntax rule is the formal definition of the grammar of a SQL statement, or a component of a SQL statement. Every syntax rule is defined textually in the single . A syntax rule is specified using . The following is an example of the syntax rule for the statement: ```output.ebnf preparestatement ::= 'PREPARE' name [ '(' datatype { ',' data_type } ')' ] 'AS' statement ; |-- Rule Name --| |-- Rule Definition --> ``` When this is presented on the Grammar tab in the , it is transformed into the PostgreSQL notation. This is widely used in database documentation. But it is not suitable as input for a program that generate diagrams. A syntax rule has two parts: Rule name - This is left of `::=` in the rule. Names conventionally use lower case with underscores. Rule definition - This is right of `::=` in the rule. The definition has three kinds of element: EBNF syntax elements like these: `[ ] ( ) ' { } ,` Keywords and punctuation that are part of the grammar that is being defined. Keywords are conventionally spelled in upper case. Each individual keyword is surrounded by single quotes. This conveys the meaning that whitespace between these, and other elements in the target grammar, is insignificant. Punctuation characters in the target grammar are surrounded by single quotes to distinguish them from punctuation characters in the EBNF grammar. References to other rule names. These become clickable links in the generated diagrams (but not in the generated grammars). Rule names are spelled in lower case. Notice how underscore is used between the individual English words. The single space before the `;` terminator is significant. The result, in the published documentation, of a that is contained in the . The syntax diagram and the syntax rule bear a one-to-one mutual relationship. The of the account of the statement provides a short, but sufficient, example. The syntax diagram appears as a (member of a) tabbed pair which gives the reader the choice to see a syntax rule as either the \"Grammar\" form (in the syntax used in the PostgreSQL documentation and the documentation for several other databases) or the \"Diagram\" form (a so-called \"railroad diagram\" that again is used commonly in the documentation for several other databases). A set of syntax rule names grouped and displayed together on a content page. The `ysqlgrammar.ebnf` file located at `/docs/content/preview/api/ysql/syntaxresources/` holds the definition, written in EBNF notation, of every . This file in all its entirety is manually typed. You create a syntax rule by adding it to this file. The order in which the rules are specified in this file is reproduced in the . This, in turn, reflects decisions made by authors about what makes a convenient reading" }, { "data": "Try to spot what informs the present ordering and insert new rules in a way that respects this. 
In the limit, insert a new rule at the end as the last new properly-defined rule and before this comment: ```ebnf ( Supporting rules ) ``` The grammar contains every that is generated from all of the that are found in the . This file is automatically generated. are generated dynamically. The diagram/grammar pair are automatically populated based on the . There are typically two workflows: Add or modify syntax rules. You do this by editing the . Specify the syntax diagrams to be displayed on a page. You do this by adding an `ebnf` block to the page. To add a new syntax rule or modify an existing one, edit the . If you are developing on a local machine, you need to restart Hugo after modifying the for the changes to be reflected on the local preview. Note that regenerating diagrams can take upwards of a minute or more, and longer if there are errors. Check the syntax of your rule carefully; refer to for examples of errors you can encounter. To specify the syntax diagrams to display on a page, use the `ebnf` shortcode and provide the rule name of each that you want included as a separate line ending with a comma. For example, for , to show `selectstart` and `windowclause` on the page, you would add the following syntax diagram set: ```ebnf {{%/ebnf/%}} select_start, window_clause {{%//ebnf/%}} ``` This would add the grammar and syntax tabs as follows: {{%ebnf%}} select_start, window_clause {{%/ebnf%}} The syntax and grammar diagrams are generated in the same order as included in the `ebnf` shortcode. Suppose that a syntax rule includes a reference to another syntax rule. If the referenced syntax rule is included in the same , then the name of the syntax rule in the referring becomes a link to the syntax rule in that same syntax diagram set. Otherwise the generated link target of the referring rule is in the . The way that this link is spelled depends on the location of the `.md` file that includes the generated syntax diagram. In the case you have multiple syntax diagram sets on the same page and would like to cross-reference each other on the same page, specify the local rules that need to be cross referenced as comma separated values in the `localrefs` argument of the `ebnf` shortcode. For example, ```ebnf {{%/ebnf localrefs=\"window_definition,frame_clause\"/%}} ``` This will ensure that any reference to `windowdefinition` or `frameclause` in this syntax diagram set will link to another syntax diagram set on the same page and not to the grammar diagrams file. In at least one case, the use of semantically redundant notation in the conveys the author's intent about how the diagram should be constructed. Here's an example of semantically significant use of `( ... )` in algebra: &nbsp;&nbsp;`(a/b) - c` differs in meaning from `a/(b - c)` And while the rules do give `a/b - c` an unambiguous meaning (advertised by the absence of space surrounding the `/` operator), it's generally considered unhelpful to write this, thereby forcing the reader to rehearse the rules in order to discern the meaning. Rather, the semantically redundant orthography `(a/b) - c` is preferred. In EBNF, `( ... )` plays a similar role as it does in algebra. In some situations, just which subset of items is grouped changes the" }, { "data": "But in some situations (like `a + b + c` in algebra) the use of `( ... )` conveys no semantics. Look for this at the end of the : ```ebnf (* Notice that the \"demo-2\" rule uses ( ... ) redundantly. The two rules are semantically identical but they produce the different diagrams. 
The diagrams do express the same meaning but they are drawn using different conventions. Uncomment the two rule definitions and look at the end of the grammar diagrams file to see the two semantically equivalent, but differently drawn, diagrams. Make sure that you comment these out again before creating a Pull request. *) (* demo-1-irrelevant-for-ysql-syntax ::= ( a 'K' b { ',' a 'K' b } ) ; demo-2-irrelevant-for-ysql-syntax ::= ( ( a 'K' b ) { ',' ( a 'K' b ) } ) ; *) ``` Look for this syntax rule in the : ```ebnf declare = 'DECLARE' cursor_name [ 'BINARY' ] [ 'INSENSITIVE' ] [ [ 'NO' ] 'SCROLL' ] \\ 'CURSOR' [ ( 'WITH' | 'WITHOUT' ) 'HOLD' ] 'FOR' subquery ; ``` Remove the single quotes that surround `'INSENSITIVE'`. This will change its status in EBNF's grammar from keyword to syntax rule. Most people find it hard to spot such a typo just by proofreading. Now restart Hugo. You won't see any errors reported on stderr. But if you look carefully at the stdout report, you'll see this warning: ```bash WARNING: Undefined rules referenced in rule 'declare': [INSENSITIVE] ``` It's clear what it means. And you must fix it because otherwise a reader of the YSQL documentation will see a semantic errorand will wonder what on earth the insensitive rule is. Fix the problem immediately and restart Hugo. Then see a clean report again. This exercise tells you that it's a very good plan to restart Hugo after every edit to any single grammar rule. You'll know what rule you just edited and so you'll immediately know where to look for the error. Look for this in the : ```ebnf savepoint_rollback = ( 'ROLLBACK' ['WORK' | 'TRANSACTION' ] 'TO' [ 'SAVEPOINT' ] name ) ; ``` Edit it to change, say, the first `]` character to `}`. It's easy to do this typo because these two characters are on the same key and it's hard to see the difference when you use a small font. Now restart Hugo. You'll see this on stderr: ```java Exception in thread \"main\" java.lang.IllegalStateException: This element must not be nested and should have been processed before entering generation. at net.nextencia.rrdiagram.grammar.rrdiagram.RRBreak.computeLayoutInfo(RRBreak.java:19) at net.nextencia.rrdiagram.grammar.rrdiagram.RRSequence.computeLayoutInfo(RRSequence.java:34) at net.nextencia.rrdiagram.grammar.rrdiagram.RRChoice.computeLayoutInfo(RRChoice.java:30) at net.nextencia.rrdiagram.grammar.rrdiagram.RRSequence.computeLayoutInfo(RRSequence.java:34) at net.nextencia.rrdiagram.grammar.rrdiagram.RRDiagram.toSVG(RRDiagram.java:333) at net.nextencia.rrdiagram.grammar.rrdiagram.RRDiagramToSVG.convert(RRDiagramToSVG.java:30) at net.nextencia.rrdiagram.Main.regenerateReferenceFile(Main.java:139) at net.nextencia.rrdiagram.Main.regenerateFolder(Main.java:72) at net.nextencia.rrdiagram.Main.main(Main.java:54) ``` The error will cause the notorious Hugo black screen in the browser. It's best to \\<ctrl\\>-C Hugo now. This is hardly user-friendly! You'll also see several warnings on stdout. Almost all of these are basic consequences of the actual problem and so tell you nothing. Here's the significant information: ```sql WARNING: Exception occurred while exporting rule savepoint_rollback WARNING: savepoint_rollback = 'ROLLBACK' [ 'WORK' | 'TRANSACTION' } 'TO' [ 'SAVEPOINT' ] name ) ; ... ``` You can see that the offending `}` is mentioned. Fix it immediately, restart Hugo, and then see a clean report again. This exercise, too, tells you that you should restart Hugo after every edit to a grammar rule. 
Here too, you'll know what rule you just edited and so will immediately know where to look for the error." } ]
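Putting the two workflows together: to add a brand-new rule you would append something like the following to the grammar file (the rule name and its contents here are invented purely to illustrate the notation):

```ebnf
(* Hypothetical rule, shown only to illustrate the notation: *)
hypothetical_rule ::= 'KEYWORD' name [ '(' data_type { ',' data_type } ')' ] ;
```

Then list `hypothetical_rule,` inside the `ebnf` shortcode of the page that should display it, restart Hugo, and confirm that stdout shows no warnings about undefined rule references before raising a pull request.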
{ "category": "App Definition and Development", "file_name": "buildvariants.md", "project_name": "MongoDB", "subcategory": "Database" }
[ { "data": "This document describes build variants (a.k.a. variants, or builds, or buildvariants) that are used in `mongodb-mongo-*` projects. To know more about build variants, please refer to the section of the Evergreen wiki. Build variant configuration files are in `etc/evergreenymlcomponents/variants` directory. They are merged into `etc/evergreen.yml` and `etc/evergreen_nightly.yml` with Evergreen's feature. Inside `etc/evergreenymlcomponents/variants` directory there are more directories, which are in most cases platform names (e.g. amazon, rhel etc.) or build variant group names (e.g. sanitizer etc.). Be aware that some of these files could be also used or re-used to be merged into `etc/system_perf.yml` which is used for `sys-perf` project. `mongodb-mongo-master` evergreen project uses `etc/evergreen.yml` and contains all build variants for development, including all feature-specific, patch build required, and suggested variants. `mongodb-mongo-master-nightly` evergreen project uses `etc/evergreen_nightly.yml` and contains build variants for public nightly builds. \"Required\" build variants are defined as any build variant with a `!` at the front of its display name in Evergreen. These build variants also have `required` tag. \"Suggested\" build variants are defined as any build variant with a `*` at the front of its display name in Evergreen. These build variants also have `suggested` tag. In each of platform or build variant group directory there can be these files: `test_dev.yml` these files are merged into `etc/evergreen.yml` which is used for `mongodb-mongo-master` project on master branch after branching on all new branches these files are merged into `etc/evergreen_nightly.yml` which is used for a new branch `mognodb-mongo-vX.Y` project `testdevmasterandltsbranchesonly.yml` these files are merged into `etc/evergreen.yml` which is used for `mongodb-mongo-master` project on master branch after branching for LTS release (v7.0, v8.0 etc.) on a new branch these files are merged into `etc/evergreen_nightly.yml` which is used for a new branch `mognodb-mongo-vX.Y` project important: all tests that are running on these build variants will NOT run on a new Rapid release (v7.1, v7.2, v7.3, v8.1, v8.2, v8.3 etc.) branch projects `testdevmasterbranchonly.yml` these files are merged into `etc/evergreen.yml` which is used for `mongodb-mongo-master` project on master branch after branching on all new branches these files are NOT used important: all tests that are running on these build variants will NOT run on a new branch `mongodb-mongo-vX.Y` project `test_release.yml` these files are merged into `etc/evergreen_nightly.yml` which is used for `mongodb-mongo-master-nightly` project on master branch after branching on all new branches these files are merged into `etc/evergreen_nightly.yml` which is used for a new branch `mognodb-mongo-vX.Y` project `testreleasemasterandltsbranchesonly.yml` these files are merged into `etc/evergreen_nightly.yml` which is used for `mongodb-mongo-master-nightly` project on master branch after branching for LTS release (v7.0, v8.0 etc.) on a new branch these files are merged into `etc/evergreen_nightly.yml` which is used for a new branch `mognodb-mongo-vX.Y` project important: all tests that are running on these build variants will NOT run on a new Rapid release (v7.1, v7.2, v7.3, v8.1, v8.2, v8.3 etc.) 
branch projects `testreleasemasterbranchonly.yml` these files are merged into `etc/evergreen_nightly.yml` which is used for `mongodb-mongo-master-nightly` project on master branch after branching on all new branches these files are NOT used important: all tests that are running on these build variants will NOT run on a new branch `mongodb-mongo-vX.Y` project" } ]
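To make the file layout more concrete, an entry inside one of these variant files typically looks roughly like the following Evergreen build variant definition; every value below is a placeholder rather than a copy of the real project configuration:

```yaml
# Hypothetical entry in variants/<platform>/test_dev.yml (all values are placeholders).
buildvariants:
  - name: example-platform-dev
    display_name: "* Example Platform DEBUG"   # a leading '*' marks a suggested variant
    tags: ["suggested"]                        # required variants use '!' and the 'required' tag
    run_on:
      - example-distro-small
    tasks:
      - name: compile_dist_test
      - name: example_task_group
```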
{ "category": "App Definition and Development", "file_name": "adl_bridging.md", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "+++ title = \"ADL bridging\" description = \"\" weight = 20 tags = [ \"adl-bridging\"] +++ {{% notice note %}} In Outcome v2.2 the ADL-based event hooks were replaced with policy-based event hooks (next page). The code in this section is still valid in v2.2 onwards, it's just that ADL is no longer used to find the hooks. {{% /notice %}} In a previous section, we used the `failure_info` type to create the ADL bridge into the namespace where the ADL discovered function was to be found. Here we do the same, but more directly by creating a thin clone of `std::error_code` into the local namespace. This ensures that this namespace will be searched by the compiler when discovering the event hooks (Outcome v2.1 and earlier only). {{% snippet \"errorcodeextended.cpp\" \"errorcodeextended2\" %}} For convenience, we template alias local copies of `result` and `outcome` in this namespace bound to the ADL bridging `error_code`." } ]
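The `errorcodeextended.cpp` snippet referenced above is not reproduced here, but the general shape of the technique, assuming Outcome v2.1-style ADL hooks, is roughly the sketch below; treat the actual tutorial source as authoritative.

```cpp
// Sketch only: a thin local clone of std::error_code so that this namespace is
// searched by ADL when Outcome (v2.1 and earlier) looks up its event hooks.
#include <system_error>
#include <boost/outcome.hpp>   // assumption: Boost.Outcome (or standalone Outcome) is available

namespace outcome = BOOST_OUTCOME_V2_NAMESPACE;

namespace my_app
{
  // Thin clone: adds nothing, but lives in my_app so the compiler also searches
  // my_app when discovering hook_result_construction(...) and friends via ADL.
  struct error_code : std::error_code
  {
    using std::error_code::error_code;
    error_code() = default;
    error_code(std::error_code ec) : std::error_code(ec) {}
  };

  // Convenience aliases binding result/outcome to the ADL-bridging error_code.
  template <class T> using result    = outcome::result<T, error_code>;
  template <class T> using outcome_t = outcome::outcome<T, error_code>;
}
```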
{ "category": "App Definition and Development", "file_name": "backup_barmanobjectstore.md", "project_name": "CloudNativePG", "subcategory": "Database" }
[ { "data": "CloudNativePG natively supports online/hot backup of PostgreSQL clusters through continuous physical backup and WAL archiving on an object store. This means that the database is always up (no downtime required) and that Point In Time Recovery is available. The operator can orchestrate a continuous backup infrastructure that is based on the tool. Instead of using the classical architecture with a Barman server, which backs up many PostgreSQL instances, the operator relies on the `barman-cloud-wal-archive`, `barman-cloud-check-wal-archive`, `barman-cloud-backup`, `barman-cloud-backup-list`, and `barman-cloud-backup-delete` tools. As a result, base backups will be tarballs. Both base backups and WAL files can be compressed and encrypted. For this, it is required to use an image with `barman-cli-cloud` included. You can use the image `ghcr.io/cloudnative-pg/postgresql` for this scope, as it is composed of a community PostgreSQL image and the latest `barman-cli-cloud` package. !!! Important Always ensure that you are running the latest version of the operands in your system to take advantage of the improvements introduced in Barman cloud (as well as improve the security aspects of your cluster). A backup is performed from a primary or a designated primary instance in a `Cluster` (please refer to for more information about designated primary instances), or alternatively on a . If you are looking for a specific object store such as , , , or , or a compatible provider, please refer to . !!! Important Retention policies are not currently available on volume snapshots. CloudNativePG can manage the automated deletion of backup files from the backup object store, using retention policies based on the recovery window. Internally, the retention policy feature uses `barman-cloud-backup-delete` with `--retention-policy RECOVERY WINDOW OF {{ retention policy value }} {{ retention policy unit }}`. For example, you can define your backups with a retention policy of 30 days as follows: ```yaml apiVersion: postgresql.cnpg.io/v1 kind: Cluster [...] spec: backup: barmanObjectStore: destinationPath: \"<destination path here>\" s3Credentials: accessKeyId: name: aws-creds key: ACCESSKEYID secretAccessKey: name: aws-creds key: ACCESSSECRETKEY retentionPolicy: \"30d\" ``` !!! Note \"There's more ...\" The recovery window retention policy is focused on the concept of Point of Recoverability (`PoR`), a moving point in time determined by `current time - recovery window`. The first valid backup is the first available backup before `PoR` (in reverse chronological order). CloudNativePG must ensure that we can recover the cluster at any point in time between `PoR` and the latest successfully archived WAL file, starting from the first valid backup. Base backups that are older than the first valid backup will be marked as obsolete and permanently removed after the next backup is" }, { "data": "CloudNativePG by default archives backups and WAL files in an uncompressed fashion. However, it also supports the following compression algorithms via `barman-cloud-backup` (for backups) and `barman-cloud-wal-archive` (for WAL files): bzip2 gzip snappy The compression settings for backups and WALs are independent. See the and sections in the API reference. It is important to note that archival time, restore time, and size change between the algorithms, so the compression algorithm should be chosen according to your use case. 
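For example, to the best of our understanding of the `barmanObjectStore` API, independent compression settings for base backups and WAL files can be declared roughly as follows (credentials and other details omitted):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
# [...]
spec:
  backup:
    barmanObjectStore:
      destinationPath: "<destination path here>"
      # credentials omitted for brevity
      data:
        compression: snappy   # applied by barman-cloud-backup to base backups
      wal:
        compression: gzip     # applied by barman-cloud-wal-archive to WAL files
```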
The Barman team has performed an evaluation of the performance of the supported algorithms for Barman Cloud. The following table summarizes a scenario where a backup is taken on a local MinIO deployment. The Barman GitHub project includes a . | Compression | Backup Time (ms) | Restore Time (ms) | Uncompressed size (MB) | Compressed size (MB) | Approx ratio | |-||-||--|--| | None | 10927 | 7553 | 395 | 395 | 1:1 | | bzip2 | 25404 | 13886 | 395 | 67 | 5.9:1 | | gzip | 116281 | 3077 | 395 | 91 | 4.3:1 | | snappy | 8134 | 8341 | 395 | 166 | 2.4:1 | Barman 2.18 introduces support for tagging backup resources when saving them in object stores via `barman-cloud-backup` and `barman-cloud-wal-archive`. As a result, if your PostgreSQL container image includes Barman with version 2.18 or higher, CloudNativePG enables you to specify tags as key-value pairs for backup objects, namely base backups, WAL files and history files. You can use two properties in the `.spec.backup.barmanObjectStore` definition: `tags`: key-value pair tags to be added to backup objects and archived WAL file in the backup object store `historyTags`: key-value pair tags to be added to archived history files in the backup object store The excerpt of a YAML manifest below provides an example of usage of this feature: ```yaml apiVersion: postgresql.cnpg.io/v1 kind: Cluster [...] spec: backup: barmanObjectStore: [...] tags: backupRetentionPolicy: \"expire\" historyTags: backupRetentionPolicy: \"keep\" ``` You can append additional options to the `barman-cloud-backup` command by using the `additionalCommandArgs` property in the `.spec.backup.barmanObjectStore.data` section. This property is a list of strings that will be appended to the `barman-cloud-backup` command. For example, you can use the `--read-timeout=60` to customize the connection reading timeout. For additional options supported by `barman-cloud-backup` you can refer to the official barman documentation . If an option provided in `additionalCommandArgs` is already present in the declared options in the `barmanObjectStore` section, the extra option will be ignored. The following is an example of how to use this property: ```yaml apiVersion: postgresql.cnpg.io/v1 kind: Cluster [...] spec: backup: barmanObjectStore: [...] data: additionalCommandArgs: \"--min-chunk-size=5MB\" \"--read-timeout=60\" ```" } ]
{ "category": "App Definition and Development", "file_name": "ssl-x509.md", "project_name": "ClickHouse", "subcategory": "Database" }
[ { "data": "slug: /en/operations/external-authenticators/ssl-x509 title: \"SSL X.509 certificate authentication\" import SelfManaged from '@site/docs/en/snippets/selfmanagedonlynoroadmap.md'; <SelfManaged /> enables mandatory certificate validation for the incoming connections. In this case, only connections with trusted certificates can be established. Connections with untrusted certificates will be rejected. Thus, certificate validation makes it possible to uniquely authenticate an incoming connection. The `Common Name` field of the certificate is used to identify the connected user. This allows multiple certificates to be associated with the same user. Additionally, reissuing and revoking certificates does not affect the ClickHouse configuration. To enable SSL certificate authentication, a list of `Common Name`'s for each ClickHouse user must be specified in the settings file `users.xml`: Example ```xml <clickhouse> <!-- ... --> <users> <user_name> <ssl_certificates> <common_name>host.domain.com:example_user</common_name> <common_name>host.domain.com:example_user_dev</common_name> <!-- More names --> </ssl_certificates> <!-- Other settings --> </user_name> </users> </clickhouse> ``` For the SSL to work correctly, it is also important to make sure that the parameter is configured properly." } ]
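If users are managed through SQL statements instead of `users.xml`, the same mapping can likely be expressed with the `ssl_certificate` authentication type; this is a sketch, so verify the exact syntax against your ClickHouse version:

```sql
-- Assumes access_management is enabled for the administrative user running this.
CREATE USER example_user
IDENTIFIED WITH ssl_certificate
CN 'host.domain.com:example_user', 'host.domain.com:example_user_dev';
```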
{ "category": "App Definition and Development", "file_name": "v20.5.4.40-stable.md", "project_name": "ClickHouse", "subcategory": "Database" }
[ { "data": "Backported in : Fix \" Index not used for IN operator with literals\", performance regression introduced around v19.3. (). Backported in : Fix performance for selects with `UNION` caused by wrong limit for the total number of threads. Fixes . (). Backported in : Fixed the behaviour when `SummingMergeTree` engine sums up columns from partition key. Added an exception in case of explicit definition of columns to sum which intersects with partition key columns. This fixes . (). Backported in : Fixed the behaviour when during multiple sequential inserts in `StorageFile` header for some special types was written more than once. This fixed . (). Backported in : kafka: fix SIGSEGV if there is an message with error in the middle of the batch. (). Backported in : Fix TOTALS/ROLLUP/CUBE for aggregate functions with `-State` and `Nullable` arguments. This fixes . (). Backported in : Allow to CLEAR column even if there are depending DEFAULT expressions. This fixes . (). Backported in : Avoid exception when negative or floating point constant is used in WHERE condition for indexed tables. This fixes . (). Backported in : Fix crash in JOIN with dictionary when we are joining over expression of dictionary key: `t JOIN dict ON expr(dict.id) = t.id`. Disable dictionary join optimisation for this case. (). Backported in : Fixed performance issue, while reading from compact parts. (). Backported in : Fixing race condition in live view tables which could cause data duplication. (). Backported in : Now ClickHouse will recalculate checksums for parts when file `checksums.txt` is absent. Broken since . (). Backported in : Fix error `Output of TreeExecutor is not sorted` for `OPTIMIZE DEDUPLICATE`. Fixes . (). Backported in : Fix possible `Pipeline stuck` error for queries with external sorting. Fixes . (). Backported in : Better exception message in disk access storage. (). Backported in : Exception `There is no supertype...` can be thrown during `ALTER ... UPDATE` in unexpected cases (e.g. when subtracting from UInt64 column). This fixes . This fixes . (). Backported in : Add support for function `if` with `Array(UUID)` arguments. This fixes . (). Backported in : Fix SIGSEGV in StorageKafka when broker is unavailable (and not only). (). Backported in : fixes fix bloom filter index with const expression. (). Backported in : fixes allow push predicate when subquery contains with clause. (). Backported in : Fix memory tracking for inputformatparallel_parsing (by attaching thread to group). (). Backported in : Fix performance with large tuples, which are interpreted as functions in `IN` section. The case when user write `WHERE x IN tuple(1, 2, ...)` instead of `WHERE x IN (1, 2, ...)` for some obscure reason. (). Backported in : Corrected mergewithttl_timeout logic which did not work well when expiration affected more than one partition over one time interval. (Authored by @excitoon). (). Backported in : Fix `Block structure mismatch` error for queries with `UNION` and `JOIN`. Fixes . (). Backported in : Fix crash which was possible for queries with `ORDER BY` tuple and small `LIMIT`. Fixes . (). Backported in : Fix wrong index analysis with functions. It could lead to pruning wrong parts, while reading from `MergeTree` tables. Fixes . Fixes . ()." } ]
{ "category": "App Definition and Development", "file_name": "runtime.md", "project_name": "Druid", "subcategory": "Database" }
[ { "data": "<!-- ~ Licensed to the Apache Software Foundation (ASF) under one ~ or more contributor license agreements. See the NOTICE file ~ distributed with this work for additional information ~ regarding copyright ownership. The ASF licenses this file ~ to you under the Apache License, Version 2.0 (the ~ \"License\"); you may not use this file except in compliance ~ with the License. You may obtain a copy of the License at ~ ~ http://www.apache.org/licenses/LICENSE-2.0 ~ ~ Unless required by applicable law or agreed to in writing, ~ software distributed under the License is distributed on an ~ \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY ~ KIND, either express or implied. See the License for the ~ specific language governing permissions and limitations ~ under the License. --> This section explains how the various configuration pieces come together to run tests. See also: This module has a number of folders that are used by all tests: `compose`: A collection of Docker Compose scripts that define the basics of the cluster. Each test \"inherits\" the bits that it needs. `compose/environment-configs`: Files which define, as environment variables, the runtime properties for each service in the cluster. (See below for details.) `assets`: The `log4j2.xml` file used by images for logging. The container itself is a bit of a hybrid. The Druid distribution, along with some test-specific extensions, is reused. The container also contains libraries for Kafka, MySQL and MariaDB. Druid configuration is passed into the container as environment variables, and then converted to a `runtime.properties` file by the container launch script. Though a bit of a mechanism, it does have one important advantage over the usual Druid configs: we can support inheritance and overrides. The various `<service>.env` files provide the standard configurations. Test-specific Docker Compose files can modify any setting. The container mounts a shared volume, defined in the `target/shared` directory of each test" }, { "data": "This volume can provide extra libraries and class path items. The one made available by default is `log4j2.xml`, but tests can add more as needed. Container \"output\" also goes into the shared folder: logs, \"cold storage\" and so on. Each container exposes the Java debugger on port 8000, mapped to a different host port for each container. Each container exposes the usual Druid ports so you can work with the container as you would a local cluster. Two handy tools are the Druid console and the scriptable . Tests run using the Maven plugin which is designed for integration tests. The Maven phases are: `pre-integration-test`: Starts the test cluster with `cluster.sh up` using Docker Compose. `integration-test`: Runs tests that start or end with `IT`. `post-integration-test`: Stops the test cluster using `cluster.sh down` `verify`: Checks the result of the integration tests. See for JUnit setup with failsafe. The basic process for running a test group (sub-module) is: Cluser startup builds a `target/shared` directory with items to be mounted into the containers, such as the `log4j2.xml` file, sample data, etc. The shared directory also holds log files, Druid persistent storage, the metastore (MySQL) DB, etc. See `test-image/README.md` for details. The test is configured via a `druid-cluster/compose.yaml` file. This file defines the services to run and their configuration. 
The `cluster.sh up` script builds the shared directory, loads the env vars defined when the image was created, and starts the cluster. Tests run on the local host within JUnit. The `Initialization` class loads the cluster configuration (see below), optionally populates the Druid metadata storage, and is used to inject instances into the test. The individual tests run. The `cluster.sh down` script shuts down the cluster. `cluster.sh` uses the generated `test-image/target/env.sh` for versions and other environment variables. This ensures that tests run with the same versions used to build the image. It also simplifies the Maven boilerplate to be copy/pasted into each test sub-project." } ]
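A rough manual equivalent of those Maven phases is shown below; the module and category names are placeholders, so consult the project README for the exact profiles and properties:

```bash
# Hypothetical invocation -- exact module, profile, and category names vary by test group.
./cluster.sh up <category>          # pre-integration-test: start the Docker Compose cluster
mvn -pl <test-module> failsafe:integration-test failsafe:verify   # run IT* tests, then check results
./cluster.sh down <category>        # post-integration-test: stop the cluster
```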
{ "category": "App Definition and Development", "file_name": "009-pulsar-connector.md", "project_name": "Hazelcast IMDG", "subcategory": "Database" }
[ { "data": "title: 009 - Apache Pulsar Connector description: Jet source and sink for Apache Pulsar. Since: Contrib Module Apache Pulsar connector enables Jet to ingest data from Pulsar topics into Jet pipelines and publish messages to Pulsar topics. It uses the Pulsar client library to read and publish messages from/to a Pulsar topic. Pulsar client library provides two different ways of reading messages from a topic. They are namely Consumer API and Reader API that have major differences. Pulsar connectors have benefits of both Consumer API and Reader API. Besides, this Pulsar client library has an API called the Producer API. The Pulsar connector enables users to use the Pulsar topic as a sink in Jet pipelines. In the implementation of the Apache Pulsar sources, both Consumer API and Reader API are used. Each API is advantageous in different aspects compared to one another. By using the Consumer API, the distributed source is created without dealing with the partition mapping - it is handled below this abstraction level. But, since this API does not return the cursor for the last consumed message, it is incapable of assuring fault-tolerance. This issue will be discussed in the upcoming section. As opposed to Consumer API, the Reader API helps us to build a source that consumes through a Pulsar topic with an exactly-once guarantee if the remaining sections of the pipeline also assure this. Pulsar provides an abstraction called Consumer API that is based on a topic subscription. An application that subscribes to the topic can consume messages from the first unacknowledged message for this subscription. The Consumer API has the 4 main operations listed below: Subscribe to a topic. Start to consume from the first unacknowledged message. Consume messages Send an acknowledgment to the broker. A subscription determines how messages are delivered to consumers, and it is identified by its subscription-name. There are three available subscription modes in Pulsar: exclusive, shared, and failover. In the Pulsar source connector using Consumer API abstraction, the shared or round-robin mode is used. The shared subscription allows multiple consumers to attach to the same subscription. The messages belong to the subscribed topic are delivered in a round-robin distribution across consumers, and any given message is delivered to only one consumer. When a consumer disconnects, all the messages that were sent to it and not acknowledged will be rescheduled for sending to the remaining consumers. This subscription mode allows consuming messages in a distributed manner. However, this distributed manner prevents the order of messages from being preserved. Please see Pulsars documentation for information on other subscription options. Consumer API provides 4 different policies (2 for single or batch x 2 for sync or async) for receiving messages from the broker. These are Blocking Single Receive Async Single Receive Blocking Batch Receive (Our choice) Async Batch Receive Since `SourceBuilder.SourceBuffer` and `SourceBuilder.TimestampedSourceBuffer` are not thread-safe, it would require extra care when async policies were chosen. Therefore, handling the result of the async calls should not be performed by creating new threads. This requirement conflicts with the usual practice of async processing. That is why I did not prefer async policies. 
We prefer to use batch receive assuming that it is better in" }, { "data": "The batch receiving of Pulsar consumer assures that the batch receive will be completed as long as anyone of the conditions(has enough number of messages, has enough of the size of messages in terms of bytes and number, wait for a timeout) is met. As the default batch receive policy, we set timeout to 1000 ms and max number of messages to 512 messages. Since the batch receive policy of the consumer is more likely to change, a separate builder setter is created for it. As a default, Pulsar Broker deletes a message from the topic provided that all subscriptions for this topic acknowledge this message. By changing the broker configurations, these fully-acknowledged messages can continue to persist. With a consumer, the application resume consuming from the first unacknowledged message. It resembles the cursor mechanism but it hinders some tricky conditions that can result in message loss. In order to support fault-tolerance, the rollback mechanism should be able to perform properly in case of failure. However, the acknowledgment mechanism of Consumer API has no rollback mechanism. At the first trial of mine, between two snapshotting times, I stored the consumed messages ids in a list. Then, I tried to acknowledge the list of messages at the snapshot time (by assuming the job does not fail at snapshot time). But if the job fails during the snapshot, there is no way to rollback these acknowledgments. As a result, after the restart of the job, the job loses these acknowledged messages. If the Consumer API had a commit mechanism, it could guarantee exactly-once processing. To avoid these complications, you could remove acknowledgment logic entirely. But, this would introduce another short come: all of the messages would be stored permanently unless any eviction mechanism exists. For the continuous processes, this may cause excessive storage usage. Note that: The Pulsar consumer source does not fail fast in the scenario that when the job reading Pulsar topic using the Pulsar consumer source is running and Pulsar broker is closed, the job doesn't fail, it just prints log at warn level. The reason for this is that the Pulsar Client tries to reconnect to the broker with exponential backoff in this case, and while doing this, we cannot notice the connection issue because it only prints logs without throwing an exception. It should be determined when the messages are acknowledged or whether should be acknowledged at all. The current implementation of the Pulsar Consumer source acknowledges messages whenever it consumes them. It should be noted that this implementation prevents at-least-once logic let alone exactly-once. This issue with existence and timing of acknowledgment should be discussed further. The design choices regarding the usage of Consumer API are listed below: Shared subscription mode is preferred. Receive messages in a synchronous batch manner. Immediately send acknowledgments to a Pulsar broker after consuming a batch of messages. The Reader API is a lower-level API of the Pulsar. It does not have any subscription mode or so. A user can simply read from a topic by telling the Reader API from which message it should read onwards. The Reader API provides us the `MessageId` of the earliest or latest message. You can use one of these as the starting point for message" }, { "data": "Or, the user can declare the exact starting position by giving `MessageId` between these two ends. 
That being said, the current implementation of the source using Reader API overwrites any user-specified preference on the starting point and just reads from the earliest message onwards. The implementation might require enhancements in order not to ignore the users preference on the starting point of reading messages. Since Reader API enables us to start reading from a specified message, the source using Reader API can provide exactly-once processing. In Pulsar source using Reader API, the `MessageId` of the latest read message is stored in the snapshot. In case of failure, the job restarts from the latest snapshot and reads from the stored `MessageId`. For further information about Reader API visit its . Note that: The Pulsar reader source does not fail fast in the scenario that when the job reading Pulsar topic using the Pulsar reader source is running and Pulsar broker is closed, the job doesn't fail, it just prints logs at warn level. The reason for this is that the Pulsar Client tries to reconnect to the broker with exponential backoff in this case, and while doing this, we cannot notice the connection issue because it only prints logs without throwing an exception. Pulsar has Producer API to publish messages to the topics. Producer API provides 4 different options for sending messages to brokers: Sync Single Publish Async Single Publish (Our choice) Sync Batch Publish Async Batch Publish The builder pattern is used when constructing a Pulsar Sink. The constructor of the builder, `PulsarSinkBuilder`, consists of the required fields for Sink. The other optional fields can be added with setter functions as well. At the sink, the messages are sent in an asynchronous manner. This async sending has a retry mechanism, the max retry count is set to 10. Pulsar does not provide a commit mechanism as a messaging system (they want very low-latency). It has a deduplication mechanism to provide exactly-once processing. After enabling the deduplication, the Pulsar brokers remove the duplicate messages. It detects duplicates by looking into the `SequenceId` field of messages which is added at publish-time. Since the order of processing items cannot be guaranteed to be preserved in Jet pipelines, we cannot assign the same `SequenceId` to the same message after a job restart. The wrong assignments of `SequenceId` lead to loss of messages, we cannot benefit from this deduplication mechanism. For more information about Pulsar producers . Both the source and sink creation APIs of the Pulsar connector implement the builder pattern. We get the required parameters in the constructor and other parameters in the setters. Configuration values are set to default values and these default values can be changed using setters. We have shown basic usage in all of the usage examples we provided, if you want to change the values of optional parameters, you can use the setter methods of the builder. Both creating sources or sink of the Pulsar requires some kind of projection functions from the user as a required parameter. These functions are used to transform the data format with respect to the data transfer direction. In the case of the source, we need to convert the received Pulsar message to a Jet-compatible serializable data, that is, emitting" }, { "data": "The Jet sources use `Event Time` of the Pulsar message object as a timestamp, if it exists. Otherwise, its `Publish Time` is used as a timestamp which always presents. In turn, converting the processed items to the Pulsar message form is required at the sink. 
Additionally, Pulsar has a mechanism for type-safety called Schema. Normally Pulsar allows messaging without schema, but we preferred to impose schema usage while creating the sources and sink. It should be noted that not using any schema is almost equivalent to using Schema.BYTE in practice. The `PulsarTestSupport extends JetTestSupport` is formed to provide a Pulsar container, Pulsar clients, source and sink setup functions for tests. Various integration tests were to test the Pulsar connector. To test the sources, firstly, a Pulsar container is initiated. Then, through some client that is created in `PulsarTestSupport` for the purpose of testing (external to Jet), some sequence of messages are published into a pre-defined topic. After that, by using the Pulsar source, we run a job that reads from that topic. As the last step of the job, we collect the items and check whether all of the messages are read. To test the Pulsar sink, we perform the steps above in reverse order. To check fault tolerance support of PulsarReader source, distributed node failure recovery is simulated. Two Jet instances are created and the job is submitted to them. Once we make sure at least one snapshot is created for the job, we enforce the job to restart by killing one of the Jet instances and check if it manages not to lose or duplicate any messages. The current version of the source using Reader API does not support distributed processing. It can be enabled by adding a custom partition mapping mechanism to it. If a mechanism for ordering processing items add to the Jet pipelines, then the exactly-once processing semantics can be achieved on the Pulsar sink Example code for Pulsar Reader Source can be found in this . The program below creates a job that connects the local Pulsar cluster located at `localhost:6650`(default address) and then consumes messages from the pulsar topic, `hazelcast-demo-topic`, ```java package com.hazelcast.jet.contrib.pulsar; import com.hazelcast.jet.Jet; import com.hazelcast.jet.JetInstance; import com.hazelcast.jet.pipeline.Pipeline; import com.hazelcast.jet.pipeline.Sinks; import com.hazelcast.jet.pipeline.StreamSource; import org.apache.pulsar.client.api.BatchReceivePolicy; import org.apache.pulsar.client.api.Message; import org.apache.pulsar.client.api.PulsarClient; import org.apache.pulsar.client.api.Schema; import java.util.Collections; import java.util.HashMap; import java.util.Map; import java.util.concurrent.TimeUnit; public class PulsarConsumerDemo { public static void main(String[] args) { JetInstance jet = Jet.bootstrappedInstance(); final StreamSource<Integer> pulsarConsumerSource = PulsarSources.pulsarConsumerBuilder( \"hazelcast-demo-topic\", () -> PulsarClient.builder().serviceUrl(\"pulsar://localhost:6650\").build(), () -> Schema.INT32, Message::getValue).build(); Pipeline pipeline = Pipeline.create(); pipeline.readFrom(pulsarConsumerSource) .withoutTimestamps() .writeTo(Sinks.logger()); jet.newJob(pipeline).join(); } } ``` The created source above uses default client configurations, you can change them by using builder methods. 
The program below creates a job that connects the local Pulsar cluster located at `localhost:6650`(default address) and then publishes messages to the pulsar topic, `hazelcast-demo-topic`, ```java package com.hazelcast.jet.contrib.pulsar; import com.hazelcast.function.FunctionEx; import com.hazelcast.jet.Jet; import com.hazelcast.jet.JetInstance; import com.hazelcast.jet.config.JobConfig; import com.hazelcast.jet.pipeline.Pipeline; import com.hazelcast.jet.pipeline.Sink; import com.hazelcast.jet.pipeline.test.TestSources; import org.apache.pulsar.client.api.PulsarClient; import org.apache.pulsar.client.api.Schema; import java.util.HashMap; import java.util.Map; public class PulsarProducerDemo { public static void main(String[] args) { JetInstance jet = Jet.bootstrappedInstance(); Pipeline p = Pipeline.create(); Sink<Integer> pulsarSink = PulsarSinks.builder( \"hazelcast-demo-topic\", () -> PulsarClient.builder() .serviceUrl(\"pulsar://localhost:6650\") .build(), () -> Schema.INT32, FunctionEx.identity()).build(); p.readFrom(TestSources.itemStream(15)) .withoutTimestamps() .map(x -> (int) x.sequence()) .writeTo(pulsarSink); JobConfig jobConfig = new JobConfig(); jobConfig.setName(\"hazelcast-pulsar-producer\"); jet.newJob(p, jobConfig).join(); } } ```" } ]
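For completeness, a job built on the Reader-API-based source described earlier follows the same pattern as the consumer example; the builder name and parameter order below are assumptions inferred from that description, so verify them against the connector's actual API before relying on this sketch.

```java
package com.hazelcast.jet.contrib.pulsar;

import com.hazelcast.jet.Jet;
import com.hazelcast.jet.JetInstance;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.StreamSource;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Schema;

public class PulsarReaderDemo {
    public static void main(String[] args) {
        JetInstance jet = Jet.bootstrappedInstance();

        // Assumed builder signature; the reader-based source stores the latest
        // MessageId in the snapshot, so it can take part in exactly-once jobs.
        StreamSource<Integer> pulsarReaderSource = PulsarSources.pulsarReaderBuilder(
                "hazelcast-demo-topic",
                () -> PulsarClient.builder().serviceUrl("pulsar://localhost:6650").build(),
                () -> Schema.INT32,
                Message::getValue).build();

        Pipeline p = Pipeline.create();
        p.readFrom(pulsarReaderSource)
         .withNativeTimestamps(0)   // event time if present, otherwise publish time
         .writeTo(Sinks.logger());

        jet.newJob(p).join();
    }
}
```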
{ "category": "App Definition and Development", "file_name": "cmd_analyze.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: ANALYZE statement [YSQL] headerTitle: ANALYZE linkTitle: ANALYZE description: Collect statistics about database tables with the ANALYZE statement. techPreview: /preview/releases/versioning/#feature-availability menu: v2.18: identifier: cmd_analyze parent: statements type: docs ANALYZE collects statistics about the contents of tables in the database, and stores the results in the `pg_statistic` system catalog. These statistics help the query planner to determine the most efficient execution plans for queries. The YugabyteDB implementation is based on the framework provided by PostgreSQL, which requires the storage layer to provide a random sample of rows of a predefined size. The size is calculated based on a number of factors, such as the included columns' data types. {{< note title=\"Note\" >}} The sampling algorithm is not currently optimized for large tables. It may take several minutes to collect statistics from a table containing millions of rows of data. {{< /note >}} {{< note title=\"Note\" >}} Currently, YugabyteDB doesn't run a background job like PostgreSQL's autovacuum to analyze the tables. To collect or update statistics, run the ANALYZE command manually. {{< /note >}} {{%ebnf%}} analyze, tableandcolumns {{%/ebnf%}} Enable display of progress messages. Table name to be analyzed; may be schema-qualified. Optional. Omit to analyze all regular tables in the current database. List of columns to be analyzed. Optional. Omit to analyze all columns of the table. ```plpgsql yugabyte=# ANALYZE some_table; ``` ```output ANALYZE ``` ```plpgsql yugabyte=# ANALYZE some_table(col1, col3); ``` ```output ANALYZE ``` ```plpgsql yugabyte=# ANALYZE VERBOSE sometable, othertable; ``` ```output INFO: analyzing \"public.some_table\" INFO: \"some_table\": scanned, 3 rows in sample, 3 estimated total rows INFO: analyzing \"public.other_table\" INFO: \"other_table\": scanned, 3 rows in sample, 3 estimated total rows ANALYZE ``` This example demonstrates how statistics affect the optimizer. Let's create a new table... ```sql yugabyte=# CREATE TABLE test(a int primary key, b int); ``` ```output CREATE TABLE ``` ... and populate it. ```sql yugabyte=# INSERT INTO test VALUES (1, 1), (2, 2), (3, 3); ``` ```output INSERT 0 3 ``` In the absence of statistics, the optimizer uses hard-coded defaults, such as 1000 for the number of rows in the table. ```sql yugabyte=# EXPLAIN select * from test where b = 1; ``` ```output QUERY PLAN Seq Scan on test (cost=0.00..102.50 rows=1000 width=8) Filter: (b = 1) (2 rows) ``` Now run the ANALYZE command to collect statistics. ```sql yugabyte=# ANALYZE test; ``` ```output ANALYZE ``` After ANALYZE number of rows is accurate. ```sql yugabyte=# EXPLAIN select * from test where b = 1; ``` ```output QUERY PLAN Seq Scan on test (cost=0.00..0.31 rows=3 width=8) Filter: (b = 1) (2 rows) ``` Once the optimizer has better idea about data in the tables, it is able to create better performing query plans. {{< note title=\"Note\" >}} The query planner currently uses only the number of rows when calculating execution costs of the sequential and index scans. {{< /note >}}" } ]
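To inspect where the collected statistics end up, you can query the standard PostgreSQL catalog views after running ANALYZE, for example:

```sql
-- Row-count estimate maintained in pg_class:
SELECT relname, reltuples FROM pg_class WHERE relname = 'test';

-- Per-column statistics exposed through the pg_stats view (backed by pg_statistic):
SELECT attname, null_frac, n_distinct
FROM pg_stats
WHERE schemaname = 'public' AND tablename = 'test';
```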
{ "category": "App Definition and Development", "file_name": "annindexes.md", "project_name": "ClickHouse", "subcategory": "Database" }
[ { "data": "Nearest neighborhood search is the problem of finding the M closest points for a given point in an N-dimensional vector space. The most straightforward approach to solve this problem is a brute force search where the distance between all points in the vector space and the reference point is computed. This method guarantees perfect accuracy, but it is usually too slow for practical applications. Thus, nearest neighborhood search problems are often solved with . Approximative nearest neighborhood search techniques, in conjunction with [embedding methods](https://cloud.google.com/architecture/overview-extracting-and-serving-feature-embeddings-for-machine-learning) allow to search huge amounts of media (pictures, songs, articles, etc.) in milliseconds. Blogs: In terms of SQL, the nearest neighborhood problem can be expressed as follows: ``` sql SELECT * FROM tablewithann_index ORDER BY Distance(vectors, Point) LIMIT N ``` `vectors` contains N-dimensional values of type , for example embeddings. Function `Distance` computes the distance between two vectors. Often, the Euclidean (L2) distance is chosen as distance function but [other distance functions](/docs/en/sql-reference/functions/distance-functions.md) are also possible. `Point` is the reference point, e.g. `(0.17, 0.33, ...)`, and `N` limits the number of search results. An alternative formulation of the nearest neighborhood search problem looks as follows: ``` sql SELECT * FROM tablewithann_index WHERE Distance(vectors, Point) < MaxDistance LIMIT N ``` While the first query returns the top-`N` closest points to the reference point, the second query returns all points closer to the reference point than a maximally allowed radius `MaxDistance`. Parameter `N` limits the number of returned values which is useful for situations where `MaxDistance` is difficult to determine in advance. With brute force search, both queries are expensive (linear in the number of points) because the distance between all points in `vectors` and `Point` must be computed. To speed this process up, Approximate Nearest Neighbor Search Indexes (ANN indexes) store a compact representation of the search space (using clustering, search trees, etc.) which allows to compute an approximate answer much quicker (in sub-linear time). Syntax to create an ANN index over an column: ```sql CREATE TABLE tablewithann_index ( `id` Int64, `vectors` Array(Float32), INDEX [GRANULARITY [N]] ) ENGINE = MergeTree ORDER BY id; ``` ANN indexes are built during column insertion and merge. As a result, `INSERT` and `OPTIMIZE` statements will be slower than for ordinary tables. ANNIndexes are ideally used only with immutable or rarely changed data, respectively when are far more read requests than write requests. ANN indexes support two types of queries: ORDER BY queries: ``` sql SELECT * FROM tablewithann_index [WHERE ...] ORDER BY Distance(vectors, Point) LIMIT N ``` WHERE queries: ``` sql SELECT * FROM tablewithann_index WHERE Distance(vectors, Point) < MaxDistance LIMIT N ``` :::tip To avoid writing out large vectors, you can use [query parameters](/docs/en/interfaces/cli.md#queries-with-parameters-cli-queries-with-parameters), e.g. ```bash clickhouse-client --paramvec='hello' --query=\"SELECT * FROM tablewithannindex WHERE L2Distance(vectors, {vec: Array(Float32)}) < 1.0\" ``` ::: Restrictions: Queries that contain both a `WHERE Distance(vectors, Point) < MaxDistance` and an `ORDER BY Distance(vectors, Point)` clause cannot use ANN indexes. 
Also, the approximate algorithms used to determine the nearest neighbors require a limit, hence queries without `LIMIT` clause cannot utilize ANN indexes. Also, ANN indexes are only used if the query has a `LIMIT` value smaller than setting `maxlimitforannqueries` (default: 1 million" }, { "data": "This is a safeguard to prevent large memory allocations by external libraries for approximate neighbor search. Differences to Skip Indexes Similar to regular , ANN indexes are constructed over granules and each indexed block consists of `GRANULARITY = [N]`-many granules (`[N]` = 1 by default for normal skip indexes). For example, if the primary index granularity of the table is 8192 (setting `index_granularity = 8192`) and `GRANULARITY = 2`, then each indexed block will contain 16384 rows. However, data structures and algorithms for approximate neighborhood search (usually provided by external libraries) are inherently row-oriented. They store a compact representation of a set of rows and also return rows for ANN queries. This causes some rather unintuitive differences in the way ANN indexes behave compared to normal skip indexes. When a user defines an ANN index on a column, ClickHouse internally creates an ANN \"sub-index\" for each index block. The sub-index is \"local\" in the sense that it only knows about the rows of its containing index block. In the previous example and assuming that a column has 65536 rows, we obtain four index blocks (spanning eight granules) and an ANN sub-index for each index block. A sub-index is theoretically able to return the rows with the N closest points within its index block directly. However, since ClickHouse loads data from disk to memory at the granularity of granules, sub-indexes extrapolate matching rows to granule granularity. This is different from regular skip indexes which skip data at the granularity of index blocks. The `GRANULARITY` parameter determines how many ANN sub-indexes are created. Bigger `GRANULARITY` values mean fewer but larger ANN sub-indexes, up to the point where a column (or a column's data part) has only a single sub-index. In that case, the sub-index has a \"global\" view of all column rows and can directly return all granules of the column (part) with relevant rows (there are at most `LIMIT [N]`-many such granules). In a second step, ClickHouse will load these granules and identify the actually best rows by performing a brute-force distance calculation over all rows of the granules. With a small `GRANULARITY` value, each of the sub-indexes returns up to `LIMIT N`-many granules. As a result, more granules need to be loaded and post-filtered. Note that the search accuracy is with both cases equally good, only the processing performance differs. It is generally recommended to use a large `GRANULARITY` for ANN indexes and fall back to a smaller `GRANULARITY` values only in case of problems like excessive memory consumption of the ANN structures. If no `GRANULARITY` was specified for ANN indexes, the default value is 100 million. Annoy indexes are currently experimental, to use them you first need to `SET allowexperimentalannoy_index = 1`. They are also currently disabled on ARM due to memory safety problems with the algorithm. This type of ANN index is based on the which recursively divides the space into random linear surfaces (lines in 2D, planes in 3D etc.). 
<div class='vimeo-container'> <iframe" }, { "data": "width=\"640\" height=\"360\" frameborder=\"0\" allow=\"autoplay; fullscreen; picture-in-picture\" allowfullscreen> </iframe> </div> Syntax to create an Annoy index over an column: ```sql CREATE TABLE tablewithannoy_index ( id Int64, vectors Array(Float32), INDEX [annindexname] vectors TYPE annoy([Distance[, NumTrees]]) [GRANULARITY N] ) ENGINE = MergeTree ORDER BY id; ``` Annoy currently supports two distance functions: `L2Distance`, also called Euclidean distance, is the length of a line segment between two points in Euclidean space (). `cosineDistance`, also called cosine similarity, is the cosine of the angle between two (non-zero) vectors (). For normalized data, `L2Distance` is usually a better choice, otherwise `cosineDistance` is recommended to compensate for scale. If no distance function was specified during index creation, `L2Distance` is used as default. Parameter `NumTrees` is the number of trees which the algorithm creates (default if not specified: 100). Higher values of `NumTree` mean more accurate search results but slower index creation / query times (approximately linearly) as well as larger index sizes. :::note All arrays must have same length. To avoid errors, you can use a , for example, `CONSTRAINT constraintname1 CHECK length(vectors) = 256`. Also, empty `Arrays` and unspecified `Array` values in INSERT statements (i.e. default values) are not supported. ::: The creation of Annoy indexes (whenever a new part is build, e.g. at the end of a merge) is a relatively slow process. You can increase setting `maxthreadsforannoyindex_creation` (default: 4) which controls how many threads are used to create an Annoy index. Please be careful with this setting, it is possible that multiple indexes are created in parallel in which case there can be overparallelization. Setting `annoyindexsearchknodes` (default: `NumTrees * LIMIT`) determines how many tree nodes are inspected during SELECTs. Larger values mean more accurate results at the cost of longer query runtime: ```sql SELECT * FROM table_name ORDER BY L2Distance(vectors, Point) LIMIT N SETTINGS annoyindexsearchknodes=100; ``` :::note The Annoy index currently does not work with per-table, non-default `index_granularity` settings (see ). If necessary, the value must be changed in config.xml. ::: This type of ANN index is based on the , which implements the [HNSW algorithm](https://arxiv.org/abs/1603.09320), i.e., builds a hierarchical graph where each point represents a vector and the edges represent similarity. Such hierarchical structures can be very efficient on large collections. They may often fetch 0.05% or less data from the overall dataset, while still providing 99% recall. This is especially useful when working with high-dimensional vectors, that are expensive to load and compare. The library also has several hardware-specific SIMD optimizations to accelerate further distance computations on modern Arm (NEON and SVE) and x86 (AVX2 and AVX-512) CPUs and OS-specific optimizations to allow efficient navigation around immutable persistent files, without loading them into RAM. 
<div class='vimeo-container'> <iframe src=\"//www.youtube.com/embed/UMrhB3icP9w\" width=\"640\" height=\"360\" frameborder=\"0\" allow=\"autoplay; fullscreen; picture-in-picture\" allowfullscreen> </iframe> </div> Syntax to create an USearch index over an column: ```sql CREATE TABLE tablewithusearch_index ( id Int64, vectors Array(Float32), INDEX [annindexname] vectors TYPE usearch([Distance[, ScalarKind]]) [GRANULARITY N] ) ENGINE = MergeTree ORDER BY id; ``` USearch currently supports two distance functions: `L2Distance`, also called Euclidean distance, is the length of a line segment between two points in Euclidean space (). `cosineDistance`, also called cosine similarity, is the cosine of the angle between two (non-zero) vectors (). USearch allows storing the vectors in reduced precision formats. Supported scalar kinds are `f64`, `f32`, `f16` or `i8`. If no scalar kind was specified during index creation, `f16` is used as default. For normalized data, `L2Distance` is usually a better choice, otherwise `cosineDistance` is recommended to compensate for scale. If no distance function was specified during index creation, `L2Distance` is used as default." } ]
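Putting the pieces together, the following end-to-end sketch (the table name and data are illustrative, not taken from the text above) creates a table with an Annoy index, inserts a few two-dimensional vectors, and runs an approximate top-2 search. The `allow_experimental_annoy_index` setting is the un-garbled form of the setting named in the Annoy section above.

```sql
SET allow_experimental_annoy_index = 1;

CREATE TABLE demo_ann
(
    id Int64,
    vectors Array(Float32),
    INDEX idx_ann vectors TYPE annoy('L2Distance')
)
ENGINE = MergeTree
ORDER BY id;

INSERT INTO demo_ann VALUES (1, [0.1, 0.2]), (2, [0.9, 0.8]), (3, [0.15, 0.22]);

-- approximate nearest neighbours of the reference point (0.1, 0.2)
SELECT id
FROM demo_ann
ORDER BY L2Distance(vectors, [0.1, 0.2])
LIMIT 2;
```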
{ "category": "App Definition and Development", "file_name": "show-unused-sharding-key-generators.en.md", "project_name": "ShardingSphere", "subcategory": "Database" }
[ { "data": "+++ title = \"SHOW UNUSED SHARDING KEY GENERATORS\" weight = 6 +++ `SHOW SHARDING KEY GENERATORS` syntax is used to query sharding key generators that are not used in specified database. {{< tabs >}} {{% tab name=\"Grammar\" %}} ```sql ShowShardingKeyGenerators::= 'SHOW' 'SHARDING' 'KEY' 'GENERATOR' ('FROM' databaseName)? databaseName ::= identifier ``` {{% /tab %}} {{% tab name=\"Railroad diagram\" %}} <iframe frameborder=\"0\" name=\"diagram\" id=\"diagram\" width=\"100%\" height=\"100%\"></iframe> {{% /tab %}} {{< /tabs >}} When databaseName is not specified, the default is the currently used DATABASE. If DATABASE is not used, No database selected will be prompted. | column | Description | |--|--| | name | Sharding key generator name | | type | Sharding key generator type | | props | Sharding key generator properties | Query sharding key generators that are not used in the specified logical database ```sql SHOW UNUSED SHARDING KEY GENERATORS FROM sharding_db; ``` ```sql mysql> SHOW UNUSED SHARDING KEY GENERATORS FROM sharding_db; +-+--+-+ | name | type | props | +-+--+-+ | snowflakekeygenerator | snowflake | | +-+--+-+ 1 row in set (0.00 sec) ``` Query sharding key generators that are not used in the current logical database ```sql SHOW UNUSED SHARDING KEY GENERATORS; ``` ```sql mysql> SHOW UNUSED SHARDING KEY GENERATORS; +-+--+-+ | name | type | props | +-+--+-+ | snowflakekeygenerator | snowflake | | +-+--+-+ 1 row in set (0.00 sec) ``` `SHOW`, `UNUSED`, `SHARDING`, `KEY`, `GENERATORS`, `FROM`" } ]
{ "category": "App Definition and Development", "file_name": "role.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: ROLE linkTitle: ROLE description: ROLE menu: preview: parent: api-yedis weight: 2240 aliases: /preview/api/redis/role /preview/api/yedis/role type: docs YEDIS only has `master` role as far as Redis compatibility is concerned. `ROLE` This command provides information of a Redis instance, such as its role, its state of replication, its followers, or its master. Roles are either \"master\", \"follower\", or \"sentinel\". Information of a master instance may include the following. \"master\" An integer that represents state of replication An array of connected followers { IP address, IP port, State of replication } Information of a follower instance may include the following. \"follower\" Master IP address Master IP port Connection state that is either \"disconnected\", \"connecting\", \"sync\", or \"connected\" An integer that represents state of replication Information of a sentinel instance may include the following. \"sentinel\" An array of master names. Returns an array of values. ```sh $ ROLE ``` ``` 1) \"master\" 2) 0 3) 1) 1) \"127.0.0.1\" 2) \"9200\" 3) \"0\" 2) 1) \"127.0.0.1\" 2) \"9201\" 3) \"0\" ``` ," } ]
{ "category": "App Definition and Development", "file_name": "ycql-4.6.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: Connect an application linkTitle: Connect an app description: Java driver for YCQL menu: v2.16: identifier: ycql-java-driver-4.6 parent: java-drivers weight: 500 type: docs <ul class=\"nav nav-tabs-alt nav-tabs-yb\"> <li> <a href=\"../yugabyte-jdbc/\" class=\"nav-link\"> YSQL </a> </li> <li class=\"active\"> <a href=\"../ycql/\" class=\"nav-link\"> YCQL </a> </li> </ul> <ul class=\"nav nav-tabs-alt nav-tabs-yb\"> <li > <a href=\"../ycql/\" class=\"nav-link\"> <i class=\"icon-cassandra\" aria-hidden=\"true\"></i> YugabyteDB Java Driver for YCQL (3.10) </a> </li> <li > <a href=\"../ycql-4.6/\" class=\"nav-link active\"> <i class=\"icon-cassandra\" aria-hidden=\"true\"></i> YugabyteDB Java Driver for YCQL (4.6) </a> </li> </ul> To build a sample Java application with the , add the following Maven dependency to your application: ```xml <dependency> <groupId>com.yugabyte</groupId> <artifactId>java-driver-core</artifactId> <version>4.6.0-yb-6</version> </dependency> ``` This tutorial assumes that you have: installed YugabyteDB, created a universe, and are able to interact with it using the YCQL shell. If not, follow the steps in . installed JDK version 1.8 or later. installed Maven 3.3 or later. Create a file, named `pom.xml`, and then copy the following content into it. The Project Object Model (POM) includes configuration information required to build the project. ```xml <?xml version=\"1.0\"?> <project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"> <modelVersion>4.0.0</modelVersion> <groupId>com.yugabyte.sample.apps</groupId> <artifactId>hello-world</artifactId> <version>1.0</version> <packaging>jar</packaging> <properties> <maven.compiler.source>1.8</maven.compiler.source> <maven.compiler.target>1.8</maven.compiler.target> </properties> <dependencies> <dependency> <groupId>com.yugabyte</groupId> <artifactId>java-driver-core</artifactId> <version>4.6.0-yb-6</version> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-dependency-plugin</artifactId> <version>2.1</version> <executions> <execution> <id>copy-dependencies</id> <phase>prepare-package</phase> <goals> <goal>copy-dependencies</goal> </goals> <configuration> <outputDirectory>${project.build.directory}/lib</outputDirectory> <overWriteReleases>true</overWriteReleases> <overWriteSnapshots>true</overWriteSnapshots> <overWriteIfNewer>true</overWriteIfNewer> </configuration> </execution> </executions> </plugin> </plugins> </build> </project> ``` Create the appropriate directory structure as expected by Maven. ```sh $ mkdir -p src/main/java/com/yugabyte/sample/apps ``` Copy the following contents into the file `src/main/java/com/yugabyte/sample/apps/YBCqlHelloWorld.java`. ```java package com.yugabyte.sample.apps; import java.net.InetSocketAddress; import java.util.List; import com.datastax.oss.driver.api.core.CqlSession; import com.datastax.oss.driver.api.core.cql.ResultSet; import com.datastax.oss.driver.api.core.cql.Row; public class YBCqlHelloWorld { public static void main(String[] args) { try { // Create a YCQL client. CqlSession session = CqlSession .builder() .addContactPoint(new InetSocketAddress(\"127.0.0.1\", 9042)) .withLocalDatacenter(\"datacenter1\") .build(); // Create keyspace 'ybdemo' if it does not exist. 
String createKeyspace = \"CREATE KEYSPACE IF NOT EXISTS ybdemo;\"; session.execute(createKeyspace); System.out.println(\"Created keyspace ybdemo\"); // Create table 'employee', if it does not exist. String createTable = \"CREATE TABLE IF NOT EXISTS ybdemo.employee (id int PRIMARY KEY, \" + \"name varchar, \" + \"age int, \" + \"language varchar);\"; session.execute(createTable); System.out.println(\"Created table employee\"); // Insert a row. String insert = \"INSERT INTO ybdemo.employee (id, name, age, language)\" + \" VALUES (1, 'John', 35, 'Java');\"; session.execute(insert); System.out.println(\"Inserted data: \" + insert); // Query the row and print out the result. String select = \"SELECT name, age, language FROM ybdemo.employee WHERE id = 1;\"; ResultSet selectResult = session.execute(select); List < Row > rows = selectResult.all(); String name = rows.get(0).getString(0); int age = rows.get(0).getInt(1); String language = rows.get(0).getString(2); System.out.println(\"Query returned \" + rows.size() + \" row: \" + \"name=\" + name + \", age=\" + age + \", language: \" + language); // Close the client. session.close(); } catch (Exception e) { System.err.println(\"Error: \" + e.getMessage()); } } } ``` To build the project, run the following `mvn package` command. ```sh $ mvn package ``` You should see a `BUILD SUCCESS` message. To use the application, run the following command. ```sh $ java -cp \"target/hello-world-1.0.jar:target/lib/*\" com.yugabyte.sample.apps.YBCqlHelloWorld ``` You should see the following as the output. ```output Created keyspace ybdemo Created table employee Inserted data: INSERT INTO ybdemo.employee (id, name, age, language) VALUES (1, 'John', 35, 'Java'); Query returned 1 row: name=John, age=35, language: Java ```" } ]
{ "category": "App Definition and Development", "file_name": "version.md", "project_name": "YDB", "subcategory": "Database" }
[ { "data": "Use the `version` subcommand to find out the version of the {{ ydb-short-name }} CLI installed and manage new version availability auto checks. New version availability auto checks are made when you run any {{ ydb-short-name }} CLI command, except `ydb version --enable-checks` and `ydb version --disable-checks`, but only once in 24 hours. The result and time of the last check are saved to the {{ ydb-short-name }} CLI configuration file. General format of the command: ```bash {{ ydb-cli }} [global options...] version [options...] ``` `global options`: . `options`: . View a description of the command: ```bash {{ ydb-cli }} version --help ``` | Parameter | Description | | | `--semantic` | Get only the version number. | | `--check` | Check if a new version is available. | | `--disable-checks` | Disable new version availability checks. | | `--enable-checks` | Enable new version availability checks. | When running {{ ydb-short-name }} CLI commands, the system automatically checks if a new version is available. If the host where the command is run doesn't have internet access, this causes a delay and the corresponding warning appears during command execution. To disable auto checks for updates, run: ```bash {{ ydb-cli }} version --disable-checks ``` Result: ```text Latest version checks disabled ``` To facilitate data handling in scripts, you can limit result to the {{ ydb-short-name }} CLI version number: ```bash {{ ydb-cli }} version --semantic ``` Result: ```text 1.9.1 ```" } ]
{ "category": "App Definition and Development", "file_name": "static-constructor.md", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "+++ title = \"Phase 2 construction\" description = \"\" weight = 30 +++ Its phase 2 constructor: {{% snippet \"constructors.cpp\" \"file\" %}} The static member function implementing phase 2 firstly calls phase 1 which puts the object into a legally destructible state. We then proceed to implement phase 2 of construction, filling in the various parts as we go, reporting via `result` any failures. {{% notice note %}} Remember that `operator new` has a non-throwing form, `new(std::nothrow)`. {{% /notice %}} For the final return, in theory we could just `return ret` and depending on the C++ version currently in force, it might work via move, or via copy, or it might refuse to compile. You can of course type lots of boilerplate to be explicit, but this use via initialiser list is a reasonable balance of explicitness versus brevity, and it should generate minimum overhead code irrespective of compiler, C++ version, or any other factor." } ]
{ "category": "App Definition and Development", "file_name": "3.10.6.md", "project_name": "RabbitMQ", "subcategory": "Streaming & Messaging" }
[ { "data": "RabbitMQ `3.10.6` is a maintenance release in the `3.10.x` release series. Please refer to the upgrade section from if upgrading from a version prior to 3.10.0. This release requires at least Erlang 23.2, and supports Erlang 24 and 25. has more details on Erlang version requirements for RabbitMQ. Release notes can be found on GitHub at . Stream metric collection is now more CPU efficient. It helps in environments that have many streams. GitHub issue: Optimization: internal message GUID is no longer generated for quorum queues and streams, as they are specific to classic queues. GitHub issue: Two more AMQP 1.0 connection lifecycle events are now logged. GitHub issue: CRC32 checksum verification now can be disabled for quorum queues: ``` quorumqueue.computechecksums = false ``` This may be beneficial in environments that messages are large and CPU resources are limited. Note the CRC32 checksums are not the only data corruption detection mechanism used by quorum queues but they are good at catching certain types of corruption. Disabling checksums can provide quorum queue throughput boost of up to 15%. In environments where message size is small the gains will be smaller. GitHub issue: TLS configuration for inter-node stream replication connections now can use function references and definitions. GitHub issue: Stream protocol connection logging is now less verbose. GitHub issue: Max stream segment size is now limited to 3 GiB to avoid a potential stream position overflow. GitHub issue: Stream coordinator warnings now include operation name for clarify. GitHub issue: Logging messages that use microseconds now use \"us\" for the SI symbol to be compatible with more tools. GitHub issue: Channels on connections to mixed clusters that had 3.8 nodes in them could run into an exception. GitHub issue: Inter-node cluster link statistics did not have any data when TLS was enabled for them. GitHub issue: Quorum queues now correctly propagate errors when a `basic.get` (polling consumption) operation hits a timeout. Contributed by Ayanda @Ayanda-D Dube. GitHub issue: Stream consumer that used AMQP 0-9-1 instead of a stream protocol client, and disconnected, leaked a file handle. GitHub issue: Max frame size and client heartbeat parameters for clients were not correctly set when taken from `rabbitmq.conf`. GitHub issue: Removed a duplicate exchange decorator set operation. Contributed by Pter @gomoripeti Gmri. GitHub issue: Node restarts could result in a hashing ring inconsistency. This required a potentially breaking change: this exchange type now only allows for one binding between an exchange and a queue (or another exchange). All subsequent binding operations between them will be ignored, so \"first write wins\". This is a natural topology for this plugin, and enforcing it helps avoid a set of potential issues with concurrent node restarts and client operations that affect consistent hash ring state. GitHub issue: Consul peer discovery now supports client-side TLS options, much like its Kubernetes and etcd peers. 
``` ini cluster_formation.consul.scheme = https cluster_formation.consul.port = 8501 cluster_formation.consul.ssl_options.cacertfile = /path/to/consul/generated/ca_certificate.pem cluster_formation.consul.ssl_options.certfile = /path/to/client/certificate.pem cluster_formation.consul.ssl_options.keyfile = /path/to/client/client_key.pem ``` GitHub issue: `ra` upgraded from To obtain the source code of the entire distribution, please download the archive named `rabbitmq-server-3.10.6.tar.xz` instead of the source tarball produced by GitHub." } ]
{ "category": "App Definition and Development", "file_name": "DESCRIBE.md", "project_name": "StarRocks", "subcategory": "Database" }
[ { "data": "displayed_sidebar: \"English\" You can use the statement to perform the following operations: View the schema of a table stored in your StarRocks cluster, along with the type of the and of the table. View the schema of a table stored in the following external data sources, such as Apache Hive. Note that you can perform this operation only in StarRocks 2.4 and later versions. ```SQL DESCtable_name [ALL]; ``` | Parameter | Required | Description | | - | | | | catalogname | No | The name of the internal catalog or an external catalog. <ul><li>If you set the value of the parameter to the name of the internal catalog, which is `defaultcatalog`, you can view the schema of the table stored in your StarRocks cluster. </li><li>If you set the value of the parameter to the name of an external catalog, you can view the schema of the table stored in the external data source.</li></ul> | | db_name | No | The database name. | | table_name | Yes | The table name. | | ALL | No | <ul><li>If this keyword is specified, you can view the type of the sort key, materialized view, and schema of a table stored in your StarRocks cluster. If this keyword is not specified, you only view the table schema. </li><li>Do not specify this keyword when you view the schema of a table stored in an external data source.</li></ul> | ```Plain +--++-+++--++-+ | IndexName | IndexKeysType | Field | Type | Null | Key | Default | Extra | +--++-+++--++-+ ``` The following table describes the parameters returned by this statement. | Parameter | Description | | - | | | IndexName | The table name. If you view the schema of a table stored in an external data source, this parameter is not returned. | | IndexKeysType | The type of the sort key of the table. If you view the schema of a table stored in an external data source, this parameter is not returned. | | Field | The column name. | | Type | The data type of the column. | | Null | Whether the column values can be NULL. <ul><li>`yes`: indicates the values can be NULL. </li><li>`no`: indicates the values cannot be NULL. </li></ul>| | Key | Whether the column is used as the sort key. <ul><li>`true`: indicates the column is used as the sort key. </li><li>`false`: indicates the column is not used as the sort key. </li></ul>| | Default | The default value for the data type of the column. If the data type does not have a default value, a NULL is" }, { "data": "| | Extra | <ul><li>If you see the schema of a table stored in your StarRocks cluster, this field displays the following information about the column: <ul><li>The aggregate function used by the column, such as `SUM` and `MIN`. </li><li>Whether a bloom filter index is created on the column. If so, the value of `Extra` is `BLOOM_FILTER`. </li></ul></li><li>If you see the schema of a table stored in external data sources, this field displays whether the column is the partition column. If the column is the partition column, the value of `Extra` is `partition key`. </li></ul>| Note: For information about how a materialized view is displayed in the output, see Example 2. Example 1: View the schema of `example_table` stored in your StarRocks cluster. ```SQL DESC example_table; ``` Or ```SQL DESC defaultcatalog.exampledb.example_table; ``` The output of the preceding statements is as follows. 
```Plain +-+++-++-+ | Field | Type | Null | Key | Default | Extra | +-+++-++-+ | k1 | TINYINT | Yes | true | NULL | | | k2 | DECIMAL(10,2) | Yes | true | 10.5 | | | k3 | CHAR(10) | Yes | false | NULL | | | v1 | INT | Yes | false | NULL | | +-+++-++-+ ``` Example 2: View the schema, type of the sort key, and materialized view of `salesrecords` stored in your StarRocks cluster. In the following example, one materialized view `storeamt` is created based on `sales_records`. ```Plain DESC db1.sales_records ALL; +++--+--++-++-+ | IndexName | IndexKeysType | Field | Type | Null | Key | Default | Extra | +++--+--++-++-+ | salesrecords | DUPKEYS | record_id | INT | Yes | true | NULL | | | | | seller_id | INT | Yes | true | NULL | | | | | store_id | INT | Yes | true | NULL | | | | | sale_date | DATE | Yes | false | NULL | NONE | | | | sale_amt | BIGINT | Yes | false | NULL | NONE | | | | | | | | | | | storeamt | AGGKEYS | store_id | INT | Yes | true | NULL | | | | | sale_amt | BIGINT | Yes | false | NULL | SUM | +++--+--++-++-+ ``` Example 3: View the schema of `hive_table` stored in your Hive cluster. ```Plain DESC hivecatalog.hivedb.hive_table; +-+-++-+++ | Field | Type | Null | Key | Default | Extra | +-+-++-+++ | id | INT | Yes | false | NULL | | | name | VARCHAR(65533) | Yes | false | NULL | | | date | DATE | Yes | false | NULL | partition key | +-+-++-+++ ```" } ]
{ "category": "App Definition and Development", "file_name": "v21.3.17.2-lts.md", "project_name": "ClickHouse", "subcategory": "Database" }
[ { "data": "sidebar_position: 1 sidebar_label: 2022 Backported in : Fix a rare bug in `DROP PART` which can lead to the error `Unexpected merged part intersects drop range`. (). Backported in : Fix bug which can lead to error `Existing table metadata in ZooKeeper differs in sorting key expression.` after alter of `ReplicatedVersionedCollapsingMergeTree`. Fixes . (). Backported in : Fix benign race condition in ReplicatedMergeTreeQueue. Shouldn't be visible for user, but can lead to subtle bugs. (). Backported in : Fix reading of subcolumns from compact parts. (). Backported in : Fix higher-order array functions (`SIGSEGV` for `arrayCompact`/`ILLEGAL_COLUMN` for `arrayDifference`/`arrayCumSumNonNegative`) with consts. ()." } ]
{ "category": "App Definition and Development", "file_name": "EclipseUnitTestDebug.md", "project_name": "VoltDB", "subcategory": "Database" }
[ { "data": "This describes a step by step procedure for setting up an Eclipse project with a simple test application that allows unit testing and debugging of stored procedure and client code. You have installed a Java 8 JDK You have VoltDB: Enterprise or Pro Edition: the voltdb-ent-version.tar.gz file. Community Edition: follow wiki instructions on , then: run \"ant dist\" to build ./obj/release/voltdb-<version>.tar.gz You have followed to install the voltdb[-ent]-version.tar.gz file. You have added the voltdb[-ent]-version/bin directory to your PATH You have installed Eclipse and set up a workspace In order to run unit tests or debug stored procedures, it is necessary to run an instance of the VoltDB database within a java process. In v7.2, code was added to Voltdb to do this. If you are using v7.1 or earlier, you can load equivalent code from the app-debug-and-test repository. Follow the steps below to download this repository and build a jar containing these two classes, which you will add to your Eclipse project later. You can run these commands in any directory you wish. git clone https://github.com/VoltDB/app-debug-and-test.git cd app-debug-and-test ./compile_utils.sh Check that VoltDBProcedureTestUtils.jar was built in the app-debug-and-test directory. Choose File / New / Java Project from the Eclipse menu. On the \"Create a Java Project\" dialog: Project name: provide a name, e.g. TEST. Click \"Configure JREs\" Select \"Java SE 8 [1.8.0_version]\", then click \"Duplicate\" On the JRE Definition dialog: JRE name: add \"(for VoltDB application)\" as a suffix Default VM arguments: \"-ea -Xmx1g\" Click \"OK\" JRE: select \"Use a project specific JRE:\" and select \"Java SE 8 [1.8.0_version] (for VoltDB application)\" Click \"Finish\" VoltDB stored procedures depend on loading a voltdb[-ent]-<version>.jar library jar file found in the \"voltdb\" folder of your VoltDB distribution. If you are using v7.1 or earlier, in order to write a unit test that creates an in-process VoltDB instance, you also need the VoltDBProcedureTestUtils.jar file built earlier, as well as the third-party libraries found in the \"lib\" folder of your VoltDB distribution. Right click the \"TEST\" project and select \"Build Path\" and then \"Configure Build Path...\". Select the Libraries tab. Expand the \"JRE System Library...\" item in the build path tree Select the \"Native library location\" item. Click \"Edit...\" on the right side. In the \"Native Library Folder Configuration\" dialog: click \"External Folder...\" Select the \"voltdb\" folder from your installed VoltDB distribution, then click \"Open\" Click \"OK\" on the dialog Click \"Add External JARs...\" and select all jar files in the VoltDB distribution \"lib\" folder. Click \"Add External JARs...\" and select the voltdb (not client) jar file the VoltDB distribution \"voltdb\" folder. Click \"Add External JARs...\" and select the app-debug-and-test/VoltDBProcedureTestUtils.jar file you built eariler (only if you are using v7.1 or earlier) Click \"Add External JARs...\" and select the app-debug-and-test/lib/junit-version.jar file. Optionally, you could use another junit-version.jar file if you already have one. Click \"OK\" Select the \"src\" folder Choose File / New / File File Name: log4j.xml Click \"Finish\" Open the log4j.xml file in the editor. Then on the bottom of the editor, click on the \"Source\" tab. 
Then paste the following into the editor: ```xml <?xml version=\"1.0\" encoding=\"UTF-8\" ?> <!DOCTYPE log4j:configuration SYSTEM \"log4j.dtd\"> <log4j:configuration xmlns:log4j=\"http://jakarta.apache.org/log4j/\"> <appender name=\"console\" class=\"org.apache.log4j.ConsoleAppender\"> <layout class=\"org.apache.log4j.PatternLayout\"> <param name=\"ConversionPattern\" value=\"%d %-5p [%t] %c: %m%n\"/> </layout> </appender> <root> <priority value=\"info\" /> <appender-ref ref=\"console\" /> </root> </log4j:configuration> ``` If using VoltDB Enterprise or Pro Edition, you will need to add a license.xml file to your" }, { "data": "For Community Edition, this is not necessary. Later on, when creating the JUnit TestCase class, you will need to reference this file. Copy your purchased license file into the TEST project folder. Or, to use the trial license, copy the license.xml file provided in your VoltDB kit under the \"voltdb\" subfolder into the TEST project folder. Then use the File / Refresh menu item (or press F5) and the file should appear in the project. Choose File / New / File from the menu File name: DDL.sql Click \"Finish\" Open the DDL.sql file in the editor and paste in the following: ```sql CREATE TABLE foo ( id INTEGER NOT NULL, val VARCHAR(15), last_updated TIMESTAMP, PRIMARY KEY (id) ); PARTITION TABLE foo ON COLUMN id; CREATE PROCEDURE PARTITION ON TABLE foo COLUMN id FROM CLASS procedures.UpdateFoo; ``` Choose File / New / Package from the menu. Name: procedures Click \"Finish\" Select the \"procedures\" package and choose New / Class from the context menu. Name: UpdateFoo Superclass: org.voltdb.VoltProcedure Click \"Finish\" Open the UpdateFoo.java file in the editor and paste in the following: ```java package procedures; import org.voltdb.SQLStmt; import org.voltdb.VoltProcedure; import org.voltdb.VoltTable; import org.voltdb.VoltProcedure.VoltAbortException; public class UpdateFoo extends VoltProcedure { public final SQLStmt updateFoo = new SQLStmt( \"UPDATE foo SET last_updated = NOW, val = ? WHERE id = ?;\"); public final SQLStmt insertFoo = new SQLStmt( \"INSERT INTO foo VALUES (?,?,NOW);\"); public VoltTable[] run(int id, String val) throws VoltAbortException { voltQueueSQL(updateFoo, val, id); VoltTable[] results = voltExecuteSQL(); if (results[0].asScalarLong() == 0) { voltQueueSQL(insertFoo,id,val); results = voltExecuteSQL(); } return results; } } ``` Choose File / New / Package from the menu. Name: tests Click \"Finish\" Select the package and choose New / Class from the context menu. Name: BasicTest Superclass: junit.framework.TestCase Press Finish to create and open the new class. Open the BasicTest.java file in the editor and paste in the following: ```java package tests; import java.util.Date; import junit.framework.TestCase; import org.voltdb.InProcessVoltDBServer; import org.voltdb.VoltTable; import org.voltdb.client.Client; import org.voltdb.client.ClientResponse; import org.voltdb.client.ProcCallException; public class BasicTest extends TestCase { public void testProcedureReturn() throws Exception { // Create an in-process VoltDB server instance InProcessVoltDBServer volt = new InProcessVoltDBServer(); // If using Enterprise or Pro Edition, a license is required. // If using Community Edition, comment out the following line. volt.configPathToLicense(\"./license.xml\"); // Start the database volt.start(); // Load the schema volt.runDDLFromPath(\"./ddl.sql\"); // Create a client to communicate with the database Client client = volt.getClient(); // TESTS... 
// insert a row using a default procedure int id = 1; String val = \"Hello VoltDB\"; Date initialDate = new Date(); ClientResponse response = client.callProcedure(\"FOO.insert\",id,val,initialDate); assertEquals(response.getStatus(),ClientResponse.SUCCESS); // try inserting the same row, expect a unique constraint violation try { response = client.callProcedure(\"FOO.insert\",id,val,initialDate); } catch (ProcCallException e) { } // call the UpdateFoo procedure val = \"Hello again\"; response = client.callProcedure(\"UpdateFoo\", id, val); assertEquals(response.getStatus(), ClientResponse.SUCCESS); // check that one row was updated assertEquals(response.getResults()[0].asScalarLong(), 1); // select the row and check the values response = client.callProcedure(\"FOO.select\", id); VoltTable t = response.getResults()[0]; assertEquals(t.getRowCount(),1); t.advanceRow(); long lastUpdatedMicros = t.getTimestampAsLong(\"LAST_UPDATED\"); long initialDateMicros = initialDate.getTime()*1000; assertTrue(lastUpdatedMicros > initialDateMicros); String latestVal = t.getString(\"VAL\"); assertEquals(latestVal,val); volt.shutdown(); } } ``` Select the BasicTest.java class, and click the green \"Run\" button from the toolbar, or right-click and select Run As... > JUnit Test. The test should complete successfully. At this point you can launch the BasicTest class. You can freely set breakpoints in either the client code or the stored procedure since they run in the same process." } ]
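If you also want to exercise the insert branch of UpdateFoo (the path taken when the given id does not exist yet), a small additional test method along the following lines can be added to the same BasicTest class. It is a sketch that only reuses the APIs already shown above; the id value 42 is arbitrary.

```java
public void testUpdateFooInsertsWhenMissing() throws Exception {
    InProcessVoltDBServer volt = new InProcessVoltDBServer();
    // If using Community Edition, comment out the following line.
    volt.configPathToLicense("./license.xml");
    volt.start();
    volt.runDDLFromPath("./ddl.sql");
    Client client = volt.getClient();

    // id 42 was never inserted, so UpdateFoo should fall through to its INSERT branch.
    ClientResponse response = client.callProcedure("UpdateFoo", 42, "inserted via UpdateFoo");
    assertEquals(response.getStatus(), ClientResponse.SUCCESS);

    // exactly one row should have been inserted
    assertEquals(response.getResults()[0].asScalarLong(), 1);

    volt.shutdown();
}
```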
{ "category": "App Definition and Development", "file_name": "Project-ideas.md", "project_name": "Apache Storm", "subcategory": "Streaming & Messaging" }
[ { "data": "layout: documentation DSLs for non-JVM languages:* These DSL's should be all-inclusive and not require any Java for the creation of topologies, spouts, or bolts. Since topologies are structs, Nimbus is a Thrift service, and bolts can be written in any language, this is possible. Online machine learning algorithms:* Something like but for online algorithms Suite of performance benchmarks:* These benchmarks should test Storm's performance on CPU and IO intensive workloads. There should be benchmarks for different classes of applications, such as stream processing (where throughput is the priority) and distributed RPC (where latency is the priority)." } ]
{ "category": "App Definition and Development", "file_name": "go-sdk-release.md", "project_name": "Beam", "subcategory": "Streaming & Messaging" }
[ { "data": "title: \"Go SDK Exits Experimental in Apache Beam 2.33.0\" date: 2021-11-04 00:00:01 -0800 categories: blog authors: lostluck <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> Apache Beams latest release, version , is the first official release of the long experimental Go SDK. Built with the , the Go SDK joins the Java and Python SDKs as the third implementation of the Beam programming model. <!--more--> New users of the Go SDK can start using it in their Go programs by importing the main beam package: ``` import \"github.com/apache/beam/sdks/v2/go/pkg/beam\" ``` The next run of `go mod tidy` will fetch the latest stable version of the module. Alternatively executing `go get github.com/apache/beam/sdks/v2/go/pkg/beam` will download it to the local module cache immeadiately, and add it to your `go.mod` file. Existing users of the experimental Go SDK need to update to new `v2` import paths to start using the latest versions of the SDK. This can be done by adding `v2` to the import paths, changing `github.com/apache/beam/sdks/go/`... to `github.com/apache/beam/sdks/v2/go/`... where applicable, and then running `go mod tidy`. Further documentation on using the SDK is available in the , and in the package . At time of writing, the Go SDK is currently \"Batteries Not Included\". This means that there are gaps or edge cases in supported IOs and transforms. That said, the core of the SDK enables a great deal of the Beam Model for custom user use, supporting the following features: PTransforms Impulse Create ParDo with user DoFns Iterable side inputs Multiple output emitters Receive and return key-value pairs SplittableDoFns GroupByKey and CoGroupByKey Combine and CombinePerKey with user CombineFns Flatten Partition Composite transforms Cross language transforms Event time windowing Global, Interval, Sliding, and Session windows Aggregating over windowed PCollections with GroupByKeys or Combines Coders Primitive Go types (ints, string, []bytes, and more) Beam Schemas for Go struct types (including struct, slice, and map fields) Registering custom coders Metrics PCollection metrics (element counts, size estimates) Custom user metrics Post job user metrics querying (coming in 2.34.0) DoFn profiling metrics (coming in 2.35.0) Built-in transforms Sum, count, min, max, top, filter Scalable TextIO reading Upcoming feature roadmap, and known issues are discussed below. In particular, we plan to support a much richer set of IO connectors via Beam's cross-language capabilities. With this release, the Go SDK now uses for dependency management. This makes it so users, SDK authors, and the testing infrastructure can all rely on the same versions of dependencies, making builds reproducible. This also makes" }, { "data": "Versioned SDK worker containers are now built and , with the SDK using matching tagged versions. User jobs no longer need to specify a container to use, except when using custom containers. The Go SDK will largely follow suit with the Go notion of compatibility. 
Some concessions are made to keep all SDKs together on the same release cycle. The SDK will be tested at a minimum , and use available language features and standard library packages accordingly. To maintain a broad compatibility, the Go SDK will not require the latest major version of Go. We expect to follow the 2nd newest supported release of the language, with a possible exception when Go 1.18 is released, in order to begin experimenting with in the SDK. Release notes will call out when the minimum version of the language changes. The primary user packages will avoid changing in backwards incompatible ways for core features. This is to be inline with Go's notion of the . If an old package and a new package have the same import path, the new package must be backwards compatible with the old package. Exceptions to this policy are around newer, experimental, or in development features and are subject to change. Such features will have a doc comment noting the experimental status. Major changes will be mentioned in the release notes. For example, using `beam.WindowInto` with Triggers is currently experimental and may have the API changed in a future release. Primary user packages include: The main beam package `github.com/apache/beam/sdks/v2/go/pkg/beam` Sub packages under `.../transforms`, `.../io`, `.../runners`, and `.../testing`. Generally, packages in the module other than the primary user packages are for framework use and are at risk of changing. Current native transforms are undertested IOs may not be written to scale Go Direct Runner is incomplete and is not portable, prefer using the Python Portable runner, or Flink Doesn't support side input windowing. Doesn't serialize data, making it unlikely to catch coder issues Can use other general improvements, and become portable Current Trigger API is under iteration and subject to change API has a possible breaking change between 2.33.0 and 2.34.0, and may change again Support of the SDK on services, like Google Cloud Dataflow, remains at the service owner's discretion Need something? File a ticket in the and, Email the list! `top.SmallestPerKey` was broken `beam.TryCrossLanguage` API didn't match non-Try version This is a breaking change if one was calling `beam.TryCrossLanguage` Non-global window side inputs don't match (correctness bug) Until 2.35.0 it's not recommended to use side inputs that are not using the global window. DoFns using side inputs accumulate memory over bundles, causing out of memory issues The has been updated. Ongoing focus is to bolster streaming focused features, improve existing connectors, and make connectors easier to implement. In the nearer term this comes in the form of improvements to side inputs, and providing wrappers and improving ease-of-use for cross language transforms from Java. We hope you find the SDK useful, and it's still early days. If you make something with the Go SDK, consider . And remember, are always welcome." } ]
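To make the new import path concrete, here is a minimal runnable sketch of a pipeline using the v2 module path. It is only a sketch: it uses the core `beam` package described above plus the `x/beamx` and `x/debug` convenience packages from the same module, and the element values are arbitrary.

```go
package main

import (
	"context"
	"flag"
	"log"
	"strings"

	"github.com/apache/beam/sdks/v2/go/pkg/beam"
	"github.com/apache/beam/sdks/v2/go/pkg/beam/x/beamx"
	"github.com/apache/beam/sdks/v2/go/pkg/beam/x/debug"
)

func main() {
	flag.Parse()
	beam.Init()

	p := beam.NewPipeline()
	s := p.Root()

	// A tiny in-memory PCollection, transformed by a functional DoFn.
	words := beam.Create(s, "hello", "beam", "go")
	upper := beam.ParDo(s, strings.ToUpper, words)
	debug.Print(s, upper)

	// Runs on the direct runner by default; pass --runner to choose another.
	if err := beamx.Run(context.Background(), p); err != nil {
		log.Fatalf("pipeline failed: %v", err)
	}
}
```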
{ "category": "App Definition and Development", "file_name": "is-distinct-from.md", "project_name": "YDB", "subcategory": "Database" }
[ { "data": "Comparing of two values. Unlike the regular , NULLs are treated as equal to each other. More precisely, the comparison is carried out according to the following rules: 1) The operators `IS DISTINCT FROM`/`IS NOT DISTINCT FROM` are defined for those and only for those arguments for which the operators `!=` and `=` are defined. 2) The result of `IS NOT DISTINCT FROM` is equal to the logical negation of the `IS DISTINCT FROM` result for these arguments. 3) If the result of the `==` operator is not equal to zero for some arguments, then it is equal to the result of the `IS NOT DISTINCT FROM` operator for the same arguments. 4) If both arguments are empty `Optional` or `NULL`s, then the value of `IS NOT DISTINCT FROM` is `True`. 5) The result of `IS NOT DISTINCT FROM` for an empty `Optional` or `NULL` and filled-in `Optional` or non-`Optional` value is `False`. For values of composite types, these rules are used recursively." } ]
{ "category": "App Definition and Development", "file_name": "sql.md", "project_name": "CockroachDB", "subcategory": "Database" }
[ { "data": "Last update: September 2018 This document provides an architectural overview of the SQL layer in CockroachDB. The SQL layer is responsible for providing the \"SQL API\" that enables access to a CockroachDB cluster by client applications. Original author: knz Table of contents: Component details: - - - This document complements the prior document \"Life of a SQL query\" by Andrei. Andrei's document is structured as an itinerary, where the reader follows the same path as a SQL query and its response data through the architecture of CockroachDB. This architecture document is a top-down perspective of all the components involved side-by-side, which names and describes the relationships between them. In short, answers the question \"what happens and how\" and this document answers the question \"what are the parts involved\". tl;dr: there is an architecture, but it is not yet visible in the source code. In most state-of-the-art software projects, there exists a relatively good correspondence between the main conceptual items of the overall architecture (and its diagrams, say) and the source code. For example, if the architecture calls out a thing called e.g. \"query runner\", which takes as input a logical query plan (a data structure) and outputs result rows (another data structure), you'd usually expect a thing in the source code called \"query runner\" that looks like a class whose instances would carry the execution's internal state providing some methods that take a logical plan as input, and returning result rows as results. In CockroachDB's source code, this way of thinking does not apply: instead, CockroachDB's architecture is an emergent property of its source code. \"Emergent\" means that while it is possible to understand the architecture by reading the source code, the idea of an architecture can only emerge in the mind of the reader after an intense and powerful mental exercise of abstraction. Without this active effort, the code just looks like a plate of spaghetti until months of grit and iterative navigation and tinkering stimulates the reader's subconscious mind to run through the abstraction exercise on its own, and to slowly and incrementally reveal the architecture, while the reader's experience builds up. There are multiple things that can be said about this state of affairs: this situation sounds much worse than it really is. While the code is initially difficult to map to an overarching architecture, every person who has touched the code has made their best effort at maintaining good separation of responsibilities between different components. The fact that this document is able to reconstruct a relatively sane architectural model from the source code *despite the lack of explicit overarching architectural guidelines so far* is a testament to the quality of said source code and the work of all past and current contributors. nevertheless, *it exerts a high resistance against the onboarding of new team members, and it constitutes an obstacle to the formation of truly decoupled teams*. Let me explain. 
While our \"starter projects\" ensure that new team members get quickly up to speed with our engineering process, they are rather powerless at creating any high-level understanding whatsoever of how CockroachDB's SQL layer really" }, { "data": "My observations so far suggest that onboarding a contributor to CockroachDB's SQL code such that they can contribute non-trivial changes to any part of the SQL layer requires four to six months of incrementally complex assignments over all of the SQL layer. The reason for this is that (until this document was written) the internal components of the SQL layer were not conceptually isolated, so one had to work with all of them to truly understand their boundaries. By the time any good understanding of any single component could develop, the team member would have needed to look at and comprehend every other component. And therefore teams could not maintain strong conceptual isolation between areas of the source code, for any trainee would be working across boundaries all the time. finally, this situation is changing, and will change further. As the number of more experienced engineers grows, more of us are starting to consciously realize that this situation is untenable and that we must start to actively address complexity growth and the lack of internal boundaries. Me authoring this document serves as witness to this change of winds. Moreover, some feature work (e.g. concurrent execution of SQL statements) is already motivating some good refactorings by Nathan, and more are coming on the horizon. Ideally, this entire \"disclaimer\" section in this architecture document would eventually disappear. There is probably space for a document that would outline how we wish CockroachDB's SQL architecture to look like; this is left as an exercise for a next iteration, and we will focus here on recognizing what is there without judgement. In short, the rest of this document is a model, not a specification. Also, several sections have a note \"Whom to ask for details\". This reflects the current advertised expertise of several team members, so as to serve as a possible point of entry for questions by newcomers, but does not intend to denote \"ownership\": so far I know, we don't practice \"ownership\" in this part of the code base. The flow of data in the SQL layer during query processing can be summarized as follows: There are overall five main component groups: pgwire: the protocol translator between clients and the executor; the SQL front-end, responsible for parsing, desugaring, free simplifications and semantic analysis; this comprises the two blocks \"Parser\" and \"Expression analysis\" in the overview diagram. the SQL middle-end, responsible for logical planning and optimization. the SQL back-end, which comprises \"physical planning\" and \"query execution\". the executor, which coordinates between the previous four things, the session data, the state SQL transaction and the interactions with the state of the transaction in the KV layer. Note that these components are a fictional model: for efficiency and engineering reasons, the the front-end and middle-end are grouped together in the code; meanwhile the back-end is here considered as a single component but is effectively developed and maintained as multiple separate sub-components. 
Besides these components on the \"main\" data path of a common SQL query, there are additional auxiliary components that can also participate: the lease manager for access to SQL object descriptors; the schema change manager to perform schema changes asynchronously; the memory monitors for memory accounting. Although they are auxiliary to the main components above, only the memory monitor is relatively simple -- a large architectural discussion would be necessary to fully comprehend the complexity of SQL leases and schema" }, { "data": "The [detailed model section below](#detailed-model-components-and-data-flow) describes these components further and where they are located in the source code. It is common for SQL engines to separate processing of a query into two phases: preparation and execution. This is especially valuable because the work of preparation can be performed just once for multiple executions. In CockroachDB this separation exists, and the preparation phase is itself split into sub-phases: logical preparation and physical preparation. This can be represented as follows: This diagram reveals the following: There are 3 main groups of statements: Read-only queries which only use SELECT, TABLE and VALUES. DDL statements (CREATE, ALTER etc) which incur schema changes. The rest (non-DDL), including SHOW, SET and SQL mutations. The logical preparation phase contains two sub-phases: Semantic analysis and validation, which is common to all SQL statements; Plan optimization, which uses different optimization implementations (or none) depending on the statement group. The physical preparation is performed differently depending on the statement group. Query execution is also performed differently depending on the statement group, but with some shared components across statement groups. The previous section revealed that different statements pass through different stages in the SQL layer. This can be further illustrated in the following diagram: This diagram reveals the following: There are actually 6 statement groups currently: The read-only queries introduced above, DDL statements introduced above, SHOW/SET and other non-mutation SQL statements, SQL mutation statements, Bulk I/O statements in CCL code that influence the schema: IMPORT/EXPORT, BACKUP/RESTORE, Other CCL statements. There are 2 separate, independent and partly redundant implementations of semantic analysis and validation. The CCL code uses its own. (This is bad and ought to be changed, see below.) There are 3 separate, somewhat independent but redundant implementations of logical planning and optimizations. the SQL cost-based planner and optimizer is the new main component. the heuristic planner and optimizer was the code used before the cost-based optimizer was implemented, and is still used for every statement not yet supported by the optimizer. This code is being phased out as its features are being taken over by the cost-based optimizer. the CCL planning code exists in a separate package and tries hard (and badly) to create logical plans as an add-on package. It interfaces with the heuristic planner via some glue code that was organically grown over time without any consideration for maintainability. So it's bad on its own and also heavily relies on another component (the heuristic planner) which is already obsolete. (This is bad; this code needs to disappear.) There are 2 somewhat independent but redundant execution engines for SQL query plans: distributed and local. 
These two are currently being merged, although CCL statements have no way to integrate with distributed execution currently and still heavily rely on local execution. (This is bad; this needs to change.) The remaining components are used adequately by the statement types that require them and not more. This proliferation of components is a historical artifact of the CockroachDB implementation strategy in 2017, and is not to remain in the long term. The desired situation looks more like the following: That is, use the same planning and execution code for all the statement" }, { "data": "Here is a more detailed version of the summary of data flow interactions between components, introduced at the beginning: (Right-click then \"open image in new window\" to zoom in and keep the diagram open while you read the rest of this document.) There are two main interfaces between the SQL layer and its \"outside world\": the network SQL interface, for clients connections that want to speak SQL (via pgwire); the transactional KV store, CockroachDB's next high-level abstraction layer; I call these \"main\" interfaces because they are fundamentally necessary to provide any kind of SQL functionality. Also they are rather conceptually narrow: the network SQL interface is more or less \"SQL in, rows out\" and the KV interface is more or less \"KV ops out, data/acks in\". In addition, there exist also a few interfaces that are a bit less visible and emerge as a side-effect of how the current source code is organized: the distSQL flows to/from \"processors\" running locally and on other nodes. these establish their own network streams (on top of gRPC) to/from other nodes. the interface protocol is more complex: there are sub-protocols to set up and tear down flows; managing errors, and transferring data between processors. distSQL processors do not access most of the rest of the SQL code; the only interactions are limited to expression evaluation (a conceptually very small part of the local runner) and accessing the KV client interface. the distSQL physical planner also talks directly to the distributed storage layer to get locality information about which nodes are leaseholders for which ranges. the internal SQL interface, by which other components of CockroachDB can use the SQL interface to access lower layers without having to open a pgwire connection. The users of the internal interface include: within the SQL layer itself, the lease manager and the schema change manager, which are outlined below, the admin RPC endpoints, used by CLI tools and the admin web UI. outside of the SQL layer: metrics collector (db stats), database event log collector (db/table creation/deletion etc), etc the memory monitor interface; this is currently technically in the SQL layer but it aims to regulate memory allocations across client connections and the admin RPC, so it has global state independent of SQL and I count it as somewhat of a fringe component. the event logger: this is is where the SQL layer saves details about important events like when a DB or table was created, etc. (This is perhaps the architectural component that is the most recognizable as an isolated thing in the source code.) Roles: primary: serve as a protocol translator between network clients that speak pgwire and the internal API of the SQL layer. secondary: authenticate incoming client connections. How: Overall architecture: event loop, one per connection (in separate goroutines, `v3Conn.serve()`). 
Get data from network, call into executor, put data into network when executor call returns, rinse, repeat. Interfaces: The network side (`v3Conn.conn` implementing `net.Conn`): gets bytes of pgwire protocol in from the network, sends bytes of pgwire protocol out to the network. memory monitor (`Server.connMonitor`): pre-reserves chunks of memory from the global SQL pool (`Server.sqlMemoryPool`), that can be reused for smallish SQL sessions without having to grab the global mutex. Executor: pgwire queues input SQL queries and COPY data packets to the \"conn executor\" in the `sql`" }, { "data": "For each input SQL query pgwire also prepares a \"result collector\" that goes into the queue. The executor monitors this queue, executes the incoming queries and delivers the results via the result collectors. pgwire then translates the results to response packets towards the client. Code lives in `sql/pgwire`. Whom to ask for details: mattj, jordan, alfonso, nathan. the \"Parser\" (really: lexer + parser), in charge of syntactic analysis. scalar expression semantic analysis, including name resolution, constant folding, type checking and simplification. statement semantic analysis, including e.g. existence tests on the target names of schema change statements. Reminder: \"semantic analysis\" as a general term is the phase in programming language transformers where the compiler determines if the input makes sense. The output of semantic analysis is thus conceptually a yes/no answer to the question \"does this make sense\" and the input program, optionally with some annotations. Role: transform SQL strings into syntax trees. Interface: SQL string in, AST (Abstract Syntax Tree) out. mainly `Parser.Parse()` in `sql/parser/parse.go`. How: The code is a bit spread out but quite close to what every textbook suggests. `Parser.Parse()` really: creates a LL(2) lexer (`Scanner` in `scan.go`) invokes a go-yacc-generated LALR parser using said scanner (`sql.go` generated from `sql.y`) go-yacc generates LALR(1) logic, but SQL really needs LALR(2) because of ambiguities with AS/NOT/WITH; to work around this, the LL(2) scanner creates LALR(1) pseudo-tokens marked with `_LA` based on its 2nd lookahead. expects either an error or a `Statement` list from the parser, and returns that to its caller. the list of tokens recognized by the lexer is automatically derived from the yacc grammar (cf. `sql/parser/Makefile`) many AST nodes!!! until now we have wanted to be able to pretty-print the AST back to its original SQL form or as close as possible no good reason from a product perspective, it was just useful in tests early on so we keep trying out of tradition so the parser doesn't desugar most things (there can be `ParenExpr` or `ParenSelect` nodes in the parsed AST...) except it does actually desugars some things like `TRIM(TRAILING ...)` to `RTRIM(...)`. too many nodes, really, a hell to maintain. \"IR project\" ongoing to auto-generate all this code. AST nodes have a slot for a type annotation, filled in the middle-end (below) by the type checker. Whom to ask for details: pretty much anyone. Role: check AST expressions are valid, do some preliminary optimizations on them, provide them with types. Interface: `Expr` AST in, `TypedExpr` AST out (actually: typed+simplified expression) via `analyzeExpr()` (`sql/analyze.go`) How: name resolution (in various places): replaces column names by `parser.IndexedVar` instances, replaces function names by `parser.FuncDef` references. 
`parser.TypeCheck()`/`parser.TypeCheckAndRequire()`: performs constant folding; performs type inference; performs type checking; memoizes comparator functions on `ComparisonExpr` nodes; annotates expressions and placeholders with their types. `parser.NormalizeExpr()`: desugar and simplify expressions: for example, `(a+1) < 3` is transformed to `a < 2` for example, `-(a - b)` is transformed to `(b - a)` for example, `a between c and d` is transformed to `a >= c and a <= d` the name \"normalize\" is a bit of a misnomer, since there is no real normalization going on. The historical motivation for the name was the transform that tries hard to pull everything but variable names to the right of" }, { "data": "The implementation of these sub-tasks is nearly purely functional. The only wart is that `TypeCheck` spills the type of SQL placeholders (`$1`, `$2` etc) onto the semantic context object passed through the recursion in a way that is order-sensitive. Note: it's possible to inspect the expressions without desugaring and simplification using `EXPLAIN(EXPRS, TYPES)`. Whom to ask for details: the SQL team(s). Role: check that SQL statements are valid. Interface: There are no interfaces here, unfortunately. The code for statement semantic analysis is currently interleaved with the code to construct the logical query plan. This does use (call into) expression semantic analysis as described above. How: check the existence of databases or tables for statements that assume their existence this interacts with the lease manager to access descriptors check permissions for statements that require specific privileges. perform expression semantic analysis for every expression used by the statement. check the validity of requested schema change operations for DDL statements. Code: in the `opt` package, also currently some code in the `sql` package. Whom to ask for details: the SQL team(s). Two things are involved here: logical planner: transforms the annotated AST into a logical plan. logical plan optimizer: makes the logical plan better. Role: turn the AST into a logical plan. Interface: see `opt/optbuilder`. How: in-order depth-first recursive traversal of the AST; invokes semantics checks on the way; constructs a tree of relational expression nodes. This tree is also called the memo because of the data structure it uses internally. the resulting tree is the logical plan. Whom to ask for details: the SQL team(s). Role: make queries run faster. Interface: see `opt`. Whom to ask for details: the optimizer team. Role: plan the distribution of query execution (= decide which computation goes to which node) and then actually run the query. See the distSQL RFC and \"Life of a SQL query\" for details. Code: `pkg/sql/distsql{plan,run}` Whom to ask for details: the SQL execution team. Role: perform individual relational operations in a currently executing distributed plan. Whom to ask for details: the SQL execution team. 
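As a purely illustrative aid for the distributed back-end just described, the following Go sketch shows the kind of decision physical planning reduces to: grouping the key spans of a scan by the node that can read them locally. The `Span` and `NodeID` types, the hard-coded leaseholder table, and every other name are assumptions made up for this sketch; none of them mirror the real planner.

```go
package main

import "fmt"

// Span is a simplified key range to scan; NodeID identifies a hypothetical
// cluster node. Neither type mirrors real CockroachDB identifiers.
type Span struct{ Start, End string }
type NodeID int

// leaseholder stands in for asking the storage layer which node holds the
// lease for a given span. Here it is just a hard-coded rule for the example.
func leaseholder(s Span) NodeID {
	if s.Start < "m" {
		return 1
	}
	return 2
}

// assignSpans groups the spans of a table scan by the node that can read
// them locally, which is the essence of "decide which computation goes to
// which node", stripped of every real-world detail.
func assignSpans(spans []Span) map[NodeID][]Span {
	byNode := make(map[NodeID][]Span)
	for _, s := range spans {
		n := leaseholder(s)
		byNode[n] = append(byNode[n], s)
	}
	return byNode
}

func main() {
	spans := []Span{{"a", "f"}, {"f", "m"}, {"m", "s"}, {"s", "z"}}
	for node, ss := range assignSpans(spans) {
		fmt.Printf("node %d scans %v\n", node, ss)
	}
}
```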
Roles: coordinate between the other components maintain the state of the SQL transaction maintain the correspondence between the SQL txn state and the KV txn state perform automatic retries of implicit transactions, or transactions entirely contained in a SQL string received from pgwire track metrics Interfaces: from pgwire: `ExecuteStatements()`, `Prepare()`, `session.PreparedStatements.New()`/`Delete()`, `CopyData()`/`CopyDone()`/`CopyEnd()`; for the internal SQL interface: `QueryRow()`, `queryRows()`, `query()`, `exec()`; into the other components within the SQL layer: see the interfaces in the previous sections of this document; towards the memory monitor: to account for result set accumulated in memory between transaction boundaries; How: maintains its state in the `Session` object; there's a monster spaghetti code state machine in `executor.go`; there's a monster \"god class\" called `planner`; it's a mess, and yet it works! Whom to ask for details: andrei, nathan This thing is responsible for leasing cached descriptors to the rest of SQL. Interface: the lease manager presents an interface to \"get cached descriptors\" for the rest of the SQL code. Why: we don't want to retrieve the schema descriptors using KV in every transaction, as this would be slow, so we cache" }, { "data": "since we cache descriptors, we need a cache consistency protocol with other nodes to ensure that descriptors are not cached forever and that cached copies are not so stale as to be invalid when there are schema changes. The lease manager abstracts this protocol from the rest of the SQL code. How: It's quite complicated. However the state of the lease manager is itself stored in a SQL table `system.leases`, and thus internally the lease manager must be able to issue SQL queries to access that table. For this, it uses the internal SQL interface. It's really like \"SQL calling into itself\". The reason why we don't get \"turtles all the way down\" is that the descriptor for `system.leases` is not itself cached. Note that the lease manager uses the same KV `txn` object as the ongoing SQL session, to ensure that newly leased descriptors are atomically consistent with the rest of the statements in the same transaction. Code: `sql/lease.go`. Whom to ask for details: vivek, dt, andrei. This is is responsible for performing changes to the SQL schema. Interface: \"intentions\" to change the schema are defined as mutation records on the various descriptors, once mutation records are created, client components can write the descriptors back to the KV store, however they also must inform the schema change manager that a schema change must start via `notifySchemaChange`. Why: Adding a column to a very large table or removing a column can be very long. Instead of performing these operations atomically within the transaction where they were issued, CockroachDB runs schema changes asynchronously. Then asynchronously the schema change manager will process whatever needs to be done, such as backfilling a column or populating an index, using a sequence of separate KV transactions. How: It's quite complicated. Unlike the lease manager, the current state of ongoing schema changes is not stored in a SQL table (it's stored directly in the descriptors); however the schema change manager is (soon) to maintain an informational \"job table\" to provide insight to users about the progress of schema changes, and that is a SQL table. 
So like the lease manager, the schema change manager uses the internal SQL interface, and we have another instance here of \"SQL calling into itself\". The reason why we don't get \"turtles all the way down\" is that the schema change manager never issues SQL that performs schema changes, and thus never issues requests to itself. Also the schema change manager internally talks to the lease manager: leases have to stay consistent with completed schema changes! Code: `sql/schema_changer.go`. Whom to ask for details: vivek, dt. Memory monitors have a relatively simple role: remember how much memory has been allocated so far and ensure that the sum of allocations does not exceed some preset maximum. To ensure this: monitors get initialized with the maximum value (\"budget\") they will support; other things register their allocations to their monitor using an \"account\"; registrations can fail with an error \"not enough budget\"; all allocations can be de-registered at once by calling `Close` on an account. In addition a monitor can be \"subservient\" to another monitor, with its allocations counted against both its own budget and the budget of the monitor one level up. Code: `util/mon`; more details in a comment at the start of `util/mon/bytes_usage.go`. Whom to ask for details: the SQL execution team" } ]
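To illustrate the budget-and-account discipline of the memory monitors described above, here is a minimal, self-contained Go sketch. It mirrors the behaviour only in spirit: the type names, the error message, and the two-level monitor hierarchy are invented for the example and are much simpler than the real `util/mon` package.

```go
package main

import (
	"errors"
	"fmt"
)

// Monitor tracks how many bytes have been reserved against a fixed budget,
// optionally forwarding every reservation to a parent monitor.
type Monitor struct {
	budget int64
	used   int64
	parent *Monitor
}

var errBudget = errors.New("not enough budget")

func (m *Monitor) reserve(n int64) error {
	if m.used+n > m.budget {
		return errBudget
	}
	if m.parent != nil {
		if err := m.parent.reserve(n); err != nil {
			return err
		}
	}
	m.used += n
	return nil
}

func (m *Monitor) release(n int64) {
	m.used -= n
	if m.parent != nil {
		m.parent.release(n)
	}
}

// Account remembers its own total so that Close can return everything at once.
type Account struct {
	mon  *Monitor
	size int64
}

func (a *Account) Grow(n int64) error {
	if err := a.mon.reserve(n); err != nil {
		return err
	}
	a.size += n
	return nil
}

func (a *Account) Close() {
	a.mon.release(a.size)
	a.size = 0
}

func main() {
	root := &Monitor{budget: 1 << 20}                    // global pool
	session := &Monitor{budget: 256 << 10, parent: root} // subservient monitor

	acc := Account{mon: session}
	fmt.Println(acc.Grow(100 << 10)) // <nil>
	fmt.Println(acc.Grow(200 << 10)) // not enough budget
	acc.Close()                      // all allocations released at once
}
```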
{ "category": "App Definition and Development", "file_name": "README.md", "project_name": "Beam", "subcategory": "Streaming & Messaging" }
[ { "data": "<!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> The upgrading of the vendored dependencies should be performed in two steps: Firstly, we need to perform a formal release of the vendored dependency. The of the vendored dependency is separate from the release of Apache Beam. When the release of the vendored dependency is out, we can migrate Apache Beam to use the newly released vendored dependency. The is useful for the vendored dependency upgrades. It reports the linkage errors across multiple Apache Beam artifact ids. For example, when we upgrade the version of gRPC to 1.54.0 and the version of the vendored gRPC is 0.1-SNAPSHOT, we could run the linkage tool as following: ``` $ ./gradlew -p vendor/grpc-1601 publishMavenJavaPublicationToMavenLocal -Ppublishing -PvendoredDependenciesOnly $ ./gradlew -PvendoredDependenciesOnly -Ppublishing -PjavaLinkageArtifactIds=beam-vendor-grpc-1601:0.1-SNAPSHOT :checkJavaLinkage ``` It's expected that the task outputs some linkage errors. While the `checkJavaLinkage` task does not retrieve optional dependencies to avoid bloated dependency trees, Netty (one of gRPC dependencies) has various optional features through optional dependencies. Therefore the task outputs the linkage errors on the references to missing classes in the optional dependencies when applied for the vendored gRPC" }, { "data": "As long as Beam's use of gRPC does not touch these optional Netty features or the classes are available at runtime, it's fine to have the references to the missing classes. Here are the known linkage errors: References to `org.junit.*`: `io.grpc.testing.GrpcCleanupRule` and `io.grpc.testing.GrpcServerRule` uses JUnit classes, which are present when we run Beam's tests. References from `io.netty.handler.ssl`: Netty users can choose SSL implementation based on the platform (). Beam's vendored gRPC uses `netty-tcnative-boringssl-static`, which contains the static libraries for all supported OS architectures (x86_64 and aarch64). The `io.netty.handler.ssl` package has classes that have references to missing classes in other unused optional SSL implementations. References from `io.netty.handler.codec.compression`: Beam does not use the optional dependencies for compression algorithms (brotli, jzlib, lzma, lzf, and zstd) through Netty's features. References to `com.google.protobuf.nano` and `org.jboss.marshalling`: Beam does not use the optional serialization algorithms. References from `io.netty.util.internal.logging`: Netty's logging framework can choose available loggers at runtime. The logging implementations are optional dependencies and thus are not needed to be included in the vendored artifact. Slf4j-api is available at Beam's runtime. 
References to `reactor.blockhound`: When enabled, Netty's BlockHound integration can detect unexpected blocking calls. Beam does not use it. Once you've verified using the linkage tool, you can test new artifacts by running unit and integration tests against a PR. Example PRs: Updating gRPC version (large) https://github.com/apache/beam/pull/16460 Testing updated gRPC version (large) https://github.com/apache/beam/pull/22595 Updating protobuf for calcite (minor version update): https://github.com/apache/beam/pull/16476 Steps: Generate new artifact files with `publishMavenJavaPublicationToMavenLocal` and copy to the `tempLib` folder in Beam: ``` ./gradlew -p vendor/grpc-1601 publishMavenJavaPublicationToMavenLocal -Ppublishing -PvendoredDependenciesOnly mkdir -p tempLib/org/apache/beam cp -R ~/.m2/repository/org/apache/beam/beam-vendor-grpc-1601` \\ tempLib/org/apache/beam/ ``` Add the folder to the expected project repositories: ``` repositories { maven { url \"${project.rootDir}/tempLib\" } maven { ... } } ``` Migrate all references from the old dependency to the new dependency, including imports if needed. Commit any added or changed files and create a PR to run unit and integration tests on. This can be a draft PR, as you will not merge this PR." } ]
{ "category": "App Definition and Development", "file_name": "Socket.md", "project_name": "SeaTunnel", "subcategory": "Streaming & Messaging" }
[ { "data": "Socket source connector Spark<br/> Flink<br/> SeaTunnel Zeta<br/> Used to read data from Socket. The File does not have a specific type list, and we can indicate which SeaTunnel data type the corresponding data needs to be converted to by specifying the Schema in the config. | SeaTunnel Data type | || | STRING | | SHORT | | INT | | BIGINT | | BOOLEAN | | DOUBLE | | DECIMAL | | FLOAT | | DATE | | TIME | | TIMESTAMP | | BYTES | | ARRAY | | MAP | | Name | Type | Required | Default | Description | |-||-||-| | host | String | Yes | _ | socket server host | | port | Integer | Yes | _ | socket server port | | common-options | | no | - | Source plugin common parameters, please refer to for details. | Configuring the SeaTunnel config file The following example demonstrates how to create a data synchronization job that reads data from Socket and prints it on the local client: ```bash env { parallelism = 1 job.mode = \"BATCH\" } source { Socket { host = \"localhost\" port = 9999 } } sink { Console { parallelism = 1 } } ``` Start a port listening ```shell nc -l 9999 ``` Start a SeaTunnel task Socket Source send test data ```text ~ nc -l 9999 test hello flink spark ``` Console Sink print data ```text [test] [hello] [flink] [spark] ```" } ]
{ "category": "App Definition and Development", "file_name": "CHANGELOG.1.2.2.md", "project_name": "Apache Hadoop", "subcategory": "Database" }
[ { "data": "<! --> | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Unnecessary Configuration instantiation in IFileInputStream slows down merge - Port to branch-1 | Blocker | mrv1 | Stanislav Barton | Stanislav Barton | | | Map-only job with new-api runs wrong OutputCommitter when cleanup scheduled in a reduce slot | Blocker | client, job submission | Gera Shegalov | Gera Shegalov |" } ]
{ "category": "App Definition and Development", "file_name": "load.md", "project_name": "Flink", "subcategory": "Streaming & Messaging" }
[ { "data": "title: \"LOAD Statements\" weight: 12 type: docs aliases: /dev/table/sql/load.html <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> LOAD statements are used to load a built-in or user-defined module. {{< tabs \"load statement\" >}} {{< tab \"Java\" >}} LOAD statements can be executed with the `executeSql()` method of the `TableEnvironment`. The `executeSql()` method returns 'OK' for a successful LOAD operation; otherwise, it will throw an exception. The following examples show how to run a LOAD statement in `TableEnvironment`. {{< /tab >}} {{< tab \"Scala\" >}} LOAD statements can be executed with the `executeSql()` method of the `TableEnvironment`. The `executeSql()` method returns 'OK' for a successful LOAD operation; otherwise, it will throw an exception. The following examples show how to run a LOAD statement in `TableEnvironment`. {{< /tab >}} {{< tab \"Python\" >}} LOAD statements can be executed with the `executesql()` method of the `TableEnvironment`. The `executesql()` method returns 'OK' for a successful LOAD operation; otherwise, it will throw an exception. The following examples show how to run a LOAD statement in `TableEnvironment`. {{< /tab >}} {{< tab \"SQL CLI\" >}} LOAD statements can be executed in . The following examples show how to run a LOAD statement in SQL CLI. {{< /tab >}} {{< /tabs >}} {{< tabs \"load modules\" >}} {{< tab \"Java\" >}} ```java StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(); StreamTableEnvironment tEnv = StreamTableEnvironment.create(env); // load a hive module tEnv.executeSql(\"LOAD MODULE hive WITH ('hive-version' = '3.1.3')\"); tEnv.executeSql(\"SHOW MODULES\").print(); // +-+ // | module name | // +-+ // | core | // | hive | // +-+ ``` {{< /tab >}} {{< tab \"Scala\" >}} ```scala val env = StreamExecutionEnvironment.getExecutionEnvironment() val tEnv = StreamTableEnvironment.create(env) // load a hive module tEnv.executeSql(\"LOAD MODULE hive WITH ('hive-version' = '3.1.3')\") tEnv.executeSql(\"SHOW MODULES\").print() // +-+ // | module name | // +-+ // | core | // | hive | // +-+ ``` {{< /tab >}} {{< tab \"Python\" >}} ```python table_env = StreamTableEnvironment.create(...) tableenv.executesql(\"LOAD MODULE hive WITH ('hive-version' = '3.1.3')\") tableenv.executesql(\"SHOW MODULES\").print() ``` {{< /tab >}} {{< tab \"SQL CLI\" >}} ```sql Flink SQL> LOAD MODULE hive WITH ('hive-version' = '3.1.3'); [INFO] Load module succeeded! Flink SQL> SHOW MODULES; +-+ | module name | +-+ | core | | hive | +-+ ``` {{< /tab >}} {{< /tabs >}} {{< top >}} The following grammar gives an overview of the available syntax: ```sql LOAD MODULE module_name [WITH ('key1' = 'val1', 'key2' = 'val2', ...)] ``` {{< hint warning >}} `module_name` is a simple identifier. 
It is case-sensitive and should be identical to the module type defined in the module factory, because it is used to perform module discovery. The properties clause `('key1' = 'val1', 'key2' = 'val2', ...)` is a map of key-value pairs (excluding the key `'type'`) that is passed to the discovery service to instantiate the corresponding module. {{< /hint >}}" } ]
{ "category": "App Definition and Development", "file_name": "adr007-use-msw-to-mock-service-requests.md", "project_name": "Backstage", "subcategory": "Application Definition & Image Build" }
[ { "data": "id: adrs-adr007 title: 'ADR007: Use MSW to mock http requests' description: Architecture Decision Record (ADR) log on Use MSW to mock http requests Network request mocking can be a total pain sometimes, in all different types of tests, unit tests to e2e tests always have their own implementation of mocking these requests. There's been traction in the outer community towards using this library to mock network requests by using an express style declaration for routes. react-testing-library suggests using this library instead of mocking fetch directly whether this be in a browser or in node. https://github.com/mswjs/msw Moving forward, we have decided that any `fetch` or `XMLHTTPRequest` that happens, should be mocked by using `msw`. Here is an example: ```ts import { setupWorker, rest } from 'msw'; const worker = setupWorker( rest.get('*/user/:userId', (req, res, ctx) => { return res( ctx.json({ firstName: 'John', lastName: 'Maverick', }), ); }), ); // Start the Mock Service Worker worker.start(); ``` and in a more real life scenario, taken from ```ts beforeEach(() => { server.use( rest.get(`${mockApiOrigin}${mockBasePath}/entities`, (_, res, ctx) => { return res(ctx.json(defaultResponse)); }), ); }); it('should entities from correct endpoint', async () => { const entities = await client.getEntities(); expect(entities).toEqual(defaultResponse); }); ``` A little more code to write Gradually will replace the codebase with `msw`" } ]
{ "category": "App Definition and Development", "file_name": "merges.md", "project_name": "ClickHouse", "subcategory": "Database" }
[ { "data": "slug: /en/operations/system-tables/merges Contains information about merges and part mutations currently in process for tables in the MergeTree family. Columns: `database` (String) The name of the database the table is in. `table` (String) Table name. `elapsed` (Float64) The time elapsed (in seconds) since the merge started. `progress` (Float64) The percentage of completed work from 0 to 1. `num_parts` (UInt64) The number of pieces to be merged. `resultpartname` (String) The name of the part that will be formed as the result of merging. `is_mutation` (UInt8) 1 if this process is a part mutation. `totalsizebytes_compressed` (UInt64) The total size of the compressed data in the merged chunks. `totalsizemarks` (UInt64) The total number of marks in the merged parts. `bytesreaduncompressed` (UInt64) Number of bytes read, uncompressed. `rows_read` (UInt64) Number of rows read. `byteswrittenuncompressed` (UInt64) Number of bytes written, uncompressed. `rows_written` (UInt64) Number of rows written. `memory_usage` (UInt64) Memory consumption of the merge process. `thread_id` (UInt64) Thread ID of the merge process. `merge_type` The type of current merge. Empty if it's an mutation. `merge_algorithm` The algorithm used in current merge. Empty if it's an mutation." } ]
{ "category": "App Definition and Development", "file_name": "add_months.md", "project_name": "StarRocks", "subcategory": "Database" }
[ { "data": "displayed_sidebar: \"English\" Adds a specified number of months to a given date (DATE or DATETIME). The function provides similar functionalities. The day component in the resulting month remains the same as that specified in `date`, unless the resulting month has fewer days than the day component of the given date, in which case the day will be the last day of the resulting month. For example, `select addmonths('2022-01-31', 1);` returns `2022-02-28 00:00:00`. If the resulting month has more days than the day component of the given date, the result has the same day component as `date`. For example, `select addmonths('2022-02-28', 1)` returns `2022-03-28 00:00:00`. Difference with Oracle: In Oracle, if `date` is the last day of the month, then the result is the last day of the resulting month. Returns NULL if an invalid date or a NULL argument is passed in. ```Haskell ADD_MONTH(date, months) ``` `date`: It must be a valid date or datetime expression. `months`: the months you want to add. It must be an integer. A positive integer adds months to `date`. A negative integer subtracts months from `date`. Returns a DATETIME value. If the date does not exist, for example, `2020-02-30`, NULL is returned. If the date is a DATE value, it will be converted into a DATETIME value. ```Plain Text select add_months('2022-01-01', 2); +--+ | add_months('2022-01-01', 2) | +--+ | 2022-03-01 00:00:00 | +--+ select add_months('2022-01-01', -5); ++ | add_months('2022-01-01', -5) | ++ | 2021-08-01 00:00:00 | ++ select add_months('2022-01-31', 1); +--+ | add_months('2022-01-31', 1) | +--+ | 2022-02-28 00:00:00 | +--+ select add_months('2022-01-31 17:01:02', -2); ++ | add_months('2022-01-31 17:01:02', -2) | ++ | 2021-11-30 17:01:02 | ++ select add_months('2022-02-28', 1); +--+ | add_months('2022-02-28', 1) | +--+ | 2022-03-28 00:00:00 | +--+ ```" } ]
{ "category": "App Definition and Development", "file_name": "LICENSE.md", "project_name": "ShardingSphere", "subcategory": "Database" }
[ { "data": "The MIT License (MIT) Copyright (c) 2014 Grav Copyright (c) 2016 MATHIEU CORNIC Copyright (c) 2017 Valere JEANTET Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE." } ]
{ "category": "App Definition and Development", "file_name": "storm-jms.md", "project_name": "Apache Storm", "subcategory": "Streaming & Messaging" }
[ { "data": "title: Storm JMS Integration layout: documentation documentation: true Storm JMS is a generic framework for integrating JMS messaging within the Storm framework. Storm-JMS allows you to inject data into Storm via a generic JMS spout, as well as consume data from Storm via a generic JMS bolt. Both the JMS Spout and JMS Bolt are data agnostic. To use them, you provide a simple Java class that bridges the JMS and Storm APIs and encapsulates and domain-specific logic. The JMS Spout component allows for data published to a JMS topic or queue to be consumed by a Storm topology. A JMS Spout connects to a JMS Destination (topic or queue), and emits Storm \"Tuple\" objects based on the content of the JMS message received. The JMS Bolt component allows for data within a Storm topology to be published to a JMS destination (topic or queue). A JMS Bolt connects to a JMS Destination, and publishes JMS Messages based on the Storm \"Tuple\" objects it receives." } ]
{ "category": "App Definition and Development", "file_name": "2022-09-19-distributed-ddl-reorg.md", "project_name": "TiDB", "subcategory": "Database" }
[ { "data": "Author(s): , Tracking Issue: https://github.com/pingcap/tidb/issues/41208 This is distributed processing of design in the DDL reorg phase. The current design is based on the main logic that only the DDL owner can handle DDL jobs. However, for jobs in the reorg phase, it is expected that all TiDBs can claim subtasks in the reorg phase based on resource usage. At present, TiDB already supports parallel processing of DDL jobs on the owner. However, the resources of a single TiDB are limited. Even if it supports a parallel framework, the execution speed of DDL is relatively limited, and it will compete for resources that affect the daily operations such as TiDB's TPS. DDL Jobs can be divided into the general job and the reorg job. It can also be considered that improving DDL operation performance can be divided into improving the performance of all DDL jobs (including the time consumption of each schema state change, checking all TiDB schema state update success, etc.), and improving the performance of the reorg stage. The current time-consuming and resource-consuming stage is obviously the reorg stage. At present, considering the problem of significantly improving DDL performance and improving TiDB resource utilization, and relatively stable design and development, we will DDL reorg stage for distributed processing. At present, the master branch reorg stage processing logic (that is, no lighting optimization is added), takes an added index as an example. The simple steps that the owner needs to perform in the reorg stage of the added index operation: Split the entire table [startHandle: endHandle] into ranges by region. Each backfill worker scans the data in the corresponding range, then checks the data and writes it to the index. After all backfill workers complete step 2, check if there is still data to process: If there is continued step 2 If not, complete the entire reorg phase and update the relevant meta info. The reorg worker and backfill worker for this scenario are completely decoupled, i.e. the two roles are not related. Backfill workers build the associated worker pool to handle subtasks ( DDL small tasks that a job splits into during the reorg phase). The overall process of this document program is rough as follows: DDL After the owner gets the reorg job, the reorg worker will handle its various state changes until the reorg stage. We split the job into multiple subtasks by data key, and then store the relevant information on the table. After that, regularly check whether all subtasks are processed (this operation is similar to the original logic), and do some other management, such as cancellation. All TiDB backfill workers (regardless of whether TiDB is a DDL owner) will get subtasks to handle. Get the corresponding number of backfill workers from the backfill worker pool, and let them process subtasks in parallel. This operation is similar to the original logic and can be optimized later. Each backfill worker serially gets subtasks, executes them serially until all processing is complete, and then exits. After checking in step 2 that all subtasks have been processed, update the relevant meta info and proceed to the next" }, { "data": "If any subtasks fail, cancel the other subtasks and finally roll back the DDL job. The contents of the existing table structure may be lacking, and a new Metadata needs to be added or defined. Add a new field to the `DDLReorgMeta` structure in the `mysql.tidbddljob` table, for example: ```go type DDLReorgMeta struct { ... 
// Some of the original fields IsDistReorg bool // Determine whether do dist-reorg } ``` Consider that if all subtask information is added to the TiDBddlreorg.reorg field, there may be a lock problem. It is added to the `mysql.tidbbackgroundsubtask` table, the specific structure is as follows: ```sql ++++-+ | Field | Type | Null | Key | ++++-+ | id | bigint(20) | NO | PK | auto | Namespace string | varchar(256) | NO | MUL | | Key string | varchar(256) | NO | MUL | // elekey, eleid, ddljobid, sub_id | ddlphysicaltid | bigint(20) | NO | | | type | int | NO | | // e.g.ddl_addIndex type | exec_id | varchar(256) | YES | | | exec_expired | Timestamp | YES | | // TSO | state | varchar(64) | YES | | | checkpoint | longblob | YES | | | start_time | bigint(20) | YES | | | stateupdatetime | bigint(20) | YES | | | meta | longblob | YES | | ++++-+ ``` Add the following to the BackfillMeta field: ```go type BackfillMeta struct { CurrKey kv.Key StartKey kv.Key EndKey kv.Key EndInclude bool ReorgTp ReorgType ... *JobMeta // parent job meta } ``` Add `mysql.tidbbackgroundsubtaskhistory` table to record completed (including failure status) subtasks. The table structure is the same as tidbbackground_subtask . Considering the number of subtasks, some records of the history table are deleted regularly in the later stage. The general process is simply divided into two parts: Managing reorg jobs is divided into the following two parts. This function is done by the reorg worker on the DDL owner node. Split the reorg job and insert it into subtasks as needed. Check if the reorg job is processing complete (including status such as failure). Process the subtask and update the relevant metadata. After completion, move the subtask to the history table. This function can be processed by all roles and is completed by backfill workers. Regarding step 1.b, the current plan is to reorg worker through timer regular check, consider the completion of subtask synchronization through PD, to actively check. Rules for backfill workers to claim subtasks: The idle backfill worker on TiDB-server will be woken up by a timer to try to preempt the remaining subtasks. Lease mechanism, the current TiDB backfill worker does not update the exec_expired field for a long time (keep-alive process), and other TiDB backfill workers can preempt it. The Owner Specifies the value. At present, the reorg worker will first split the reorg into subtasks, and then use the total number of subtasks to determine whether only native execution is required or all nodes are" }, { "data": "The total number of split tasks is less than minDistTaskCnt, then mark them all as native, so that the node where the owner is located has priority; Otherwise, all nodes preempt the task in the first two ways. Later, it can support more flexible segmentation tasks and assign claim tasks. Subtask claim notification method: Active way: The Owner node notifies the backfill worker on the local machine through chan. The Owner node notifies backfill workers to other nodes by changing the information registered in the PD. Passive mode: All nodes themselves periodically check if there are tasks to handle. Adjust the `backfiller` and `backfillWorker` to update their interfaces and make them more explicit and generic when fetching and processing tasks. 
`backfiller` interfaces: ```go // backfiller existing interfaces: func BackfillDataInTxn(handleRange reorgBackfillTask) (taskCtx backfillTaskContext, errInTxn error) func AddMetricInfo(float64) // backfiller new interfaces: // get batch tasks func GetTasks() ([]*BackfillJob, error){} // update task func UpdateTask(bfJob *BackfillJob) error{} func FinishTask(bfJob *BackfillJob) error{} // get the backfill context func GetCtx() *backfillCtx{} func String() string{} ``` Interfaces that need to be added or modified by `backfillWorker`. ```go // In the current implementation, the result is passed between the reorg worker and the backfill worker using chan, and it runs tasks by calling `run` // In the new framework, two situations need to be adapted // 1. As before, transfer via chan and reorg workers under the same TiDB-server // 2. Added support for transfer through system tables to reorg workers between different TiDB-servers // Consider early compatibility. Implement the two adaptations separately, i.e., use the original `run` function for function 1 and `runTask` for function 2 func (w backfillWorker) runTask(task reorgBackfillTask) (result *backfillResult) {} // updatet reorg substask execid and execlease func (w backfillWorker) updateLease(bfJob BackfillJob) error{} func (w *backfillWorker) releaseLease() {} // return backfiller related info func (w *backfillWorker) String() string {} ``` Add the backfill worker pool like `WorkerPool`(later considered to be unified with the existing WorkerPool). In addition, the above interface will be modified in the second phase of this project to make it more general. In the current scheme, the backfill worker obtains subtasks and the reorg worker checks whether the subtask is completed through regular inspection and processing. Here, we consider combining PD watches for communication. When the network partition or abnormal exit occurs in the TiDB where the current backfill worker is located, the corresponding subtask may not be handled by the worker. In the current scheme, it is tentatively planned to mark whether the executor owner of the current subtask is valid by lease. There are more suitable schemes that can be discussed later. The specific operation of this scheme: When the backfill worker handles a subtask, it will record the current DDLID (may need workertypeworkerid suffix) in the tidbbackgroundsubtask table as the execid, and regularly update the execexpired value and curr_key. Non-DDL owner TiDB encountered this problem: When there is a network problem in the TiDB where the backfill worker who is processing the subtask is located, and another TiDB obtains the current subtask and finds that its execexpired expired (for example, the execexpired + lease value is earlier than now () ), the execid and execexpired values of this subtask are updated, and the subtask is processed from" }, { "data": "DDL Owner TiDB may encounter this problem refer to the following changing owner description. DDL an exception may occur in the TiDB where the owner is located, resulting in the need to switch DDL owner. The reorg worker will check the reorg info to confirm that the reorg job has completed subtasks. If it is not completed, enter the stage of reorg job splitting, and then enter the process of checking the completion of the reorg job. The subsequent process will not be repeated. If completed, enter the process of checking the completion of the reorg job. The follow-up process will not be repeated. 
(Problem: under the new framework, no owner can continue to perform backfill phase tasks). When processing the reorg stage, the process with an error when backfilling is handled as follows: When one of the reorg workers has an error when processing subtask, it changes the state in the tidbbackgroundsubtask table to the failed state and exits the process of processing this subtask. DDL In addition to checking whether all tasks are completed, it will also check whether there is a subtask execution failure (currently considering an error will return ). Move unprocessed subtasks into the TiDBbackgroundsubtask_history table. When there is no subtask to process, the error is passed to the generation logic. This will convert the DDL job to a rollback job according to the original logic. All TiDB b ackfill worker in each task to take subtask, if the half of the execution found that the task does not exist (indicating that half of the reorg task failed to execute, the owner cleaned up its subtask), then exit normally . Follow-up operations refer to the rollback process. When the user executes admin cancel ddl job , the job is marked as canceling as in the original logic. DDL the reorg worker where the owner is located checks this field and finds that it is canceling, the next process is similar to step 3-6 of Failed. Since the subtask may be segmented by each table region, it may cause the `mysql.tidbbackgroundsubtask_history` table is particularly large, so you need to add a regular cleaning function. The first stage can be through subtasks inside row count to calculate the entire DDL job row count. Then the display is the same as the original logic. Subsequent progress can be displayed more humanely, providing results such as percentages, allowing users to better understand the processing of the reorg phase. Update and add some new logs and metrics. Improve and optimize backfill processing subtask scheduling strategy Use more flexible and reasonable subtask segmentation and preemption mechanism Prevent small reorg jobs from being blocked by large reorg jobs , this function should be handled in conjunction with resource management functions The framework is more general, and the current form and interface are more general, but relatively simple, the future will be improved so that it can be used more with DDL reorg as slower background tasks Consider the design of removing DDL owner Remove the reorg worker layers, and each TiDB -server only keeps one DDL worker for schema synchronization and other work." } ]
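To make the lease-based claim rule described in the design above concrete, here is a small Go sketch of the decision a backfill worker could make before taking over a subtask. The `subtask` struct keeps only the `exec_id`/`exec_expired` columns of `mysql.tidb_background_subtask` that the rule needs, `exec_expired` is simplified to a wall-clock time rather than a TSO, and all identifiers and sample values are invented for the example rather than taken from the TiDB source.

```go
package main

import (
	"fmt"
	"time"
)

// subtask models just the columns the claim rule needs.
type subtask struct {
	id          int64
	execID      string    // exec_id: current owner of the subtask, "" if unclaimed
	execExpired time.Time // exec_expired: last keep-alive timestamp from the owner
}

// canClaim reports whether worker `me` may take over the subtask: either it
// has never been claimed (or is already ours), or its current owner has not
// renewed the lease within the lease window.
func canClaim(t subtask, me string, lease time.Duration, now time.Time) bool {
	if t.execID == "" || t.execID == me {
		return true
	}
	return t.execExpired.Add(lease).Before(now)
}

// claim updates the in-memory copy; in TiDB this would instead be a
// transactional UPDATE of exec_id and exec_expired on the system table.
func claim(t *subtask, me string, now time.Time) {
	t.execID = me
	t.execExpired = now
}

func main() {
	now := time.Now()
	lease := 30 * time.Second

	stale := subtask{id: 1, execID: "tidb-1:addidx:3", execExpired: now.Add(-2 * time.Minute)}
	fresh := subtask{id: 2, execID: "tidb-2:addidx:1", execExpired: now.Add(-5 * time.Second)}

	fmt.Println(canClaim(stale, "tidb-3:addidx:7", lease, now)) // true: lease expired
	fmt.Println(canClaim(fresh, "tidb-3:addidx:7", lease, now)) // false: owner still alive
	if canClaim(stale, "tidb-3:addidx:7", lease, now) {
		claim(&stale, "tidb-3:addidx:7", now)
	}
	fmt.Println(stale.execID)
}
```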
{ "category": "App Definition and Development", "file_name": "RELEASENOTES.2.2.0.md", "project_name": "Apache Hadoop", "subcategory": "Database" }
[ { "data": "<! --> These release notes cover new developer and user-facing incompatibilities, important issues, features, and major improvements. | Blocker | Define constraints on Auxiliary Service names. Change ShuffleHandler service name from mapreduce.shuffle to mapreduce\\_shuffle.* WARNING: No release note provided for this change. | Major | Clean up Fair Scheduler configuration loading* WARNING: No release note provided for this change. | Blocker | disable symlinks temporarily* During review of symbolic links, many issues were found related to the impact on semantics of existing APIs such FileSystem#listStatus, FileSystem#globStatus etc. There were also many issues brought up about symbolic links, and the impact on security and functionality of HDFS. All these issues will be addressed in the upcoming release 2.3. Until then the feature is temporarily disabled." } ]
{ "category": "App Definition and Development", "file_name": "kbcli_fault_network_duplicate.md", "project_name": "KubeBlocks by ApeCloud", "subcategory": "Database" }
[ { "data": "title: kbcli fault network duplicate Make pods communicate with other objects to pick up duplicate packets. ``` kbcli fault network duplicate [flags] ``` ``` kbcli fault network partition kbcli fault network partition mycluster-mysql-1 --external-targets=kubeblocks.io kbcli fault network partition mycluster-mysql-1 --target-label=statefulset.kubernetes.io/pod-name=mycluster-mysql-2 // Like the partition command, the target can be specified through --target-label or --external-targets. The pod only has obstacles in communicating with this target. If the target is not specified, all communication will be blocked. kbcli fault network loss --loss=50 kbcli fault network loss mysql-cluster-mysql-2 --loss=50 kbcli fault network corrupt --corrupt=50 kbcli fault network corrupt mysql-cluster-mysql-2 --corrupt=50 kbcli fault network duplicate --duplicate=50 kbcli fault network duplicate mysql-cluster-mysql-2 --duplicate=50 kbcli fault network delay --latency=10s kbcli fault network delay mysql-cluster-mysql-2 --latency=10s kbcli fault network bandwidth mysql-cluster-mysql-2 --rate=1kbps --duration=1m ``` ``` --annotation stringToString Select the pod to inject the fault according to Annotation. (default []) -c, --correlation string Indicates the correlation between the probability of a packet error occurring and whether it occurred the previous time. Value range: [0, 100]. --direction string You can select \"to\"\" or \"from\"\" or \"both\"\". (default \"to\") --dry-run string[=\"unchanged\"] Must be \"client\", or \"server\". If with client strategy, only print the object that would be sent, and no data is actually sent. If with server strategy, submit the server-side request, but no data is persistent. (default \"none\") --duplicate string the probability of a packet being repeated. Value range: [0, 100]. --duration string Supported formats of the duration are: ms / s / m / h. (default \"10s\") -e, --external-target stringArray a network target outside of Kubernetes, which can be an IPv4 address or a domain name, such as \"www.baidu.com\". Only works with direction: to. -h, --help help for duplicate --label stringToString label for pod, such as '\"app.kubernetes.io/component=mysql, statefulset.kubernetes.io/pod-name=mycluster-mysql-0. (default []) --mode string You can select \"one\", \"all\", \"fixed\", \"fixed-percent\", \"random-max-percent\", Specify the experimental mode, that is, which Pods to experiment with. (default \"all\") --node stringArray Inject faults into pods in the specified node. --node-label stringToString label for node, such as '\"kubernetes.io/arch=arm64,kubernetes.io/hostname=minikube-m03,kubernetes.io/os=linux. (default []) --ns-fault stringArray Specifies the namespace into which you want to inject faults. (default [default]) -o, --output format Prints the output in the specified" }, { "data": "Allowed values: JSON and YAML (default yaml) --phase stringArray Specify the pod that injects the fault by the state of the pod. --target-label stringToString label for pod, such as '\"app.kubernetes.io/component=mysql, statefulset.kubernetes.io/pod-name=mycluster-mysql-0\"' (default []) --target-mode string You can select \"one\", \"all\", \"fixed\", \"fixed-percent\", \"random-max-percent\", Specify the experimental mode, that is, which Pods to experiment with. --target-ns-fault stringArray Specifies the namespace into which you want to inject faults. 
--target-value string If you choose mode=fixed or fixed-percent or random-max-percent, you can enter a value to specify the number or percentage of pods you want to inject. --value string If you choose mode=fixed or fixed-percent or random-max-percent, you can enter a value to specify the number or percentage of pods you want to inject. ``` ``` --as string Username to impersonate for the operation. User could be a regular user or a service account in a namespace. --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation. --cache-dir string Default cache directory (default \"$HOME/.kube/cache\") --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --disable-compression If true, opt-out of response compression for all requests to the server --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to the kubeconfig file to use for CLI requests. --match-server-version Require server version to match client version -n, --namespace string If present, the namespace scope for this CLI request --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -s, --server string The address and port of the Kubernetes API server --tls-server-name string Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use ``` - Network chaos." } ]
{ "category": "App Definition and Development", "file_name": "smart-driver.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "The current Yugabyte JDBC driver is built on the Postgres JDBC driver. It works fine with YugabyteDB as YugabyteDB is wire compatible with PostgreSQL. However, YugabyteDB being a distributed database can use some smarts to not only increase performance but also obviate the need of external load balancers like HAProxy, pgpool etc. Elaborating on the key motivations. Each of the available tservers can be connected to for a database connection. However for convenience client applications just use a single url (host port combination specific to one tserver) and make connections to that. This is not ideal and leads to operational complexities. This is because the load is not uniformly distributed across all the peers. Also in case any tserver crashes or is unavailable due to host specific problems, the client application will stop working despite the presence of other healthy servers in the system. External load balancers like HAProxy and pgpool etc can be used to uniformly distribute the client load to multiple servers behind the proxy, however that has two main disadvantages. Firstly, configure and maintain an extra software component which adds to complexity and two, in a true distributed system like YugabyteDB, clusters are configured to scale up and down depending on the load and in that case the external load balancers also would be needed to everytime made aware of the changing list of servers behind them. Most of the times client applications would like to connect to subset of available servers for performance reasons. Like latency sensitive applications can choose to just connect to available servers in a particular datacenter/region. It would be really nice if the driver can understand this requirement and directs traffic to only those servers that are placed in the desired region/datacenter. All the queries and dmls first go to the server to which they are connected to. As part of the execution the backend determines all the tablets which need to be scanned/written to, by talking to the master ( uses cached information also if present ). It also gets, from the master, the location of those tablets and then opens a scan which remotely fetches data from those locations or sends data to those as the case may" }, { "data": "( Always the primary tablet location unless follower reads are configured in which case it may go to a secondary copy as well ). These remote fetches or writes adds quite a bit of latency specially for oltp kind of queries and therefore it would be desirable that for each operation the request from client driver hits a server where most likely the data of interest lies locally. An in-built function called yb_servers will be added in Yugabyte. The purpose of this function is to return one record of information for each tserver present in the cluster. 
<table> <tr> <td><strong>host</strong> </td> <td><strong>port</strong> </td> <td><strong>num_connections</strong> </td> <td><strong>node_type</strong> </td> <td><strong>cloud</strong> </td> <td><strong>region</strong> </td> <td><strong>zone</strong> </td> <td><strong>public_ip</strong> </td> </tr> <tr> <td>internal ip of the tserver </td> <td>database port </td> <td>Number of clients connected (not used now) </td> <td>current possible values are 'primary' or 'read_replica' </td> <td>cloud where the server is hosted </td> <td>region where the server is hosted </td> <td>zone where the server is hosted </td> <td>public_ip of the server, may be different from the internal ip </td> </tr> </table> Connection property: A new property is being added: load-balance It expects true/false as its possible values. \\ In YBClusterAwareDataSource load balancing is true by default. However when using the DriverManager.getConnection() API the 'load-balance' property needs to be set to 'true'. How does it work: The driver transparently fetches the list of all servers when it creates the first connection. After that the driver chooses the least loaded server for subsequent connections. The driver keeps track of the number of connections it has created on each server and hence knows about the least loaded server from it's perspective. Servers list can change with time because of many reasons. Servers can get added/removed from the cluster. Therefore it is essential to refresh the server list frequently. The driver explicitly refreshes this information every time a new connection request comes to it and if the information which it has is more than 5 minutes old. An additional property topology-keys is added to indicate only servers belonging to the locations indicated by the topology keys would be considered for establishing connections It expects a commaseparatedgeolocations_ as it's value(s). For example: topology-keys=cloud1.region1.zone1,cloud1.region1.zone2 NOTE: This feature is still in the design phase. NOTE: This feature is still in the design phase." } ]
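The selection rule described above (restrict candidates by topology keys, then pick the least-loaded server based on the driver's own per-host connection counts) can be sketched independently of JDBC. The Go program below is such a sketch: the `server` struct, the `cloud.region.zone` key format, and the sample numbers are illustrative assumptions, not the driver's actual data structures.

```go
package main

import "fmt"

// server holds the subset of yb_servers output the selection logic needs,
// plus the driver-side connection count for that host.
type server struct {
	host                string
	cloud, region, zone string
	connections         int // connections this driver has opened to the host
}

// pickServer returns the least-loaded server, optionally restricted to a set
// of allowed "cloud.region.zone" topology keys (empty map = no restriction).
func pickServer(servers []server, allowed map[string]bool) *server {
	var best *server
	for i := range servers {
		s := &servers[i]
		key := s.cloud + "." + s.region + "." + s.zone
		if len(allowed) > 0 && !allowed[key] {
			continue
		}
		if best == nil || s.connections < best.connections {
			best = s
		}
	}
	return best
}

func main() {
	servers := []server{
		{"10.0.0.1", "cloud1", "region1", "zone1", 12},
		{"10.0.0.2", "cloud1", "region1", "zone2", 7},
		{"10.0.0.3", "cloud1", "region2", "zone1", 3},
	}

	// No topology restriction: the globally least-loaded node wins.
	fmt.Println(pickServer(servers, nil).host) // 10.0.0.3

	// Restricted to region1: the least-loaded node within region1 wins.
	allowed := map[string]bool{
		"cloud1.region1.zone1": true,
		"cloud1.region1.zone2": true,
	}
	fmt.Println(pickServer(servers, allowed).host) // 10.0.0.2
}
```

Fallback priorities (the `:n` suffixes) would only extend this by retrying with the next priority level when the allowed set yields no candidate.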
{ "category": "App Definition and Development", "file_name": "ysql-dapper.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: C# ORM example application that uses Dapper ORM and YSQL headerTitle: C# ORM example application linkTitle: C# description: C# ORM example application that uses Dapper ORM and the YSQL API. menu: v2.18: identifier: csharp-dapper parent: orm-tutorials weight: 720 type: docs <ul class=\"nav nav-tabs-alt nav-tabs-yb\"> <li> <a href=\"../ysql-entity-framework/\" class=\"nav-link\"> <i class=\"icon-postgres\" aria-hidden=\"true\"></i> Entity Framework ORM </a> </li> <li> <a href=\"../ysql-dapper/\" class=\"nav-link active\"> <i class=\"icon-postgres\" aria-hidden=\"true\"></i> Dapper ORM </a> </li> </ul> The following tutorial implements a REST API server using the ORM. The scenario is that of an e-commerce application where database access is managed using the ORM. The source for the application can be found in the repository. This tutorial assumes that you have: YugabyteDB up and running. Download and install YugabyteDB by following the steps in . or later. ```sh $ git clone https://github.com/YugabyteDB-Samples/orm-examples.git && cd orm-examples/csharp/dapper/DapperORM ``` To modify the database connection settings, change the `DefaultConnection` field in `appsettings.json` file which is in the following format: `Host=$hostName; Port=$dbPort; Username=$dbUser; Password=$dbPassword; Database=$database` | Properties | Description | Default | | : | :- | : | | Host | Database server IP address or DNS name. | 127.0.0.1 | | Port | Database port where it accepts client connections. | 5433 | | Username | The username to connect to the database. | yugabyte | | Password | The password to connect to the database. | | | Database | Database instance in database server. | yugabyte | To change default port for the REST API Server, go to `Properties/launchSettings.json` and change the `applicationUrl` field under the `DapperORM` field. Build the REST API server. ```sh $ dotnet build ``` Run the REST API server ```sh $ dotnet run ``` The REST server runs at `http://localhost:8080` by default. Create 2 users. ```sh $ curl --data '{ \"firstName\" : \"John\", \"lastName\" : \"Smith\", \"email\" : \"[email protected]\" }' \\ -v -X POST -H 'Content-Type:application/json' http://localhost:8080/users ``` ```sh $ curl --data '{ \"firstName\" : \"Tom\", \"lastName\" : \"Stewart\", \"email\" : \"[email protected]\" }' \\ -v -X POST -H 'Content-Type:application/json' http://localhost:8080/users ``` Create 2" }, { "data": "```sh $ curl \\ --data '{ \"productName\": \"Notebook\", \"description\": \"200 page notebook\", \"price\": 7.50 }' \\ -v -X POST -H 'Content-Type:application/json' http://localhost:8080/products ``` ```sh $ curl \\ --data '{ \"productName\": \"Pencil\", \"description\": \"Mechanical pencil\", \"price\": 2.50 }' \\ -v -X POST -H 'Content-Type:application/json' http://localhost:8080/products ``` Verify the `userId` and `productId` from the database using the following YSQL commands from your shell. ```sql yugabyte=# SELECT * FROM users; ``` ```output userid | firstname | lastname | useremail ++--+- 1 | John | Smith | [email protected] 101 | Tom | Stewart | [email protected] (2 rows) ``` ```sql yugabyte=# SELECT * FROM products; ``` ```output productid | description | price | productname +-+-+-- 1 | 200 page notebook | 7.50 | Notebook 101 | Mechanical pencil | 2.50 | Pencil (2 rows) ``` Create 2 orders with products using the `userId` for John. 
```sh $ curl \\ --data '{ \"userId\": \"1\", \"products\": [ { \"productId\": 1, \"units\": 2 } ] }' \\ -v -X POST -H 'Content-Type:application/json' http://localhost:8080/orders ``` ```sh $ curl \\ --data '{ \"userId\": \"1\", \"products\": [ { \"productId\": 1, \"units\": 2 }, { \"productId\": 101, \"units\": 4 } ] }' \\ -v -X POST -H 'Content-Type:application/json' http://localhost:8080/orders ``` ```sql yugabyte=# SELECT count(*) FROM users; ``` ```output count 2 (1 row) ``` ```sql yugabyte=# SELECT count(*) FROM products; ``` ```output count 2 (1 row) ``` ```sql yugabyte=# SELECT count(*) FROM orders; ``` ```output count 2 (1 row) ``` To use the REST API server to verify that the users, products, and orders were created in the `yugabyte` database, enter the following commands. The results are output in JSON format. ```sh $ curl http://localhost:8080/users ``` ```output.json { \"content\": [ { \"userId\": 101, \"firstName\": \"Tom\", \"lastName\": \"Stewart\", \"email\": \"[email protected]\" }, { \"userId\": 1, \"firstName\": \"John\", \"lastName\": \"Smith\", \"email\": \"[email protected]\" } ], ... } ``` ```sh $ curl http://localhost:8080/products ``` ```output.json { \"content\": [ { \"productId\": 101, \"productName\": \"Pencil\", \"description\": \"Mechanical pencil\", \"price\": 2.5 }, { \"productId\": 1, \"productName\": \"Notebook\", \"description\": \"200 page notebook\", \"price\": 7.5 } ], ... } ``` ```sh $ curl http://localhost:8080/orders ``` ```output.json { \"content\": [ { \"orderId\":\"2692e1e9-0bbd-40e8-bf51-4fbcc4e9fea2\", \"orderTime\":\"2022-02-24T02:32:52.60555\", \"orderTotal\":15.00, \"userId\":1, \"users\":null, \"products\":null }, { \"orderId\":\"f7343f22-7dfc-4a18-b4d3-9fcd17161518\", \"orderTime\":\"2022-02-24T02:33:06.832663\", \"orderTotal\":25.00, \"userId\":1, \"users\":null, \"products\":null } ] } ```" } ]
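The tutorial above exercises the sample service only through curl and SQL; for readers who want to see what a Dapper call itself looks like, here is a minimal sketch of querying the users table with Dapper over Npgsql. The `User` record, the column aliases, and the connection string are illustrative assumptions and are not taken from the sample project's source (the Dapper and Npgsql NuGet packages are assumed to be referenced).

```csharp
using System;
using System.Collections.Generic;
using Dapper;
using Npgsql;

public record User(long UserId, string FirstName, string LastName, string Email);

public static class UserQueries
{
    // Dapper adds Query<T>/Execute extension methods on IDbConnection;
    // Query<T> is buffered by default, so the result survives disposing the connection.
    public static IEnumerable<User> GetUsers(string connectionString)
    {
        using var conn = new NpgsqlConnection(connectionString);
        return conn.Query<User>(
            "SELECT user_id AS UserId, first_name AS FirstName, " +
            "last_name AS LastName, user_email AS Email FROM users");
    }

    public static void Main()
    {
        var users = GetUsers("Host=127.0.0.1;Port=5433;Username=yugabyte;Password=;Database=yugabyte");
        foreach (var u in users) Console.WriteLine($"{u.UserId}: {u.FirstName} {u.LastName}");
    }
}
```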
{ "category": "App Definition and Development", "file_name": "sampling.md", "project_name": "MongoDB", "subcategory": "Database" }
[ { "data": "TCMalloc uses sampling to get representative data on memory usage and allocation. We chose to sample an allocation every N bytes where N is a random value using with a mean set by the profile sample rate using . By default this is every 2MiB, and can be overridden in code. Note that this is an statistical expectation and it's not the case that every 2 MiB block of memory has exactly one sampled byte. We'd like to sample each byte in memory with a uniform probability. The granularity there is too fine; for better fast-path performance, we can use a simple counter and sample an allocation if 1 or more bytes in it are sampled instead. This causes a slight statistical skew; the larger the allocation, the more likely it is that more than one byte of it should be sampled, which we correct for, as well as the fact that requested and allocated size may be different, in the weighting process. We defer a . determines when we should sample an allocation; if we should, it returns a weight that indicates how many bytes that the sampled allocation represents in expectation. We do some additional processing around that allocation using to record the call stack, alignment, request size, and allocation size. Then we go through all the active samplers using and tell them about the allocation. We also tell the span that we're sampling it. We can do this because we do sampling at tcmalloc page sizes, so each sample corresponds to a particular page in the pagemap. For small allocations, we make two allocations: the returned allocation (which uses an entire tcmalloc page, not shared with any other allocations) and a proxy allocation in a non-sampled span (the proxy object is used when computing fragmentation profiles). When allocations are sampled, the virtual addresses associated with the allocation are . This, combined with the whole-page behavior above, means that *every allocation gets its own native (OS) page(s)* shared with no other allocations. Each sampled allocation is tagged. Using this, we can quickly test whether a particular allocation might be a sample. When we are done with the sampled span we release it using . To handle heap and fragmentation profiling we just need to traverse the list of sampled objects and compute either their degree of fragmentation (with the proxy object), or the amount of heap they consume. Each allocation gets additional metadata associated with it when it is exposed in the heap profile. In the preparation for writing the heap profile, probes the operating system for whether or not the underlying memory pages in the sampled allocation are swapped or not resident at all (this can happen if they've never been written to). We use to obtain this information for each underlying OS page. The OS is more aggressive at swapping out pages for sampled allocations than the statistics might otherwise indicate. Sampled allocations do not share memory pages (either huge or otherwise) with any other allocations, so a sampled rarely-accessed allocation becomes eligible for reclaim more readily than an allocation that has not been sampled (which might share pages with other allocations that are heavily" }, { "data": "This design is a fortunate consequence of other aspects of sampling; we want to identify specific allocations as being more readily swapped independent of our memory allocation behavior. More information is available via pageflags, but these require `root` to access `/proc/kpageflags`. To make this information available to tcmalloc, would need to be merged. 
Allocation profiling reports a list of sampled allocations during a length of time. We start an allocation profile using , then wait until time has elapsed, then call `Stop` on the token. and report the profile. While the allocation sampler is active it is added to the list of samplers for allocations and removed from the list when it is claimed. Lifetime profiling reports a list of object lifetimes as pairs of allocation and deallocation records. Profiling is initiated by calling . Profiling continues until `Stop` is invoked on the token. Lifetimes are only reported for objects where allocation and deallocation are observed while profiling is active. A description of the sampling based lifetime profiler can be found in Section 4 of . We have one parameter, sampling period, that we denote with $$T$$ throughout. Ideally we'd like to sample each byte of allocated memory with a constant probability $$p = 1/T$$. The distance between each successive pair of sampled bytes is a geometric distribution on $$\\mathbb N$$; the actual code takes the ceiling of an exponentially-distributed variable with parameter $$\\lambda = 1/T$$. Whenever we make an allocation, we decrement the requested size plus one byte (we concentrate more on this in the next section; we can treat this as a decrement of the allocated size for now) from a counter initialized to one realization of this random variable. If all allocations were one byte wide, this would work perfectly. Each sampled byte would represent $$T$$ bytes in the total, and each byte of memory would be uniformly likely to be sampled. Unfortunately, allocations are typically more than one byte wide, so we will need to compensate. Take an allocation that has been chosen to be sampled (of size $$k$$). The byte marked `*` is the byte that decreases the counter to zero; the counter starts at $$f$$ before this allocation. $$ \\underbrace{ \\boxed{\\phantom{*}} \\boxed{\\phantom{*}} \\boxed{\\phantom{*}} \\boxed{\\phantom{*}} \\boxed{\\phantom{*}} \\boxed{\\phantom{*}} \\boxed{*} }_\\text{$f$} \\underbrace{ \\boxed{\\phantom{*}} \\boxed{\\phantom{*}} \\boxed{\\phantom{*}} \\boxed{\\phantom{*}} \\boxed{\\phantom{*}} }_\\text{$k-f$} $$ The sampling weight $$W$$ (n.b. not `allocation count estimate`, which is reported as `weight` in `MallocHook::SampledAlloc`) of this allocation is the number of bytes represented by this sample. The byte marked `*` contributes $$T$$ bytes, as that is the average sampling rate; the bytes before the sampled byte were specifically not sampled and don't contribute to the sampling weight. The $$k-f$$ bytes after `*` have some probability of being sampled. Specifically, each byte continues to have a $$1/T$$ probability of being sampled, which means that a remaining $$X$$ bytes would be sampled if we had continued our technique, where $$ X \\sim \\mathrm{Pois}\\left(\\frac{k-f}{T}\\right), $$ which follows by applying the definition of the Poisson distribution. So each sampled allocation has a weight of $$ W = T + TX. $$ Instead of realizing an instantiation of $$X$$, which is computationally cumbersome, we simplify in this case by taking $$X$$ to be its expectation: $$ W = T + T\\hat X = T + T \\left(\\frac{k - f}{T}\\right) = T + k -" }, { "data": "$$ Now let's consider our estimate on the total memory usage $$ M = \\sumi Wi $$ where the sum ranges over all sampled allocations. 
In either the world where we did not make the simplifying assumption or in the limit where all allocations are one byte, there is no $$k-f$$ part to worry about; the variance is just that of the underlying Poisson process, which is $$\\sigma^2 = \\lambda = TM.$$ In the other direction with one gigantic allocation $$k = M \\gg T$$, the variance is just on a single realization of our geometric random variable representing the distance between sampled allocations, which is $$\\sigma^2 = \\frac{1-p}{p^2} = \\frac{1 - \\frac1T}{\\frac{1}{T^2}} = T^2 - T.$$ Thus, depending on allocation pattern, the variance for our estimate on $$M$$ varies between $$T^2 - T$$ and $$TM$$. To keep the fast path fast, all of the above logic works on requested size plus one byte, not actual allocation sizes. For larger allocations (as of the time of writing, greater than 262144 bytes), these two values differ by a byte; for smaller allocations, allocations are into a set of discrete size classes for optimal cache performance. For the smallest allocation, zero bytes, we run into an issue: we actually do allocate memory (we round it up to the smallest class size), so we need to ensure that there is some chance of sampling even zero byte allocations. We increment all requested sizes by one in order to deal with this and we come up with the final term of \"requested plus one.\" When we choose an allocation to be sampled, we report an allocation estimate, which is called `weight` in `MallocHook::SampledAlloc`; this variable means \"how many allocations this sample represents.\" We take the sampling weight $$W$$ from the previous section and divide by the actual requested size plus one (not the allocated size). Thus the estimate on the number of bytes a sampled allocation represents is $$\\frac{a}{r+1} W$$ where $$r$$ is the requested size, $$a$$ is the actual allocated size, and $$W$$ is the allocation weight from the previous section. For an intuitive sense of how this works, suppose we only have 1-byte requested allocations (which are currently rounded up to 8-byte actual allocations). Each allocation decrements the distance counter from the previous section by 2 bytes ($$r+1$$) instead of the true value of 8 bytes. When an allocation is chosen to be sampled, we multiply by the same unskewing ratio (4, which is the allocated size, 8, over the requested size plus one, 2). Essentially, what we are doing here is actually sampling slightly less than the requested sampling rate, at a variable rate depending on the requested-to-allocated ratio. This will increase the variance on allocations geometrically far from the next size class, but does not change the underlying expectation of the distribution; the memorylessness nature of the distribution means that no allocation pattern can result in over- or under- reporting of allocations that differ significantly in requested-to-allocated. In the extreme case (all allocations are much smaller than the smallest size class), this approach becomes identical to a \"per-allocation\" sampling strategy. A per-allocation sampling strategy results in a higher variance for larger allocations; here, we note that our actual distribution of size classes covers any larger allocations and the variance term can only increase by a small amount." } ]
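To make the algebra above easier to check, here is a small standalone C++ sketch (not TCMalloc's actual implementation; the class and variable names are invented for illustration) of the bookkeeping the text describes: draw the exponential distance between sampled bytes, decrement it by the requested size plus one on each allocation, and on a hit report the sampling weight W = T + k - f scaled by a/(r+1).

```cpp
// Illustrative sketch only; follows the notation of the text, not TCMalloc's code.
#include <cmath>
#include <cstdint>
#include <random>

class Sampler {
 public:
  explicit Sampler(double period) : period_(period), rng_(std::random_device{}()) {
    bytes_until_sample_ = NextInterval();
  }

  // Returns the estimated number of bytes this allocation represents
  // (0 if the allocation is not sampled). `requested` is the requested size,
  // `allocated` the actual size-class size.
  double RecordAllocation(int64_t requested, int64_t allocated) {
    const int64_t k = requested + 1;        // "requested plus one" bytes
    const int64_t f = bytes_until_sample_;  // counter value before this allocation
    bytes_until_sample_ -= k;
    if (bytes_until_sample_ > 0) return 0.0;  // no sampled byte fell in this allocation

    bytes_until_sample_ = NextInterval();     // re-arm for the next sampled byte
    const double W = period_ + static_cast<double>(k - f);  // W = T + k - f
    return static_cast<double>(allocated) / k * W;          // unskew: a/(r+1) * W
  }

 private:
  int64_t NextInterval() {
    // Ceiling of an exponential with mean `period_` (geometric on positive ints).
    std::exponential_distribution<double> d(1.0 / period_);
    return static_cast<int64_t>(std::ceil(d(rng_)));
  }

  double period_;
  std::mt19937_64 rng_;
  int64_t bytes_until_sample_;
};
```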
{ "category": "App Definition and Development", "file_name": "README.md", "project_name": "openGauss", "subcategory": "Database" }
[ { "data": "This module contains xgboost, gdbt, prophet, agglomerative, This module is compatible with openGauss, postgresql, and greenplum. 1) if you use facebook prophet ``` pip install pystan pip install holidays==0.9.8 pip install fbprophet==0.4 ``` 2) if you use xgboost ``` pip install pandas pip install scikit-learn pip install xgboost ``` ``` cd madlib_modules cp -r * YOURMADLIBSOURCE_CODE/src/ports/postgres/modules ``` THEN, add following to `src/config/Modules.yml` to register those modules. ``` name: agglomerative_clustering depends: ['utilities'] name: xgboost_gs depends: ['utilities'] name: facebook_prophet depends: ['utilities'] name: gbdt depends: ['utilities', 'recursive_partitioning'] ``` Next, compile and your MADlib as usual. Finally, run madpack to install. sion, i.e. a new snapshot. In addition to compact data representation, the primary goal of DB4AI Snapshots is high read performance, i.e. when used in highly repetitive and concurrent read operations for serving as training data sets for concurrent training of multiple ML models. As a secondary performance goal, DB4AI Snapshots provides efficient data manipulation for creating and manipulation huge volumes of versioned relational data. In addition, DB4AI Snapshots maintains a full documentation of the origin of any dataset, providing lineage and provenance for data, e.g. to be used in the context of reproducible training of ML models. In addition, DB4AI Snapshots facilitates automation, i.e. when applying repetitive transformation steps in data cleansing, or for automatically updating an existing training dataset with new data. DB4AI Snapshots is automatically installed in every new database instance of openGauss. Therefore the CREATE DATABASE procedure creates the database schema db4ai within the new database and populates it with objects required for managing snapshot data. After successful database creation, any user may start exploring the snapshot functionality. No additional privileges are required. Set snapshot mode to compact storage model CSS (Computed SnapShot mode). SET db4aisnapshotmode = CSS; Create a snapshot '[email protected]' from existing data, where stored in table 'test_data'. The SQL statement may use arbitrary joins and mappings for defining the snapshot. CREATE SNAPSHOT m0 AS SELECT a1, a3, a6, a8, a2, pk, a1 b9, a7, a5 FROM test_data; Create snapshot '[email protected]' from snapshot '[email protected]' by applying arbitrary DML and DDL statements. The new version number indicates a snapshot schema revision by means of at least one ALTER statement. CREATE SNAPSHOT m0 FROM @1.0.0 USING ( UPDATE SNAPSHOT SET a1 = 5 WHERE pk % 10 = 0; ALTER SNAPSHOT ADD \" u+ \" INTEGER, ADD \"x <> y\"INT DEFAULT 2, ADD t CHAR(10) DEFAULT '', DROP a2, DROP COLUMN IF EXISTS b9, DROP COLUMN IF EXISTS b10; UPDATE SNAPSHOT SET \"x <> y\" = 8 WHERE pk < 100; ALTER SNAPSHOT DROP \" u+ \", DROP IF EXISTS \" x+ \"; DELETE FROM SNAPSHOT WHERE pk = 3 ); Create snapshot '[email protected]' from snapshot '[email protected]' by UPDATE while using a reference to another table. The new version number indicates an update operation (minor data patch). This example uses an AS clause for introducing 'i' as the snapshot's custom correlation name for joining with tables during the UPDATE operation. 
CREATE SNAPSHOT m0 FROM @2.0.0 USING ( UPDATE SNAPSHOT AS i SET a5 = o.a2 FROM test_data o WHERE i.pk = o.pk AND o.a3 % 8 = 0 ); Create snapshot '[email protected]' from snapshot '[email protected]' by DELETE while using a reference to another table. The new version number indicates a data" }, { "data": "This example uses the snapshot's default correlation name 'SNAPSHOT' for joining with another table. CREATE SNAPSHOT m0 FROM @2.0.1 USING ( DELETE FROM SNAPSHOT USING test_data o WHERE SNAPSHOT.pk = o.pk AND o.A7 % 2 = 0 ); Create snapshot '[email protected]' from snapshot '[email protected]' by inserting new data. The new version number indicates another data revision. CREATE SNAPSHOT m0 FROM @2.1.0 USING ( INSERT INTO SNAPSHOT SELECT a1, a3, a6, a8, a2, pk+1000 pk, a7, a5, a4 FROM test_data WHERE pk % 10 = 4 ); The SQL syntax was extended with the new @ operator in relation names, allowing the user to specify a snapshot with version throughout SQL. Internally, snapshots are stored as views, where the actual name is generated according to GUC parameters on arbitrary level, e.g. on database level using the current setting of the version delimiter: -- DEFAULT db4aisnapshotversion_delimiter IS '@' ALTER DATABASE <name> SET db4aisnapshotversion_delimiter = '#'; Similarly the version separator can be changed by the user: -- DEFAULT db4aisnapshotversion_separator IS . ALTER DATABASE <name> SET db4aisnapshotversionseparator = ; Independently from the GUC parameter settings mentioned above, any snapshot version can be accessed: -- standard version string @schema.revision.patch: SELECT * FROM [email protected]; -- user-defined version strings: SELECT * FROM accounting.invoice@2021; SELECT * FROM user.cars@cleaned; -- quoted identifier for blanks, keywords, special characters, etc.: SELECT * FROM user.cars@\"rev 1.1\"; -- or string literal: SELECT * FROM user.cars@'rev 1.1'; Alternative, using internal name (depends on GUC settings): -- With internal view name, using default GUC settings SELECT * FROM public.\"[email protected]\"; -- With internal view name, using custom GUC settings, as above SELECT * FROM public.data#123; All members of role PUBLIC may use DB4AI Snapshots. The current version of DB4AI Snapshots is tested in openGauss SQL compatibility modes A, B and C. DB4AI Snapshots uses standard SQL for implementing its functionality. None. DB4AI Snapshots exposes several configuration parameters, via the system's global unified configuration (GUC) management. Configuration parameters may be set on the scope of functions (CREATE FUNCTION), transactions (SET LOCAL), sessions (SET), user (ALTER USER), database (ALTER DATABASE), or on system-wide scope (postgresql.conf). SET [SESSION | LOCAL] configuration_parameter { TO | = } { value | 'value' } CREATE FUNCTION <..> SET configuration_parameter { TO | = } { value | 'value' } ALTER DATABASE name SET configuration_parameter { TO | = } { value | 'value' } ALTER USER name [ IN DATABASE databasename ] SET configurationparameter { TO | = } { value | 'value' } The following snapshot configuration parameters are currently supported: This snapshot configuration parameter allows to switch between materialized snapshot (MSS) mode, where every new snapshot is created as compressed but fully materialized copy of its parent's data, or computed snapshot (CSS) mode. In CSS mode, the system attempts to exploit redundancies among dependent snapshot versions for minimizing storage requirements. 
The setting of db4ai_snapshot_mode may be adjusted at any time and it will have effect on subsequent snapshot operations within the scope of the new setting. Whenever db4ai_snapshot_mode is not set in the current scope, it defaults to MSS. SET db4aisnapshotmode = CSS; This snapshot configuration parameter controls the character that delimits the snapshot version postfix within snapshot names. In consequence, the character used as db4ai_snapshot_version_delimiter cannot be used in snapshot names, neither in the snapshot name prefix, nor in the snapshot version" }, { "data": "Also, the setting of db4ai_snapshot_version_delimiter must be distinct from db4ai_snapshot_version_separator. Whenever db4ai_snapshot_version_delimiter is not set in the current scope, it defaults to the symbol '@' (At-sign). Note: Snapshots created with different settings of db4ai_snapshot_version_delimiter are not compatible among each other. Hence, it is advisable to ensure the setting is stable, i.e. by setting it permanently, e.g. on database scope. ALTER DATABASE name SET db4aisnapshotversion_delimiter = '#'; This snapshot configuration parameter controls the character that separates the snapshot version within snapshot names. In consequence, db4ai_snapshot_version_separator must not be set to any character representing a digit [0-9]. Also, the setting of db4ai_snapshot_version_separator must be distinct from db4ai_snapshot_version_delimiter. Whenever db4ai_snapshot_version_separator is not set in the current scope, it defaults to punctuation mark '.' (period). Note: Snapshots created with different settings of db4ai_snapshot_version_separator do not support automatic version number generation among each other. Hence, it is advisable to ensure the setting is stable, i.e. by setting it permanently, e.g. on database scope. ALTER DATABASE name SET db4aisnapshotversionseparator = ''; Independently from the GUC parameter settings mentioned above, any snapshot version can be accessed: SELECT FROM [,] <snapshotqualifiedname> @ <vconst | ident | sconst> [WHERE ] [GROUP BY ] [HAVING ] [ORDER BY ]; Alternative, using standard version string as internal name (depends on GUC settings): SELECT FROM [,] <snapshotqualifiedname> INTEGER <db4aisnapshotversion_delimiter> INTEGER <db4aisnapshotversion_delimiter> INTEGER [WHERE ] [GROUP BY ] [HAVING ] [ORDER BY ]; Alternative, using user-defined version string as internal name (depends on GUC settings): SELECT FROM [,] <snapshotqualifiedname> <db4aisnapshotversiondelimiter> <snapshotversion_string> [WHERE ] [GROUP BY ] [HAVING ] [ORDER BY ]; If any component of <snapshotqualifiedname>, <db4aisnapshotversiondelimiter>, <db4aisnapshotversionseparator>, or <snapshotversionstring> should contain special characters, then quoting of the snapshot name is required. CREATE SNAPSHOT <qualified_name> [@ <version | ident | sconst>] [COMMENT IS <sconst>} AS <SelectStmt>; The CREATE SNAPSHOT AS statement is invoked for creating a new snapshot. A caller provides the qualified_name for the snapshot to be created. The \\<SelectStmt\\> defines the content of the new snapshot in SQL. The optional @ operator allows to assign a custom version number or string to the new snapshot. The default version number is @ 1.0.0. A snapshot may be annotated using the optional COMMENT IS clause. 
Example: CREATE SNAPSHOT public.cars AS SELECT id, make, price, modified FROM cars_table; The CREATE SNAPSHOT AS statement will create the snapshot '[email protected]' by selecting some columns of all the tuples in relation cars_table, which exists in the operational data store. The created snapshot's name 'public.cars' is automatically extended with the suffix '@1.0.0' to the full snapshot name '[email protected]', thereby creating a unique, versioned identifier for the snapshot. The DB4AI module of openGauss stores metadata associated with snapshots in a DB4AI catalog table db4ai.snapshot. The catalog exposes various metadata about the snapshot, particularly noteworthy is the field 'snapshot_definition' that provides documentation how the snapshot was generated. The DB4AI catalog serves for managing the life cycle of snapshots and allows exploring available snapshots in the system. In summary, an invocation of the CREATE SNAPSHOT AS statement will create a corresponding entry in the DB4AI catalog, with a unique snapshot name and documentation of the snapshot's lineage. The new snapshot is in state 'published'. Initial snapshots serve as true and reusable copy of operational data, and as a starting point for subsequent data curation, therefore initial snapshots are already" }, { "data": "In addition, the system creates a view with the published snapshot's name, with grantable read-only privileges for the current user. The current user may access the snapshot, using arbitrary SQL statements against this view, or grant read-access privileges to other user for sharing the new snapshot for collaboration. Published snapshots may be used for model training, by using the new snapshot name as input parameter to the db4ai.train function of the DB4AI model warehouse. Other users may discover new snapshots by browsing the DB4AI catalog, and if corresponding read access privileges on the snapshot view are granted by the snapshot's creator, collaborative model training using this snapshot as training data can commence. CREATE SNAPSHOT <qualified_name> [@ <version | ident | sconst>] FROM @ <version | ident | sconst> [COMMENT IS <sconst>} USING ( { INSERT [INTO SNAPSHOT] | UPDATE [SNAPSHOT] [AS <alias>] SET [FROM ] [WHERE ] | DELETE [FROM SNAPSHOT] [AS <alias>] [USING ] [WHERE ] | ALTER [SNAPSHOT] { ADD | DROP } [, ] } [; ] ); The CREATE SNAPSHOT FROM statement serves for creating a modified and immutable snapshot based on an existing snapshot. The parent snapshot is specified by the qualified_name and the version provided in the FROM clause. The new snapshot is created within the parent's schema and it also inherits the prefix of the parent's name, but without the parent's version number. The statements listed in the USING clause define how the parent snapshot shall be modified by means of a batch of SQL DDL and DML statements, i.e. ALTER, INSERT, UPDATE, and DELETE. 
Examples: CREATE SNAPSHOT public.cars FROM @1.0.0 USING ( ALTER ADD year int, DROP make; INSERT SELECT * FROM carstable WHERE modified=CURRENTDATE; UPDATE SET year=in.year FROM cars_table in WHERE SNAPSHOT.id=in.id; DELETE WHERE modified<CURRENT_DATE-30 ); -- Example with 'short SQL' notation CREATE SNAPSHOT public.cars FROM @1.0.0 USING ( ALTER SNAPSHOT ADD COLUMN year int, DROP COLUMN make; INSERT INTO SNAPSHOT SELECT * FROM carstable WHERE modified=CURRENTDATE; UPDATE SNAPSHOT SET year=in.year FROM cars_table in WHERE SNAPHOT.id=in.id; DELETE FROM SNAPSHOT WHERE modified<CURRENT_DATE-30 }; -- Example with standard SQL syntax In this example, the new snapshot version starts with the current state of snapshot '[email protected]' and adds a new column 'year' to the 'cars' snapshot, while dropping column 'make' that has become irrelevant for the user. This first example uses the short SQL notation, where the individual statements are provided by the user without explicitly stating the snapshot's correlation name. In addition to this syntax, openGauss also accepts standard SQL statements (second example) which tend to be slightly more verbose. Note that both variants allow the introduction of custom correlation names in UPDATE FROM and DELETE USING statements with the AS clause, e.g. `UPDATE AS s [...] WHERE s.id=in.id);` or `UPDATE SNAPSHOT AS s [...] WHERE s.id=in.id);` in the example above. The INSERT operation shows an example for pulling fresh data from the operational data store into the new snapshot. The UPDATE operation exemplifies populating the newly added column 'year' with data coming from the operational data store, and finally the DELETE operation demonstrates how to remove obsolete data from a snapshot. The name of the resulting snapshot of this invocation is '[email protected]'. Similar as in CREATE SNAPSHOT AS, the user may override the default version numbering scheme that generates version number" }, { "data": "The optional COMMENT IS clause allows the user to associate a descriptive textual 'comment' with the unit of work corresponding to this invocation of CREATE SNAPHOT FROM, for improving collaboration and documentation, as well as for change tracking purposes. Since all snapshots are immutable by definition, CREATE SNAPSHOT FROM creates a separate snapshot '[email protected]', initially as a logical copy of the parent snapshot '[email protected]', and applies changes from the USING clause, which corresponds to an integral unit of work in data curation. Similar to a SQL script, the batch of operations is executed atomically and consecutively on the logical copy of the parent snapshot. The parent snapshot remains immutable and is completely unaffected by these changes. Additionally, the system automatically records all applied changes in the DB4AI snapshot catalog, such that the catalog contains an accurate, complete, and consistent documentation of all changes applied to any snapshot. The catalog also stores the origin and lineage of the created snapshot as a reference to its parent, making provenance of data fully traceable and, as demonstrated later, serves as a repository of data curation operations for supporting automation in snapshot generation. The operations themselves allow data scientists to remove columns from the parent snapshot, but also to add and populate new ones, e.g. for the purpose of data annotation. By INSERT, rows may be freely added, e.g. from the operational data source or from other snapshots. 
Inaccurate or irrelevant data can be deleted, as part of the data cleansing process, regardless of whether the data comes from an immutable parent snapshot or directly from the operational data store. Finally, UPDATE statements allow correction of inaccurate or corrupt data, serve for data imputation of missing data and allow normalization of numeric values to a common scale. In summary, the CREATE SNAPSHOT FROM statement was designed for supporting the full spectrum of recurring tasks in data curation: Data cleansing: Remove or correct irrelevant, inaccurate, or corrupt data Data Imputation: Fill missing data Labeling & Annotation: add immutable columns with computed values Data normalization: Update existing columns to a common scale Permutation: Support reordering of data for iterative model training Indexing: Support random access for model training Invoking CREATE SNAPSHOT FROM statement allows multiple users to collaborate concurrently in the process of data curation, where each user may break data curation tasks into a set of CREATE SNAPSHOT FROM operations, to be executed in atomic batches. This form of collaboration is similar to software engineers collaborating on a common code repository, but here the concept is extended to include code and data. One invocation of CREATE SNAPSHOT FROM corresponds to a commit operation in a git repository. In summary, an invocation of the CREATE SNAPSHOT FROM statement will create a corresponding entry in the DB4AI catalog, with a unique snapshot name and documentation of the snapshot's lineage. The new snapshot remains in state 'unpublished', potentially awaiting further data curation. In addition, the system creates a view with the created snapshot's name, with grantable read-only privileges for the current user. Concurrent calls to CREATE SNAPSHOT FROM are permissive and result in separate new versions originating from the same parent (branches). The current user may access the snapshot using arbitrary, read-only SQL statements against this view, or grant read-access privileges to other user, for sharing the created snapshot and enabling collaboration in data curation. Unpublished snapshots may not participate in model" }, { "data": "Yet, other users may discover unpublished snapshots by browsing the DB4AI catalog, and if corresponding read access privileges on the snapshot view are granted by the snapshot's creator, collaborative data curation using this snapshot can commence. SAMPLE SNAPSHOT <qualified_name> @ <version | ident | sconst> [STRATIFY BY attr_list] { AS <label> AT RATIO <num> [COMMENT IS <comment>] } [, ] The SAMPLE SNAPSHOT statement is used to sample data from a given snapshot (original snapshot) into one or more descendant, but independent snapshots (branches), satisfying a condition given under the parameter 'ratio'. Example: SAMPLE SNAPSHOT [email protected] STRATIFY BY color AS _train AT RATIO .8, AS _test AT RATIO .2; This invocation of SAMPLE SNAPSHOT creates two snapshots from the snapshot '[email protected]', one designated for ML model training purposes: '[email protected]' and the other for ML model testing: '[email protected]'. Note that descendant snapshots inherit the parent's schema, name prefix and version suffix, while each sample definition provides a name infix for making descendant snapshot names unique. The AT RATIO clause specifies the ratio of tuples qualifying for the resulting snapshots, namely 80% for training and 20% for testing. 
The STRATIFY BY clause specifies that the fraction of records for each car color (white, black, red) is the same in all three participating snapshots. PUBLISH SNAPSHOT <qualified_name> @ <version | ident | sconst>; Whenever a snapshot is created with the CREATE SNAPSHOT FROM statement, it is initially unavailable for ML model training. Such snapshots allow users to collaboratively apply further changes in manageable units of work, for facilitating cooperation in data curation. A snapshot is finalized by publishing it via the PUBLISH SNAPSHOT statement. Published snapshots may be used for model training, by using the new snapshot name as input parameter to the db4ai.train function of the DB4AI model warehouse. Other users may discover new snapshots by browsing the DB4AI catalog, and if corresponding read access privileges on the snapshot view are granted by the snapshot's creator, collaborative model training using this snapshot as training data can commence. Example: PUBLISH SNAPSHOT [email protected]; Above is an exemplary invocation, publishing snapshot '[email protected]'. ARCHIVE SNAPSHOT <qualified_name> @ <version | ident | sconst>; Archiving changes the state of any snapshot to 'archived', while the snapshot remains immutable and cannot participate in CREATE SNAPSHOT FROM or db4ai.train operations. Archived snapshots may be purged, permanently deleting their data and recovering occupied storage space, or they may be reactivated by invoking PUBLISH SNAPSHOT an archived snapshot. Example: ARCHIVE SNAPSHOT [email protected]; The example above archives snapshot '[email protected]' that was previously in state 'published' or 'unpublished'. PURGE SNAPSHOT <qualified_name> @ <version | ident | sconst>; The PURGE SNAPSHOT statement is used to permanently delete all data associated with a snapshot from the system. A prerequisite to purging is that the snapshot is not referenced by any existing trained model in the DB4AI model warehouse. Snapshots still referenced by trained models cannot be purged. Purging snapshots without existing descendant snapshots, removes them completely and occupied storage space is recovered. If descendant snapshots exist, the purged snapshot will be merged into adjacent snapshots, such that no information on lineage is lost, but storage efficiency is improved. In any case, the purged snapshot's name becomes invalid and is removed from the system. Example: PURGE SNAPSHOT [email protected]; The example above recovers storage space occupied by '[email protected]' by removing the snapshot completely." } ]
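As a way of tying the statements above together, here is a sketch of one possible end-to-end snapshot lifecycle. It assumes the cars_table used in the earlier examples exists, that the default version numbering applies (the ALTER inside CREATE SNAPSHOT FROM produces a schema revision, hence @2.0.0), and that sampled branches are explicitly published before training; the db4ai.train call itself is omitted because its parameters are not covered here.

```sql
-- Sketch only: one possible lifecycle using the statements described above.
SET db4ai_snapshot_mode = CSS;

-- 1. Initial snapshot of operational data  ->  [email protected] (published)
CREATE SNAPSHOT public.cars
    COMMENT IS 'initial copy of operational data'
    AS SELECT id, make, price, modified FROM cars_table;

-- 2. One unit of data curation  ->  [email protected] (unpublished, schema revision)
CREATE SNAPSHOT public.cars FROM @1.0.0
    COMMENT IS 'add year column, drop rows without a price'
    USING (
        ALTER SNAPSHOT ADD COLUMN year int;
        DELETE FROM SNAPSHOT WHERE price IS NULL
    );

-- 3. Split the curated snapshot into train/test branches, stratified by make
SAMPLE SNAPSHOT [email protected] STRATIFY BY make
    AS _train AT RATIO .8,
    AS _test  AT RATIO .2;

-- 4. Publish before use as training data (e.g. as input to db4ai.train)
PUBLISH SNAPSHOT [email protected];
PUBLISH SNAPSHOT [email protected];

-- 5. Later, once no trained model references them, retire and purge
ARCHIVE SNAPSHOT [email protected];
PURGE SNAPSHOT [email protected];
```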
{ "category": "App Definition and Development", "file_name": "CONTRIBUTING.md", "project_name": "OceanBase", "subcategory": "Database" }
[ { "data": "The development guide is located under the folder. Welcome to [oceanbase]! We're thrilled that you'd like to contribute. Your help is essential for making it better. Before you start contributing, please make sure you have read and understood our . First, fork the to your own GitHub account. This will create a copy of the project under your account. ```bash git clone https://github.com/`your-github-name`/oceanbase ``` ```bash cd oceanbase ``` Create a new branch for your feature or bug fix: ```bash git checkout -b feature-branch ``` feature-branch is the name of the branch where you will be making your changes. You can name it whatever you want. Make your changes and commit them: ```bash git add . git commit -m \"Description of your changes\" ``` Push your changes to your fork: ```bash git push origin feature-branch ``` Finally, click on `Compare & pull request` to open a pull request against this repository. After you create the pull request, a member of the OceanBase team will review your changes and provide feedback. Once satisfied, they will merge your pull request. There are also some CI checks to pass before your pull request can be merged. Currently, there are two types of CI checks: Compile: This check compiles the code on CentOS and Ubuntu. Farm: This check runs the unit tests and some MySQL test cases. Note: If the Farm check fails and you think the failure is not related to your changes, you can ask the reviewer to re-run it, or the reviewer will re-run it. By default, a pull request is merged into the develop branch, which is the default branch of . We merge develop into the master branch periodically, so if you want to get the latest code, you can pull the master branch. If you want to develop a new feature, you should create a first. If your idea is accepted, you can create a new issue and start to develop your feature, and we will create a feature branch for you. After you finish your feature, you can create a pull request to merge your feature branch into the OceanBase feature branch. The flow is as follows: Create a Create a new issue Create a new feature branch on for your feature Make your changes and commit them Push your changes to your fork Create a pull request to merge your code into the feature branch After your pull request is merged, we will merge your feature branch into master" } ]
{ "category": "App Definition and Development", "file_name": "tpcds.md", "project_name": "Beam", "subcategory": "Streaming & Messaging" }
[ { "data": "type: languages title: \"TPC-DS benchmark suite\" aliases: /documentation/sdks/java/tpcds/ <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> From TPC-DS : \"TPC-DS is a decision support benchmark that models several generally applicable aspects of a decision support system, including queries and data maintenance. The benchmark provides a representative evaluation of performance as a general purpose decision support system.\" In general, TPC-DS is: Industry standard benchmark (OLAP/Data Warehouse); https://www.tpc.org/tpcds/ Implemented for many analytical processing systems - RDBMS, Apache Spark, Apache Flink, etc; It provides a wide range of different queries (SQL); It incorporates the tools to generate input data of different sizes. From TPC-DS : The TPC-DS schema models the sales and sales returns process for an organization that employs three primary sales channels: stores, catalogs, and the Internet. The schema includes seven fact tables: A pair of fact tables focused on the product sales and returns for each of the three channels A single fact table that models inventory for the catalog and internet sales channels. In addition, the schema includes 17 dimension tables that are associated with all sales channels. store_sales - Every row represents a single lineitem for a sale made through the store channel and recorded in the store_sales fact table store_returns - Every row represents a single lineitem for the return of an item sold through the store channel and recorded in the store_returns fact table. catalog_sales - Every row represents a single lineitem for a sale made through the catalog channel and recorded in the catalog_sales fact table. catalog_returns - Every row represents a single lineitem for the return of an item sold through the catalog channel and recorded in the catalog_returns table. web_sales - Every row represents a single lineitem for a sale made through the web channel and recorded in the web_sales fact table. web_returns - Every row represents a single lineitem for the return of an item sold through the web sales channel and recorded in the web_returns table. inventory - Every row represents the quantity of a particular item on-hand at a given warehouse during a specific week. store - Every row represents details of a store. call_center - Every row represents details of a call center. catalog_page - Every row represents details of a catalog page. web_site - Every row represents details of a web site. web_page - Every row represents details of a web page within a web site. warehouse - Every row represents a warehouse where items are stocked. customer - Every row represents a customer. customer_address - Every row represents a unique customer address (each customer can have more than one address). customer_demographics - The customer demographics table contains one row for each unique combination of customer demographic information. date_dim - Every row represents one calendar day. 
The surrogate key (ddatesk) for a given row is derived from the julian date being described by the row. household_demographics - Every row defines a household demographic profile. item - Every row represents a unique product formulation (e.g., size, color, manufacturer, etc.). income_band - Every row represents details of an income range. promotion - Every row represents details of a specific product promotion (e.g., advertising, sales," }, { "data": "reason - Every row represents a reason why an item was returned. ship_mode - Every row represents a shipping mode. time_dim - Every row represents one second. TPC-DS benchmark contains 99 distinct SQL-99 queries (including OLAP extensions). Each query answers a business question, which illustrates the business context in which the query could be used. All queries are templated with random input parameters and used to compare completeness and performance of SQL implementations. Input data source: Input files (CSV) are generated with CLI tool `dsdgen` Input datasets can be generated for different scale factor sizes: 1GB / 10GB / 100GB / 1000GB The tool constrains the minimum amount of data to be generated to 1GB Beam provides a of TPC-DS benchmark. There are several reasons to have TPC-DS benchmarks in Beam: Compare the performance of Beam SQL against native SQL implementations for different runners; Exercise Beam SQL on different runtime environments; Identify missing or incorrect Beam SQL features; Identify performance issues in Beam and Beam SQL. All TPC-DS queries in Beam are pre-generated and stored in the provided artifacts. For the moment, 28 out of 103 SQL queries (99 + 4) successfully pass by running with Beam SQL transform since not all SQL-99 operations are supported. Currently (as of Beam 2.40.0 release) supported queries are: 3, 7, 10, 22, 25, 26, 29, 35, 38, 40, 42, 43, 50, 52, 55, 69, 78, 79, 83, 84, 87, 93, 96, 97, 99 All TPC-DS table schemas are stored in the provided artifacts. CSV and Parquet input data has been pre-generated and staged in the Google Cloud Storage bucket `gs://beam-tpcds`. Staged in `gs://beam-tpcds/datasets/text/*` bucket, spread by different data scale factors. Staged in `gs://beam-tpcds/datasets/parquet/nonpartitioned/` and `gs://beam-tpcds/datasets/parquet/partitioned/`, spreaded by different data scale factors. For `partitioned` version, some large tables has been pre-partitioned by a date column into several files in the bucket. TPC-DS extension for Beam can only be run in Batch mode and supports these runners for the moment (not tested with other runners): Spark Runner Flink Runner Dataflow Runner The TPC-DS launcher accepts the `--runner` argument as usual for programs that use Beam PipelineOptions to manage their command line arguments. In addition to this, the necessary dependencies must be configured. When running via Gradle, the following two parameters control the execution: -P tpcds.args The command line to pass to the TPC-DS main program. -P tpcds.runner The Gradle project name of the runner, such as \":runners:spark:3\" or \":runners:flink:1.17. The project names can be found in the root `settings.gradle.kts`. Test data has to be generated before running a suite and stored to accessible file system. The query results will be written into output files. 
Scale factor size of input dataset (1GB / 10GB / 100GB / 1000GB): --dataSize=<1GB|10GB|100GB|1000GB> Path to input datasets directory: --dataDirectory=<path to dir> Path to results directory: --resultsDirectory=<path to dir> Format of input files: --sourceType=<CSV|PARQUET> Select queries to run (comma separated list of query numbers or `all` for all queries): --queries=<1,2,...N|all> Number of queries N to run in parallel: --tpcParallel=N Here are some examples demonstrating how to run TPC-DS benchmarks on different runners. Running suite on the SparkRunner (local) with Query3 against 1Gb dataset in Parquet format: ./gradlew :sdks:java:testing:tpcds:run \\ -Ptpcds.runner=\":runners:spark:3\" \\ -Ptpcds.args=\" --runner=SparkRunner --dataSize=1GB --sourceType=PARQUET --dataDirectory=gs://beam-tpcds/datasets/parquet/partitioned --resultsDirectory=/tmp/beam-tpcds/results/spark/ --tpcParallel=1 --queries=3\" Running suite on the FlinkRunner (local) with Query7 and Query10 in parallel against 10Gb dataset in CSV format: ./gradlew :sdks:java:testing:tpcds:run \\ -Ptpcds.runner=\":runners:flink:1.13\" \\ -Ptpcds.args=\" --runner=FlinkRunner --parallelism=2 --dataSize=10GB --sourceType=CSV --dataDirectory=gs://beam-tpcds/datasets/csv --resultsDirectory=/tmp/beam-tpcds/results/flink/ --tpcParallel=2 --queries=7,10\" Running suite on the DataflowRunner with all queries against 100GB dataset in PARQUET format: ./gradlew :sdks:java:testing:tpcds:run \\ -Ptpcds.runner=\":runners:google-cloud-dataflow-java\" \\ -Ptpcds.args=\" --runner=DataflowRunner --region=<region_name> --project=<project_name> --numWorkers=4 --maxNumWorkers=4 --autoscalingAlgorithm=NONE --dataSize=100GB --sourceType=PARQUET --dataDirectory=gs://beam-tpcds/datasets/parquet/partitioned --resultsDirectory=/tmp/beam-tpcds/results/dataflow/" } ]
{ "category": "App Definition and Development", "file_name": "fuzzJSON.md", "project_name": "ClickHouse", "subcategory": "Database" }
[ { "data": "slug: /en/sql-reference/table-functions/fuzzJSON sidebar_position: 75 sidebar_label: fuzzJSON Perturbs a JSON string with random variations. ``` sql fuzzJSON({ namedcollection [, option=value [,..]] | jsonstr[, random_seed] }) ``` Arguments `named_collection`- A . `option=value` - Named collection optional parameters and their values. `json_str` (String) - The source string representing structured data in JSON format. `random_seed` (UInt64) - Manual random seed for producing stable results. `reuse_output` (boolean) - Reuse the output from a fuzzing process as input for the next fuzzer. `malform_output` (boolean) - Generate a string that cannot be parsed as a JSON object. `maxoutputlength` (UInt64) - Maximum allowable length of the generated or perturbed JSON string. `probability` (Float64) - The probability to fuzz a JSON field (a key-value pair). Must be within [0, 1] range. `maxnestinglevel` (UInt64) - The maximum allowed depth of nested structures within the JSON data. `maxarraysize` (UInt64) - The maximum allowed size of a JSON array. `maxobjectsize` (UInt64) - The maximum allowed number of fields on a single level of a JSON object. `maxstringvalue_length` (UInt64) - The maximum length of a String value. `minkeylength` (UInt64) - The minimum key length. Should be at least 1. `maxkeylength` (UInt64) - The maximum key length. Should be greater or equal than the `minkeylength`, if specified. Returned Value A table object with a a single column containing perturbed JSON strings. ``` sql CREATE NAMED COLLECTION jsonfuzzer AS jsonstr='{}'; SELECT * FROM fuzzJSON(json_fuzzer) LIMIT 3; ``` ``` text {\"52Xz2Zd4vKNcuP2\":true} {\"UPbOhOQAdPKIg91\":3405264103600403024} {\"X0QUWu8yT\":[]} ``` ``` sql SELECT * FROM fuzzJSON(jsonfuzzer, jsonstr='{\"name\" : \"value\"}', random_seed=1234) LIMIT 3; ``` ``` text {\"key\":\"value\", \"mxPG0h1R5\":\"L-YQLv@9hcZbOIGrAn10%GA\"} {\"BRE3\":true} {\"key\":\"value\", \"SWzJdEJZ04nrpSfy\":[{\"3Q23y\":[]}]} ``` ``` sql SELECT * FROM fuzzJSON(jsonfuzzer, jsonstr='{\"students\" : [\"Alice\", \"Bob\"]}', reuse_output=true) LIMIT 3; ``` ``` text {\"students\":[\"Alice\", \"Bob\"], \"nwALnRMc4pyKD9Krv\":[]} {\"students\":[\"1rNY5ZNs0wU&82t_P\", \"Bob\"], \"wLNRGzwDiMKdw\":[{}]} {\"xeEk\":[\"1rNY5ZNs0wU&82t_P\", \"Bob\"], \"wLNRGzwDiMKdw\":[{}, {}]} ``` ``` sql SELECT * FROM fuzzJSON(jsonfuzzer, jsonstr='{\"students\" : [\"Alice\", \"Bob\"]}', maxoutputlength=512) LIMIT 3; ``` ``` text {\"students\":[\"Alice\", \"Bob\"], \"BREhhXj5\":true} {\"NyEsSWzJdeJZ04s\":[\"Alice\", 5737924650575683711, 5346334167565345826], \"BjVO2X9L\":true} {\"NyEsSWzJdeJZ04s\":[\"Alice\", 5737924650575683711, 5346334167565345826], \"BjVO2X9L\":true, \"k1SXzbSIz\":[{}]} ``` ``` sql SELECT * FROM fuzzJSON('{\"id\":1}', 1234) LIMIT 3; ``` ``` text {\"id\":1, \"mxPG0h1R5\":\"L-YQLv@9hcZbOIGrAn10%GA\"} {\"BRjE\":16137826149911306846} {\"XjKE\":15076727133550123563} ``` ``` sql SELECT * FROM fuzzJSON(jsonnc, jsonstr='{\"name\" : \"FuzzJSON\"}', randomseed=1337, malformoutput=true) LIMIT 3; ``` ``` text U\"name\":\"FuzzJSON*\"SpByjZKtr2VAyHCO\"falseh {\"name\"keFuzzJSON, \"g6vVO7TCIk\":jTt^ {\"DBhz\":YFuzzJSON5} ```" } ]
{ "category": "App Definition and Development", "file_name": "infinite-and-while-loops.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: Infinite and while loops [YSQL] headerTitle: The \"infinite loop\" and the \"while loop\" linkTitle: Infinite and while loops description: Describes the syntax and semantics of the \"infinite loop\" and the \"while loop\" [YSQL] menu: stable: identifier: infinite-and-while-loops parent: loop-exit-continue weight: 10 type: docs showRightNav: true This page describes the two kinds of unbounded loop. The infinite loop is the most basic and most flexible form of the . It looks like this: ```plpgsql <<label_17>>loop <statement list 1> exit <label> when <boolean expression>; <statement list 2> end loop label_17; ``` or this: ```plpgsql <<label_17>>loop <statement list 1> continue label_17 when <boolean expression 1>; <statement list 2> exit label_17 when <boolean expression 2>; <statement list 3> end loop label_17; ``` An infinite loop must have either an exit statement (or, though these are rare practices, a return statement or a raise exception statement). Otherwise, it will simply iterate forever. <a name=\"infinite-loop-over-cursor-results\"></a> Try this example: ```plpgsql \\c :db :u drop schema if exists s cascade; create schema s; create table s.t(k serial primary key, v int not null); insert into s.t(v) select generate_series(0, 99, 5); create function s.finfinite(klo in int, k_hi in int) returns table(k int, v int) set searchpath = pgcatalog, pg_temp language plpgsql as $body$ declare cur refcursor not null := 'cur'; begin open cur no scroll for ( select t.k, t.v from s.t where t.k between klo and khi order by t.k); -- Infinite loop -- loop fetch next from cur into k, v; exit when not found; return next; end loop; close cur; end; $body$; select k, v from s.f_infinite(6, 11); ``` The code relies on these basic PL/pgSQL statements: See also the section . This is the result: ```outout k | v -+- 6 | 25 7 | 30 8 | 35 9 | 40 10 | 45 11 | 50 ``` Notice the use of the special built-in variable found. (This is described in the section .) See also the section at the end of the section. Because of the current restrictions that it describes, and because of the fact that fetch all is anyway not supported in PL/pgSQL in vanilla PostgreSQL, the only viable cursor operation in PL/pgSQL besides open and close is fetch next... into. Given this, the while loop approach for iterating over the results of a query shown here adds no value over what the brings. The other form of the is the while loop. It looks like this: ```plpgsql <<label_42>>while <boolean expression> loop <statement list> end loop label_42; ``` The boolean expression is evaluated before starting an" }, { "data": "If it evaluates to false, then no iteration takes placejust if the loop was written as an infinite loop and exit when not \\<boolean expression\\> were written as the very first statement inside the loop. Otherwise, the code inside the loop had better change the outcome of the boolean expression so that it eventually becomes false. This form is sometimes referred to as the pre-tested loop because it checks the condition before executing each next iteration. In this example, label42 isn't mentioned in an exit statement or a continue statement. But the name used at the end of the loop statement must anyway match the name used at its start. (If they don't match, then you get the 42601_ syntax error.) 
{{< tip title=\"Label all loops.\" >}} Many programmers recommend labelling all loops and argue that doing so improves the code's readabilityprecisely because the syntax check guarantees that you can be certain where a loop statement ends, even when it extends over many lines and even if the author has made a mistake with the indentation. {{< /tip >}} As an exercise, re-write the code example from the section, above, to use a while loop, thus: ```plpgsql create function s.fwhile(klo in int, k_hi in int) returns table(k int, v int) set searchpath = pgcatalog, pg_temp language plpgsql as $body$ declare cur refcursor not null := 'cur'; begin open cur no scroll for ( select t.k, t.v from s.t where t.k between klo and khi order by t.k); -- While loop -- fetch next from cur into k, v; while found loop return next; fetch next from cur into k, v; end loop; close cur; end; $body$; select k, v from s.f_while(6, 11); ``` It produces exactly the same result as does s.finfinite(). Notice that the infinite loop and the while loop each uses the same number, five, of code lines. Stylists debate which version is nicerand the choice is determined by taste. Very often, the while loop requires some code before the loop to establish the starting condition. And in this case, fetch next from cur into k, v is written twice: both before the loop and inside it. In contrast, with the infinite loop, it's written just once. The code inside the while loop (\"print the result from the previous iteration and then get the next result\") feels back-to-front in comparison with the infinite loop's \"get the next result and print it\". Sometimes, depending on the use case, the while loop_ feels like the better choice. Notice that this is legal: ```plpgsql <<strange>>while true loop <statement list> continue <label> when <boolean expression>; <statement list> exit <label> when <boolean expression>; <statement list> end loop strange; ``` However, the effect of writing \"while true loop\" is indistinguishable from the effect of writing just \"loop\". Using the verbose form is therefore pointless; and it's likely that doing so will simply confuse the reader." } ]
{ "category": "App Definition and Development", "file_name": "01-fix-hostname-usage-related-issues.md", "project_name": "Hazelcast IMDG", "subcategory": "Database" }
[ { "data": "- - - - ||| ||| |Related Jira| https://hazelcast.atlassian.net/browse/HZ-528 | |Related Github issues | https://github.com/hazelcast/hazelcast/issues/15722, https://github.com/hazelcast/hazelcast/issues/16651, https://github.com/hazelcast/hazelcast-enterprise/issues/3627, https://github.com/hazelcast/hazelcast-enterprise/issues/4342 | |Implementation PR|https://github.com/hazelcast/hazelcast/pull/20014| |Document Status / Completeness | DESIGN REVIEW| |Requirement owner | Requirement owner | |Developer(s) | Ufuk Ylmaz | |Quality Engineer | Viacheslav Sytnyk | |Support Engineer | Support Engineer | |Technical Reviewers | Vassilis Bekiaris, Josef Cacek | |Simulator or Soak Test PR(s) | Link to Simulator or Soak test PR | Several problems can arise when using hostnames in Hazelcast's network and WAN configurations. With this effort, we aim to provide a seamless usage of hostnames in the Hazelcast configurations by fixing issues related to hostname usage. The source of the problems is the fact that multiple network addresses can point to the same member, and we ignore this fact while dealing with network addresses in our codebase. A significant part of our codebase is written assuming a member will only have one network address, but this assumption fails, so we encounter errors/wrong behavior in the different processes of the Hazelcast. When using hostnames in the network configuration of the members, we directly encounter the situation of having multiple addresses for the members as a consequence of using hostnames. Because when we use hostnames, the members know both the IP address of the member by resolving the hostname and the raw hostname address for this member. Then, issues arise because members don't properly manage these multiple addresses. In this effort, we will try to manage properly these multiple network addresses of the members and this will resolve the hostname usage related issues. We want to provide a seamless hostname usage experience to our users. Until now, many of our users and customers have had problems with this regard as shown in the known issues. Although the hostname usages can be replaced with IP based addressing in simple networking setups, it is a necessary functionality for maintaining complex network setups where IPs are dynamically assigned to the machines that Hazelcast members will work on. This is specifically important for cloud-based deployments that could be the subject of more complex networking setups compared to on premise deployments that may include members whose IPs are assigned dynamically and these addresses identified by DNS based hostname resolution mechanism. Support hostname-based addressing in member-to-member communication Support hostname-based addressing in WAN configuration Support the hostname-based addressing in relevant discovery plugins (e.g. discovery in AWS, Azure, GCP, Kubernetes) In summary, we want Hazelcast to work when a hostname address used instead of an IP address in any Hazelcast cluster configuration that works when an IP address is used in the configuration element. In short, we want any cluster setup that works properly when IP addresses are used, to operate when hostnames are used. 
|Term|Definition| ||| |Hostname|A hostname is an easy to use name assigned to a host machine on a network which is mapped into an IP address/IP addresses (We will not consider hostnames mapped to multiple IP addresses, as there is no such use case for it in Hazelcast).| |Hostname resolution|The process of converting the hostname to its mapped IP Address so that socket bind and connect can be performed on this IP address| User configures members' join and bind configuration with using hostnames and starts members. These members must be able to join and form cluster. Any other Hazelcast functionalities depending on networking must continue to work" }, { "data": "(Since Hazelcast is a distributed system, networking changes concern many parts of it) User configures members' wan target configuration with using hostnames and starts members. The WAN members must be able to connect each other Any other Hazelcast functionalities depending on networking must continue to work correctly. (Since Hazelcast is a distributed system, networking changes concern many parts of it) There will be no functional changes. If users have previously used member addresses in their custom codes for member identification purposes, they will still encounter address management problems when their configurations include hostnames. We need to provide them with a way to manage member addresses. So, for solving this kind of problem, we should expose our managed `UUID-Set<Address>` and its reverse mappings to our users. The jet connectors as a user of Hazelcast member APIs uses some `Address` dependent logic inside. e.g. In Elasticsearch connector, we use member addresses to check if in case the Hazelcast cluster runs on the same machines as Elastic it uses the addresses to plan the job in a way the reading from the local shard that belongs to same machine and doesn't go over the network. This is also same for the hadoop connectors. This logic may go wrong when Hazelcast members have multiple addresses. Up until now, our users mainly use the picked public address of the members for member identification purposes in the member logs. We should not make any big changes in the member logs so as not to confuse our users. We should store the address configured by the user as the primary address and continue to print it in the member logs. Even if we do the member identification with a UUID-based logic, we should keep the member address in related places to show it in the logs. What we propose as a solution to these issues is that we will try to manage this multiple network addresses at the Hazelcast networking level, and we will avoid dealing with multiple addresses in high-level places by only exposing single representation of these addresses to the higher levels. We'll call this single exposed address the primary address of the member. We select this primary address as follows: For the member protocol connections and for other connections using unified connection manager, we select the public address of the member protocol which corresponds to `com.hazelcast.instance.ProtocolType#MEMBER` as the primary address. For the other protocol connections (this is possible when advanced networking is enabled.), we select the public address of the corresponding protocol as the primary address. e.g. we select the public address that corresponds to `com.hazelcast.instance.ProtocolType#WAN` for the incoming connections to the connection manager which manages only the wan connections. 
After exposing only the primary address to those high-level places and after making sure its singularity, we only need to manage multiple addresses in the network/connection manager layer. In the connection manager, we will manage with multiple member addresses and their connections using the members' unique UUIDs. Although there is only one selected public address of the member corresponds to each EndpointQualifier of this member, it is possible to reach these endpoints using addresses other than this defined public addresses. It is impossible to understand that these different addresses point to the same member just only looking at the content of these address objects. Even if we only use the IP addresses, multiple different IP addresses can refer to the same host machine. Hazelcast server socket can bind different network interfaces of the host machine which can have multiple private and public addresses" }, { "data": "or consider that adding network proxies to the front of the member, the addresses of these proxies behave as the addresses of the member on the viewpoint of connection initiator. These public/private addresses or proxy addresses are completely different in their contents. Also, multiple hostnames can be defined in DNS to refer to the same machine (we may understand these hostnames point to the same member if they resolve to the same IP address, but how to decide when to do this resolution, and they don't always have to resolve to the same IPs .), and so we don't have a chance to understand these addresses referring to the same member just only looking at their contents. To understand this, we need to connect the remote member and process the `MemberHandshake` of this remote member. In the member handshake, a member receive the member UUID of the remote member, and after that, we can understand the connected address belongs to which member. We are registering this connected address as the address of the corresponding member UUID during this `MemberHandshake` processing. In this `MemberHandshake`, the remote member also shares its public addresses, and we also register the public addresses of this member under the member's unique UUID together with the connected address as aliases of each other. To manage the address aliases, we create an Address registry to store `UUID -> Set<Address>` and `Address'es -> UUID` mappings. Since we can access the UUIDs of the members after getting the MemberHandshake response, the entries of these maps will be created on connection registration which is performed during the handshake processing. We decide to remove the address registry entries when all connections to the members are closed. While determining the address removal timing, we especially considered the cases such as hot restart on a different machine with the same UUID, member restarts, and split-brain merge; these actions may cause the address registry entries to become stale. If we tried to extend the lifetime of these map entries a little longer, this would cause problems with Hot Restart with the same UUID that's why we chose to stick the lifetime of registrations to the lifetime of connections. While trying to create this design, we considered removing all the `Address` usages from the codebase. But, this change was quite intrusive since we use the addresses in lots of different places, and in some places, it was not possible to remove them without breaking backward compatibility. 
Also, we had a strict dependency on Address usage in socket bind and socket connect operations by the nature; and we cannot remove the addresses from these places. Since our retry mechanisms in the operation service also try to reconnect the socket channel if it's not available, an operation can trigger reconnect action if we don't have an active connection to the address. For most of the operations, we can have already established connection but for the join, wan, and some cluster (heartbeat) operations, we depend on `connect` semantics. But even the fact that few of the operations mentioned above are dependent on connect semantics prevents us from easily removing the Address usage from our operation invocation services. Also. in some cluster service methods, lexicographic address comparisons are performed between member addresses to determine some order among the members (when deciding on a member to perform some task such as claiming mastership). We don't have a chance to remove addresses from these places without breaking backward" }, { "data": "Code examples that we cannot remove the addresses: ```java // When deciding on a member that will claim mastership after the first join private boolean isThisNodeMasterCandidate(Collection<Address> addresses) { int thisHashCode = node.getThisAddress().hashCode(); for (Address address : addresses) { if (isBlacklisted(address)) { continue; } if (node.getServer().getConnectionManager(MEMBER).get(address) != null) { if (thisHashCode > address.hashCode()) { return false; } } } return true; } ``` ```java // When deciding on a member that will merge to target member after the split brain private boolean shouldMergeTo(Address thisAddress, Address targetAddress) { String thisAddressStr = \"[\" + thisAddress.getHost() + \"]:\" + thisAddress.getPort(); String targetAddressStr = \"[\" + targetAddress.getHost() + \"]:\" + targetAddress.getPort(); if (thisAddressStr.equals(targetAddressStr)) { throw new IllegalArgumentException(\"Addresses must be different! This: \" thisAddress + \", Target: \" + targetAddress); } // Since strings are guaranteed to be different, result will always be non-zero. int result = thisAddressStr.compareTo(targetAddressStr); return result > 0; } ``` In the implementation, we defined a separate abstraction named `LocalAddressRegistry` to manage the instance uuid's and corresponding addresses. This registry keeps the registration count for the registrations made on same uuid and using this registration count mechanism, it removes this registry entry only when all connections to an instance are closed. For the implementation details see: The new connection registration where we register uuid-address mapping entries: https://github.com/hazelcast/hazelcast/blob/5.1-BETA-1/hazelcast/src/main/java/com/hazelcast/internal/server/tcp/TcpServerConnectionManager.java#L197 Connection close event callback where address registry entry deregistrations takes place for the member connections: https://github.com/hazelcast/hazelcast/blob/4a73cc5f9b4ebef09c5a8e0067da464b5ef629be/hazelcast/src/main/java/com/hazelcast/internal/server/tcp/TcpServerConnectionManagerBase.java#L284 For the client connections: https://github.com/hazelcast/hazelcast/blob/4a73cc5f9b4ebef09c5a8e0067da464b5ef629be/hazelcast/src/main/java/com/hazelcast/client/impl/ClientEngineImpl.java#L443 1) Which address should take precedence while getting from `UUID-Set<Address>` map - IP or hostname? It seems like depending on the context, the precedence can be changed, e.g. 
we can prefer user-configured address in the member logs and TLS favorites using hostnames. 2) We had a strict dependency to Address usage in socket bind and connect operations. No matter how much we try to isolate the use of the addresses from upper places outside the networking level, in some operations we are dependent on bind/connect semantics. These are specifically join and WAN operations, and we couldn't isolate Addresses from them. 3) When we try to enforce UUID usage for member identification purposes, it can cause backwards compatibility issues. Removing the `Address` usages from our code base requires us to consider each changed place in terms of backwards compatibility. Since we replace addresses with uuid, we may need to convert serialized objects to send uuid instead of address, so we need to add RU compatibility paths for the serialization of those objects. These places will be resolved through the implementation. We need to document the changes in package by package manner. Questions to be answered: What is going to happen to use of `Address` in `com.hazelcast.xxx` package? 4) To note, we can use a new class to represent our aliases set, `Set<Address>` in our mappings, but I couldn't provide a good abstraction yet. Questions about the change: How does this work in an on-prem deployment? How about on AWS and Kubernetes? How does the change behave in mixed-version deployments? During a version upgrade? Which migrations are needed? In this effort, we avoid the changes that would break backward compatibility, so we don't expect any backward compatibility issue with respect to this change. What are the possible interactions with other features or sub-systems inside Hazelcast? How does the behavior of other code change implicitly as a result of the changes outlined in the design document? (Provide examples if relevant.) Is there other ongoing or recent work that is related? (Cross-reference the relevant design" }, { "data": "There is a previously reverted PR: https://github.com/hazelcast/hazelcast/pull/18591 (the related TDD is available in this PR changes), https://github.com/hazelcast/hazelcast/pull/19684 What are the edge cases? What are example uses or inputs that we think are uncommon but are still possible and thus need to be handled? How are these edge cases handled? Provide examples. Mention alternatives, risks and assumptions. Why is this design the best in the space of possible designs? What other designs have been considered and what is the rationale for not choosing them? Add links to any similar functionalities by other vendors, similarities and differentiators Questions about performance: Does the change impact performance? How? How is resource usage affected for large loads? For example, what do we expect to happen when there are 100000 items/entries? 100000 data structures? 1000000 concurrent operations? Also investigate the consequences of the proposed change on performance. Pay especially attention to the risk that introducing a possible performance improvement in one area can slow down another area in an unexpected way. Examine all the current \"consumers\" of the code path you are proposing to change and consider whether the performance of any of them may be negatively impacted by the proposed change. List all these consequences as possible drawbacks. Since we did not perform a lookup from the concurrent `UUID-Set<Address>` and its reverse map in the hot paths, we didn't expect any performance degradation with this change. 
We perform a simple benchmark on this change and don't see any performance difference with the version before the change. See the benchmark for it: https://hazelcast.atlassian.net/wiki/spaces/PERF/pages/3949068293/Performance+Tests+for+5.1+Hostname+Fix Stability questions: Can this new functionality affect the stability of a node or the entire cluster? How does the behavior of a node or a cluster degrade if there is an error in the implementation? Can the new functionality be disabled? Can a user opt out? How? Can the new functionality affect clusters which are not explicitly using it? What testing and safe guards are being put in place to protect against unexpected problems? Unit and integration tests should: Verify that hostnames in TCP-IP member configuration work; Verify that hostnames in TCP-IP client configuration work; Verify that hostnames in WAN target configuration work; Verify that a hostname available only after the member starts works when used for establishing connections from other members; Verify that it's possible to use multiple hostnames to reference one member; Verify UUID-Address management with Persistence enabled remains working (consider restarting with same UUID); Verify TLS remains working (when host validation is enabled); Verify the performance doesn't significantly drop in different environments (On premise, Kubernetes, GKE, AWS deployments etc.) Verify cluster is correctly being formed, Persistence and WAN is working when `setPublicAdress` is applied Verify that the client from external network can connect to the cluster It would definitely be better to test these scenarios also with hazelcast test containers as well. Stress tests must validate the Hazelcast work when: A member in the cluster gracefully shutdown A member forcefully shutdown Split-brain happens and resolves Cluster-wide crash happens on some cluster B when WAN replication is set between cluster A and B The above scenarios must also be tested when Hazelcast persistence is enabled. This fix must be tested on different cloud environments since they are capable of flexible networking setups (they can have networking structures that we do not easily encounter in other premise setups.). e.g. expose externally feature of operator. We should definitely verify that the hostnames work in these cloud networking setups. If possible, automated tests should be implemented for these." } ]
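The `LocalAddressRegistry` behaviour described above — UUID-to-addresses mappings with a registration count, so that an entry is removed only when the last connection to that member closes — can be sketched roughly as follows. This is not the actual Hazelcast class (see the links above for the real implementation); the class, method and field names are illustrative assumptions, and plain strings stand in for `com.hazelcast.cluster.Address` to keep the sketch self-contained.

```java
import java.util.Map;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of a UUID <-> address-aliases registry with registration counting.
public final class AddressRegistrySketch {

    private static final class Entry {
        final Set<String> addresses = ConcurrentHashMap.newKeySet();
        int registrationCount;   // guarded by the registry's monitor
    }

    private final Map<UUID, Entry> uuidToAddresses = new ConcurrentHashMap<>();
    private final Map<String, UUID> addressToUuid = new ConcurrentHashMap<>();

    // Called while processing a MemberHandshake: register the connected address
    // together with the public addresses advertised by the remote member.
    public synchronized void register(UUID memberUuid, Set<String> addresses) {
        Entry entry = uuidToAddresses.computeIfAbsent(memberUuid, k -> new Entry());
        entry.registrationCount++;
        for (String address : addresses) {
            entry.addresses.add(address);
            addressToUuid.put(address, memberUuid);
        }
    }

    // Called from the connection-close callback: forget the member's addresses only
    // when every connection registered against its UUID has gone away, so that
    // hot restart with the same UUID or a split-brain merge does not see stale entries.
    public synchronized void deregister(UUID memberUuid) {
        Entry entry = uuidToAddresses.get(memberUuid);
        if (entry == null) {
            return;
        }
        if (--entry.registrationCount <= 0) {
            uuidToAddresses.remove(memberUuid);
            entry.addresses.forEach(addressToUuid::remove);
        }
    }

    public UUID uuidOf(String address) {
        return addressToUuid.get(address);
    }

    public Set<String> addressesOf(UUID memberUuid) {
        Entry entry = uuidToAddresses.get(memberUuid);
        return entry == null ? Set.of() : Set.copyOf(entry.addresses);
    }
}
```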
{ "category": "App Definition and Development", "file_name": "Eventlogging.md", "project_name": "Apache Storm", "subcategory": "Streaming & Messaging" }
[ { "data": "title: Topology event inspector layout: documentation documentation: true Topology event inspector provides the ability to view the tuples as it flows through different stages in a storm topology. This could be useful for inspecting the tuples emitted at a spout or a bolt in the topology pipeline while the topology is running, without stopping or redeploying the topology. The normal flow of tuples from the spouts to the bolts is not affected by turning on event logging. Note: Event logging needs to be enabled first by setting the storm config \"topology.eventlogger.executors\" to a non zero value. Please see the section for more details. Events can be logged by clicking the \"Debug\" button under the topology actions in the topology view. This logs the tuples from all the spouts and bolts in a topology at the specified sampling percentage. <div align=\"center\"> <img title=\"Enable Eventlogging\" src=\"images/enable-event-logging-topology.png\" style=\"max-width: 80rem\"/> <p>Figure 1: Enable event logging at topology level.</p> </div> You could also enable event logging at a specific spout or bolt level by going to the corresponding component page and clicking \"Debug\" under component actions. <div align=\"center\"> <img title=\"Enable Eventlogging at component level\" src=\"images/enable-event-logging-spout.png\" style=\"max-width: 80rem\"/> <p>Figure 2: Enable event logging at component level.</p> </div> The Storm \"logviewer\" should be running for viewing the logged tuples. If not already running log viewer can be started by running the \"bin/storm logviewer\" command from the storm installation directory. For viewing the tuples, go to the specific spout or bolt component page from storm UI and click on the \"events\" link under the component summary (as highlighted in Figure 2 above). This would open up a view like below where you can navigate between different pages and view the logged tuples. <div align=\"center\"> <img title=\"Viewing logged tuples\" src=\"images/event-logs-view.png\" style=\"max-width: 80rem\"/> <p>Figure 3: Viewing the logged events.</p> </div> Each line in the event log contains an entry corresponding to a tuple emitted from a specific spout/bolt in a comma separated format. `Timestamp, Component name, Component task-id, MessageId (in case of anchoring), List of emitted values` Event logging can be disabled at a specific component or at the topology level by clicking the \"Stop Debug\" under the topology or component actions in the Storm UI. <div align=\"center\"> <img title=\"Disable Eventlogging at topology level\" src=\"images/disable-event-logging-topology.png\" style=\"max-width: 80rem\"/> <p>Figure 4: Disable event logging at topology" }, { "data": "</div> Eventlogging works by sending the events (tuples) from each component to an internal eventlogger bolt. By default Storm does not start any event logger tasks, but this can be easily changed by setting the below parameter while running your topology (by setting it in storm.yaml or passing options via command line). | Parameter | Meaning | | -|--| | \"topology.eventlogger.executors\": 0 | No event logger tasks are created (default). | | \"topology.eventlogger.executors\": 1 | One event logger task for the topology. | | \"topology.eventlogger.executors\": nil | One event logger task per worker. | Storm provides an `IEventLogger` interface which is used by the event logger bolt to log the events. 
```java / EventLogger interface for logging the event info to a sink like log file or db for inspecting the events via UI for debugging. */ public interface IEventLogger { / Invoked during eventlogger bolt prepare. */ void prepare(Map stormConf, Map<String, Object> arguments, TopologyContext context); / Invoked when the {@link EventLoggerBolt} receives a tuple from the spouts or bolts that has event logging enabled. * @param e the event */ void log(EventInfo e); / Invoked when the event logger bolt is cleaned up */ void close(); } ``` The default implementation for this is a FileBasedEventLogger which logs the events to an events.log file ( `logs/workers-artifacts/<topology-id>/<worker-port>/events.log`). Alternate implementations of the `IEventLogger` interface can be added to extend the event logging functionality (say build a search index or log the events in a database etc) If you just want to use FileBasedEventLogger but with changing the log format, you can simply implement your own by extending FileBasedEventLogger and override `buildLogMessage(EventInfo)` to provide log line explicitly. To register event logger to your topology, add to your topology's configuration like: ```java conf.registerEventLogger(org.apache.storm.metric.FileBasedEventLogger.class); ``` You can refer and overloaded methods from javadoc. Otherwise edit the storm.yaml config file: ```yaml topology.event.logger.register: class: \"org.apache.storm.metric.FileBasedEventLogger\" class: \"org.mycompany.MyEventLogger\" arguments: endpoint: \"event-logger.mycompany.org\" ``` When you implement your own event logger, `arguments` is passed to Map<String, Object> when is called. Please keep in mind that EventLoggerBolt is just a kind of Bolt, so whole throughput of the topology will go down when registered event loggers cannot keep up handling incoming events, so you may want to take care of the Bolt like normal Bolt. One of idea to avoid this is making your implementation of IEventLogger as `non-blocking` fashion." } ]
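As a concrete illustration of the "change only the log format" path mentioned above, the sketch below extends `FileBasedEventLogger` and overrides `buildLogMessage(EventInfo)`. The import locations, the nesting of `EventInfo` inside `IEventLogger`, and the exact method visibility can differ between Storm versions, so treat this as a sketch rather than drop-in code.

```java
package org.mycompany;

import org.apache.storm.metric.FileBasedEventLogger;
import org.apache.storm.metric.IEventLogger;

// Sketch: keep FileBasedEventLogger's file handling, change only the line format.
public class MyEventLogger extends FileBasedEventLogger {

    @Override
    public String buildLogMessage(IEventLogger.EventInfo event) {
        // EventInfo's string form already carries the timestamp, component name,
        // task id, message id and emitted values; this simply tags each line.
        return "EVENT " + event;
    }
}
```

The class would then be registered exactly as shown above, either with `conf.registerEventLogger(org.mycompany.MyEventLogger.class)` or via `topology.event.logger.register` in storm.yaml.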
{ "category": "App Definition and Development", "file_name": "HDFSDiskbalancer.md", "project_name": "Apache Hadoop", "subcategory": "Database" }
[ { "data": "<! Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. See accompanying LICENSE file. --> HDFS Disk Balancer =================== <!-- MACRO{toc|fromDepth=0|toDepth=2} --> Overview -- Diskbalancer is a command line tool that distributes data evenly on all disks of a datanode. This tool is different from which takes care of cluster-wide data balancing. Data can have uneven spread between disks on a node due to several reasons. This can happen due to large amount of writes and deletes or due to a disk replacement. This tool operates against a given datanode and moves blocks from one disk to another. Architecture Disk Balancer operates by creating a plan and goes on to execute that plan on the datanode. A plan is a set of statements that describe how much data should move between two disks. A plan is composed of multiple move steps. A move step has source disk, destination disk and number of bytes to move. A plan can be executed against an operational data node. Disk balancer should not interfere with other processes since it throttles how much data is copied every second. Please note that disk balancer is enabled by default on a cluster. Commands -- The following sections discusses what commands are supported by disk balancer and how to use them. The plan command can be run against a given datanode by running `hdfs diskbalancer -plan node1.mycluster.com` The command accepts . The plan command also has a set of parameters that allows user to control the output and execution of the plan. | COMMAND\\_OPTION | Description | |:- |:- | | `-out`| Allows user to control the output location of the plan file.| | `-bandwidth`| Since datanode is operational and might be running other jobs, diskbalancer limits the amount of data moved per second. This parameter allows user to set the maximum bandwidth to be used. This is not required to be set since diskBalancer will use the default bandwidth if this is not specified.| | `-thresholdPercentage`| Since we operate against a snap-shot of datanode, the move operations have a tolerance percentage to declare success. If user specifies 10% and move operation is say 20GB in size, if we can move 18GB that operation is considered successful. This is to accommodate the changes in datanode in real time. This parameter is not needed and a default is used if not specified.| | `-maxerror` | Max error allows users to specify how many block copy operations must fail before we abort a move step. Once again, this is not a needed parameter and a system-default is used if not specified.| | `-v`| Verbose mode, specifying this parameter forces the plan command to print out a summary of the plan on stdout.| |`-fs`| - Specifies the namenode to use. if not specified default from config is used. | The plan command writes two output files. They are `<nodename>.before.json` which captures the state of the cluster before the diskbalancer is run, and" }, { "data": "Execute command takes a plan command executes it against the datanode that plan was generated against. 
`hdfs diskbalancer -execute /system/diskbalancer/nodename.plan.json` This executes the plan by reading datanodes address from the plan file. When DiskBalancer executes the plan, it is the beginning of an asynchronous process that can take a long time. So, query command can help to get the current status of execute command. | COMMAND\\_OPTION | Description | |:- |:- | | `-skipDateCheck` | Skip date check and force execute the plan.| Query command gets the current status of the diskbalancer from specified node(s). `hdfs diskbalancer -query nodename1.mycluster.com,nodename2.mycluster.com,...` | COMMAND\\_OPTION | Description | |:- |:- | |`-v` | Verbose mode, Prints out status of individual moves| Cancel command cancels a running plan. Restarting datanode has the same effect as cancel command since plan information on the datanode is transient. `hdfs diskbalancer -cancel /system/diskbalancer/nodename.plan.json` or `hdfs diskbalancer -cancel planID -node nodename` Plan ID can be read from datanode using query command. Report command provides detailed report of specified node(s) or top nodes that will benefit from running disk balancer. The node(s) can be specified by a host file or comma-separated list of nodes. `hdfs diskbalancer -fs http://namenode.uri -report -node <file://> | [<DataNodeID|IP|Hostname>,...]` or `hdfs diskbalancer -fs http://namenode.uri -report -top topnum` Settings -- There is a set of diskbalancer settings that can be controlled via hdfs-site.xml | Setting | Description | |:- |:- | |`dfs.disk.balancer.enabled`| This parameter controls if diskbalancer is enabled for a cluster. if this is not enabled, any execute command will be rejected by the datanode.The default value is true.| |`dfs.disk.balancer.max.disk.throughputInMBperSec` | This controls the maximum disk bandwidth consumed by diskbalancer while copying data. If a value like 10MB is specified then diskbalancer on the average will only copy 10MB/S. The default value is 10MB/S.| |`dfs.disk.balancer.max.disk.errors`| sets the value of maximum number of errors we can ignore for a specific move between two disks before it is abandoned. For example, if a plan has 3 pair of disks to copy between , and the first disk set encounters more than 5 errors, then we abandon the first copy and start the second copy in the plan. The default value of max errors is set to 5.| |`dfs.disk.balancer.block.tolerance.percent`| The tolerance percent specifies when we have reached a good enough value for any copy step. For example, if you specify 10% then getting close to 10% of the target value is good enough.| |`dfs.disk.balancer.plan.threshold.percent`| The percentage threshold value for volume Data Density in a plan. If the absolute value of volume Data Density which is out of threshold value in a node, it means that the volumes corresponding to the disks should do the balancing in the plan. The default value is 10.| |`dfs.disk.balancer.plan.valid.interval`| Maximum amount of time disk balancer plan is valid. Supports the following suffixes (case insensitive): ms(millis), s(sec), m(min), h(hour), d(day) to specify the time (such as 2s, 2m, 1h, etc.). If no suffix is specified then milliseconds is assumed. Default value is 1d| Debugging Disk balancer generates two output files. The nodename.before.json contains the state of cluster that we read from the namenode. This file contains detailed information about datanodes and volumes. 
if you plan to post this file to an apache JIRA, you might want to replace your hostnames and volume paths since it may leak your personal" }, { "data": "You can also trim this file down to focus only on the nodes that you want to report in the JIRA. The nodename.plan.json contains the plan for the specific node. This plan file contains as a series of steps. A step is executed as a series of move operations inside the datanode. To diff the state of a node before and after, you can either re-run a plan command and diff the new nodename.before.json with older before.json or run report command against the node. To see the progress of a running plan, please run query command with option -v. This will print out a set of steps -- Each step represents a move operation from one disk to another. The speed of move is limited by the bandwidth that is specified. The default value of bandwidth is set to 10MB/sec. if you do a query with -v option you will see the following values. \"sourcePath\" : \"/data/disk2/hdfs/dn\", \"destPath\" : \"/data/disk3/hdfs/dn\", \"workItem\" : \"startTime\" : 1466575335493, \"secondsElapsed\" : 16486, \"bytesToCopy\" : 181242049353, \"bytesCopied\" : 172655116288, \"errorCount\" : 0, \"errMsg\" : null, \"blocksCopied\" : 1287, \"maxDiskErrors\" : 5, \"tolerancePercent\" : 10, \"bandwidth\" : 10 source path - is the volume we are copying from. dest path - is the volume to where we are copying to. start time - is current time in milliseconds. seconds elapsed - is updated whenever we update the stats. This might be slower than the wall clock time. bytes to copy - is number of bytes we are supposed to copy. We copy plus or minus a certain percentage. So often you will see bytesCopied -- as a value lesser than bytes to copy. In the default case, getting within 10% of bytes to move is considered good enough. bytes copied - is the actual number of bytes that we moved from source disk to destination disk. error count - Each time we encounter an error we will increment the error count. As long as error count remains less than max error count (default value is 5), we will try to complete this move. if we hit the max error count we will abandon this current step and execute the next step in the plan. error message - Currently a single string that reports the last error message. Older messages should be in the datanode log. blocks copied - Number of blocks copied. max disk errors - The configuration used for this move step. currently it will report the default config value, since the user interface to control these values per step is not in place. It is a future work item. The default or the command line value specified in plan command is used for this value. tolerance percent - This represents how much off we can be while moving data. In a busy cluster this allows admin to say, compute a plan, but I know this node is being used so it is okay if disk balancer can reach +/- 10% of the bytes to be copied. bandwidth - This is the maximum aggregate source disk bandwidth used by the disk balancer. After moving a block disk balancer computes how many seconds it should have taken to move that block with the specified bandwidth. If the actual move took less time than expected, then disk balancer will sleep for that duration. Please note that currently all moves are executed sequentially by a single thread." } ]
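Putting the commands above together, a typical rebalancing session against a single datanode might look like the following. The hostname, output directory and plan file name are placeholders — the plan command prints the actual location of the plan file it wrote.

```shell
# 1. Generate a plan (writes <nodename>.before.json and <nodename>.plan.json)
hdfs diskbalancer -plan node1.mycluster.com -thresholdPercentage 5 -out /system/diskbalancer/node1

# 2. Execute the generated plan on that datanode
hdfs diskbalancer -execute /system/diskbalancer/node1/node1.mycluster.com.plan.json

# 3. Poll progress of the running plan; -v prints each move step
hdfs diskbalancer -query node1.mycluster.com -v

# 4. If necessary, abort the running plan
hdfs diskbalancer -cancel /system/diskbalancer/node1/node1.mycluster.com.plan.json
```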
{ "category": "App Definition and Development", "file_name": "as.md", "project_name": "YDB", "subcategory": "Database" }
[ { "data": "Can be used in the following scenarios: Adding a short name (alias) for columns or tables within the query. Using named arguments in function calls. To specify the target type in the case of explicit type casting, see . {% if select_command != \"SELECT STREAM\" %} Examples: ```yql SELECT key AS k FROM my_table; ``` ```yql SELECT t.key FROM my_table AS t; ``` ```yql SELECT MyFunction(key, 123 AS myoptionalarg) FROM my_table; ``` {% else %} Examples: ```yql SELECT STREAM key AS k FROM my_stream; ``` ```yql SELECT STREAM s.key FROM my_stream AS s; ``` ```yql SELECT STREAM MyFunction(key, 123 AS myoptionalarg) FROM my_stream; ``` {% endif %}" } ]
{ "category": "App Definition and Development", "file_name": "months_add.md", "project_name": "StarRocks", "subcategory": "Database" }
[ { "data": "displayed_sidebar: \"English\" Adds a specified number of months to the date, accurate to the month. The function provides similar functionalities. ```Haskell DATETIME months_add(DATETIME expr1, INT expr2); ``` `expr1`: the start time. It must be of the DATETIME or DATE type. `expr2`: the months to add. It must be of the INT type. It can be greater, equal, or less than zero. A negative value subtracts months from `date`. Returns a DATETIME value. ```Plain select months_add('2019-08-01 13:21:03', 8); +--+ | months_add('2019-08-01 13:21:03', 8) | +--+ | 2020-04-01 13:21:03 | +--+ select months_add('2019-08-01', 8); +--+ | months_add('2019-08-01', 8) | +--+ | 2020-04-01 00:00:00 | +--+ select months_add('2019-08-01 13:21:03', -8); ++ | months_add('2019-08-01 13:21:03', -8) | ++ | 2018-12-01 13:21:03 | ++ select months_add('2019-02-28 13:21:03', 1); +--+ | months_add('2019-02-28 13:21:03', 1) | +--+ | 2019-03-28 13:21:03 | +--+ select months_add('2019-01-30 13:21:03', 1); +--+ | months_add('2019-01-30 13:21:03', 1) | +--+ | 2019-02-28 13:21:03 | +--+ ```" } ]
{ "category": "App Definition and Development", "file_name": "feat-11610.en.md", "project_name": "EMQ Technologies", "subcategory": "Streaming & Messaging" }
[ { "data": "Implemented a preliminary Role-Based Access Control for the Dashboard. In this version, there are two predefined roles: superuser This role could access all resources. viewer This role can only view resources and data, corresponding to all GET requests in the REST API." } ]
{ "category": "App Definition and Development", "file_name": "set.md", "project_name": "ClickHouse", "subcategory": "Database" }
[ { "data": "slug: /en/sql-reference/statements/set sidebar_position: 50 sidebar_label: SET ``` sql SET param = value ``` Assigns `value` to the `param` for the current session. You cannot change this way. You can also set all the values from the specified settings profile in a single query. ``` sql SET profile = 'profile-name-from-the-settings-file' ``` For more information, see ." } ]
{ "category": "App Definition and Development", "file_name": "concepts.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "Definitions of the terms used in this project ============================================================================== Two main strategies are used: Dynamic Masking* offers an altered view of the real data without modifying it. Some users may only read the masked data, others may access the authentic version. Permanent Destruction* is the definitive action of substituting the sensitive information with uncorrelated data. Once processed, the authentic data cannot be retrieved. The data can be altered with several techniques: Deletion or Nullification* simply removes data. Static Substitution* consistently replaces the data with a generic value. For instance: replacing all values of a TEXT column with the value \"CONFIDENTIAL\". Variance* is the action of \"shifting\" dates and numeric values. For example, by applying a +/- 10% variance to a salary column, the dataset will remain meaningful. Generalization* reduces the accuracy of the data by replacing it with a range of values. Instead of saying \"Bob is 28 years old\", you can say \"Bob is between 20 and 30 years old\". This is useful for analytics because the data remains true. Shuffling* mixes values within the same columns. This method is open to being reversed if the shuffling algorithm can be deciphered. Randomization replaces sensitive data with random-but-plausible* values. The goal is to avoid any identification from the data record while remaining suitable for testing, data analysis and data processing. Partial scrambling* is similar to static substitution but leaves out some part of the data. For instance : a credit card number can be replaced by '40XX XXXX XXXX XX96' Custom rules* are designed to alter data following specific needs. For instance, randomizing simultaneously a zipcode and a city name while keeping them coherent. Pseudonymization is a way to protect* personal information by hiding it using additional information. Encryption and Hashing are two examples of pseudonymization techniques. However a pseudonymizated data is still linked to the original data." } ]
{ "category": "App Definition and Development", "file_name": "delphix.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: Delphix linkTitle: Delphix description: Use Delphix to virtualize your YugabyteDB database. menu: preview_integrations: identifier: Delphix parent: integrations-platforms weight: 571 type: docs Delphix is a data operations company that enables organizations to modernize their data infrastructure by providing software solutions for data management and delivery. Some of its key features include data virtualization, data masking, data replication, and data management across on-premises, cloud, and hybrid environments. Delphix offers the following key capabilities as part of its integration with YugabyteDB: Select Connector for continuous data (Data Virtualization): Delphix can virtualize YugabyteDB databases by ingesting data to the Delphix platform, and create lightweight copies of YugabyteDB that can be provisioned quickly. Select Connector for continuous compliance (Data Masking): Delphix provides data masking capabilities for YugabyteDB to protect sensitive information in the database, and also comply with data privacy regulations. The resulting virtual copies can then be used for different use cases such as development, testing, analytics, reporting, and so on. For more information, refer to in the Delphix documentation." } ]
{ "category": "App Definition and Development", "file_name": "01-merge-changes.md", "project_name": "Hazelcast IMDG", "subcategory": "Database" }
[ { "data": "The Hazelcast and Hazelcast Jet products were merged together to provide a seamless experience. Changes done during the merge of Hazelcast and Hazelcast Jet codebases are summarized in this document. Hazelcast Jet repository was merged into Hazelcast repository. The name Jet doesn't refer to standalone product anymore, but is kept when referring to the streaming engine part of the product - the Jet engine. Hazelcast repository - https://github.com/hazelcast/hazelcast Hazelcast Jet repository - https://github.com/hazelcast/hazelcast-jet After some preparatory commits removing conflicting files (e.g. root pom.xml), the merge was performed by the following command: ```bash $ git merge --allow-unrelated-histories hazelcast-jet/master ``` This preserves history for both projects and allows easy rebasing of patches both ways (forward porting and backporting from/to Hazelcast Jet). Hazelcast repository continues its development as with a next major version. `hazelcast-jet-core` code was merged into `hazelcast` module `hazelcast-jet-spring` module was merged into `hazelcast-spring` and removed `hazelcast-jet-distribution` module was removed, `hazelcast-distribution` module was adapted to produce very similar artifacts - full and slim distribution the extension modules were kept under `extensions/*`, keeping the groupId and artifactId coordinates, this makes it easy for existing users to migrate. `examples` modules were deleted, they will be merged with imdg example, which are stored in a separate repository at https://github.com/hazelcast/hazelcast-code-samples/ `hazelcast-jet-sql` module was merged into `hazelcast-sql` module (this actually happened after the merge, but for users it is indistinguishable).` JetNodeExtension was merged with DefaultNodeExtension The Jet service is started when a HazelcastInstance is started HazelcastInstance can return JetInstance via #getJetInstance (this will change before the release into a `JetService getJet()` where `JetService` provides a subset of `JetInstance` methods) Jet data structures created by com.hazelcast.jet.impl.JobRepository are created lazily when needed JetConfig is now a field in `com.hazelcast.config.Config` Jet run tests in parallel by default, Jet tests were marked with `QuickTest` and `ParallelJVMTest` `@Category` accordingly IMDG's `smallInstanceConfig()` for tests sets the `com.hazelcast.jet.config.InstanceConfig#setCooperativeThreadCount` to 2 Jet cooperative threads are suspended if no jobs are running. One can disable Jet code upload for security reasons. Jet code was adapted to IMDG checkstyle rules some checkstyle rules were relaxed There are 2 outstanding items to resolve: some checkstyle rules were ignored for Jet code Jet had some stricter rules regarding public javadoc, this is now not in place, ideally we should apply it to the whole product IMDG and Jet distributions were merged, the result is closer to the Jet distribution as there were more features. the distribution is started using `bin/hazelcast-start` command the cli tool is under `bin/hazelcast` (used to be `bin/jet`) the configuration files hazelcast.yaml - configuration file for hazelcast, contains also configuration for the Jet engine (follows the structure where the JetConfig is a field of Config) hazelcast-client.yaml - configuration file for the CLI tool" } ]
{ "category": "App Definition and Development", "file_name": "3.11.7.md", "project_name": "RabbitMQ", "subcategory": "Streaming & Messaging" }
[ { "data": "RabbitMQ `3.11.7` is a maintenance release in the `3.11.x` . Please refer to the upgrade section from if upgrading from a version prior to 3.11.0. This release requires Erlang 25. has more details on Erlang version requirements for RabbitMQ. As of 3.11.0, RabbitMQ requires Erlang 25. Nodes will fail to start on older Erlang releases. Erlang 25 as our new baseline means much improved performance on ARM64 architectures, across all architectures, and the most recent TLS 1.3 implementation available to all RabbitMQ 3.11 users. Release notes can be found on GitHub at . `directexchangerouting_v2` feature flag could sometimes fail to enable on freshly started nodes. GitHub issue: Improvements to the feature flag subsystem. GitHub issues: , , Preserve additional information in the log message when heartbeat frame cannot be sent due to a TCP timeout. GitHub issue: `rabbitmqctl add_vhost` now coerces a single string value of `--tags` into an array. GitHub issue: Core server did not correctly translate empty stream message bodies to AMQP 0-9-1 when a stream was consumed by an AMQP 0-9-1 (as opposed to RabbitMQ Stream protocol) client. GitHub issue: `ERROR` frames delivery is now correctly delivered w.r.t. TCP connection closure for clients that run into certain types of exceptions. Contributed by @csicar. GitHub issue: `prometheus.erl` was upgraded To obtain source code of the entire distribution, please download the archive named `rabbitmq-server-3.11.7.tar.xz` instead of the source tarball produced by GitHub." } ]
{ "category": "App Definition and Development", "file_name": "how-to-release.md", "project_name": "Vitess", "subcategory": "Database" }
[ { "data": "In this section we describe our current release process. Below is a summary of this document. -- This section highlights the different pre-requisites the release team has to meet before releasing. The tool `gh` must be installed locally and ready to be used. You must have access to the Java release, more information in the section. You must be able to create branches and have admin right on the `vitessio/vitess` and `planetscale/vitess-operator` repositories. -- A new major version of Vitess is released every four months. For each major version there is at least one release candidate, which we release three weeks before the GA version. We usually create the RC1 during the first week of the month, and the GA version three weeks later. Before creating RC1, there is a code freeze. Assuming the release of RC1 happens on a Tuesday, the release branch will be frozen Friday of the previous week. This allows us to test that the release branch can be released and avoid discovering unwanted events during the release day. Once the RC1 is released, there are three more weeks to backport bug fixes into the release branches. However, we also proceed to a code freeze the Friday before the GA release. (Assuming GA is on a Tuesday) Regarding patch releases, no code freeze is planned. For each release, it is recommended to create an issue like to track the current and past progress of a release. It also allows us to document what happened during a release. -- This step happens a few weeks before the actual release (whether it is an RC, GA or a patch release). The main goal of this step is to make sure everything is ready to be released for the release day. That includes: Making sure Pull Requests are being reviewed and merged. > - All the Pull Requests that need to be in the release must be reviewed and merged before the code freeze. > - The code freeze usually happens a few days before the release. Making sure the people doing the release have access to all the tools and infrastructure needed to do the release. > - This includes write access to the Vitess repository and to the Maven repository. Preparing and cleaning the release notes summary. > - If the release does not contain significant changes (i.e. a small patch release) then this step can be skipped > - One or more Pull Requests have to be submitted in advance to create and update the release summary. > - The summary files are located in: `./changelog/.0/../summary.md`. > - The summary file for a release candidate is the same as the one for the GA release. > - Make sure to run `go run ./go/tools/releases/releases.go` to update the `changelog` directory with the latest release notes. Finishing the blog post, and coordinating with the different organizations for cross-posting. Usually CNCF and PlanetScale. This step applies only for GA releases. > - The blog post must be finished and reviewed. > - A Pull Request on the website repository of Vitess has to be created so we can easily publish the blog during the release day. Code freeze. > - As soon as we go into code freeze, if we are doing an RC, create the release branch. > - If we are doing a GA release, do not merge any new Pull" }, { "data": "> - The guide on how to do a code freeze is available in the section. > - It is not advised to merge a PR during code freeze, but if it is deemed necessary by the release lead, then follow the steps in section. Create the Vitess release. > - A guide on how to create a Vitess release is available in the section. 
> - This step will create a Release Pull Request, it must be reviewed and merged before the release day. The release commit will be used to tag the release. Preparing the Vitess Operator release. > - While the Vitess Operator is located in a different repository, we also need to do a release for it. > - The Operator follows the same cycle: RC1 -> GA -> Patches. > - Documentation for the pre-release of the Vitess Operator is available . Update the website documentation. > - We want to open a preparatory draft Pull Request to update the documentation. > - There are several pages we want to update: > - : we must add the new release to the list with all its information and link. The links can be broken (404 error) while we are preparing for the release, this is fine. > - : we must use the proper version increment for this guide and the proper SHA. The SHA will have to be modified once the Release Pull Request and the release is tagged is merged. > - , , , and : we must checkout to the proper release branch after cloning Vitess. > - If we are doing a GA or RC release follow the instructions below: > - There are two scripts in the website repository in `./tools/{ga|rc}_release.sh`, use them to update the website documentation. The scripts automate: > - For an RC, we need to create a new entry in the sidebar which represents the next version on `main` and mark the version we are releasing as RC. > - For a GA, we need to mark the version we are releasing as \"Stable\" and the next one as \"Development\". Create a new GitHub Milestone > - Our GitHub Milestones is a good representation of all our ongoing development cycles. We have a Milestone for `main` and for all release branches. > - After doing Code Freeze, we can create a new GitHub Milestone that matches the next development cycle. > - If we release a major version (v18.0.0-rc1): we must create a `v19.0.0` Milestone. > - If we release a patch release (v17.0.3): we must create a `v17.0.4` Milestone. -- On the release day, there are several things to do: Merge the Release Pull Request. > - During the code freeze, we created a Release Pull Request. It must be merged. Tag the Vitess release. > - A guide on how to tag a version is available in the section. Update the release notes on `main`. > - One Pull Request against `main` must be created, it will contain the new release notes that we are adding in the Release Pull Request. Create the corresponding Vitess operator release. > - Applies only to versions greater or equal to `v14.0.0`. > - If we are doing an RC release, then we will need to create the Vitess Operator RC too. If we are doing a GA release, we're also doing a GA release in the Operator. > - The Vitess Operator release documentation is available . Create the Java" }, { "data": "> - Applies only to GA releases. > - This step is explained in the section. Update the website documentation repository. > - Review the Website Pull Request that was opened during the Pre-Release. > - The git SHA used in should be updated with the new proper SHA for this release. > - Merge the Pull Request. Publish the blog post on the Vitess website. > - Applies only to GA releases. > - The corresponding Pull Request was created beforehand during the pre-release. Merge it. Make sure _arewefastyet_ starts benchmarking the new release. > - This can be done by visiting . > - New elements should be added to the execution queue. > - After a while, those elements will finish their execution and their status will be green. 
> - This step is even more important for GA releases as we often include a link to arewefastyet in the blog post. > - The benchmarks need to complete before announcing the blog posts or before they get cross-posted. Go back to dev mode on the release branch. > - The version constants across the codebase must be updated to `SNAPSHOT`. Ensure the k8s images are available on DockerHub. Close the current GitHub Milestone > - Once we are done releasing the current version, we must close its corresponding GitHub Milestone as the development cycle for it is over. > - This does not apply if we are releasing an RC release. For instance, if we are releasing `v18.0.0-rc1` we want to keep the `v18.0.0` milestone opened as development is not fully done for `v18.0.0`. > - For instance, if we release `v18.0.1`, we must close the `v18.0.1` Milestone as the development cycle for `v18.0.1` is over. > - When closing a Milestone, we need to look through all the PRs/Issues in that Milestone and re-assign a newer Milestone to them. -- Once the release is over, we need to announce it on both Slack and Twitter. We also want to make sure the blog post was cross-posted, if applicable. We need to verify that arewefastyet has finished the benchmark too. Moreover, once the roadmap discussions are over for the next release, we need to update the roadmap presented in the Vitess website (https://vitess.io/docs/resources/roadmap/). We must remove everything that is now done in this release and add new items based on the discussions. -- In this example our current version is `v14.0.3` and we release the version `v15.0.0`. Alongside Vitess' release, we also release a new version of the operator. Since we are releasing a release candidate here, the new version of the operator will also be a release candidate. In this example, the new operator version is `2.8.0`. It is important to note that before the RC, there is a code freeze during which we create the release branch. The release branch in this example is `release-15.0`. The example also assumes that `origin` is the `vitessio/vitess` remote. Fetch `github.com/vitessio/vitess`'s remote. ```shell git fetch origin ``` Creation of the Release Pull Request. > This step will create the Release Pull Request that will then be reviewed ahead of the release day. > The merge commit of that Pull Request will be used during the release day to tag the release. Run the `create_release` script using the Makefile: Release Candidate: ```shell make BASEBRANCH=\"release-15.0\" BASEREMOTE=\"origin\" RELEASEVERSION=\"15.0.0-rc1\" VTOPVERSION=\"2.8.0-rc1\" create_release ``` General Availability: ```shell make BASEBRANCH=\"release-15.0\" BASEREMOTE=\"origin\" RELEASEVERSION=\"15.0.0\"" }, { "data": "create_release ``` The script will prompt you `Pausing so release notes can be added. Press enter to continue`. We are now going to generate the release notes, continue to the next sub-step. Run the following command to generate the release notes. Note that you can omit the `--summary` flag if there are no summary. ```shell go run ./go/tools/release-notes --version \"v15.0.0\" --summary \"./changelog/15.0/15.0.0/summary.md\" ``` > Make sure to also run `go run ./go/tools/releases/releases.go` to update the `./changelog` directory. > Important note: The release note generation fetches a lot of data from the GitHub API. You might reach the API request limit. In which case you should use the `--threads=` flag and set an integer value lower than 10 (the default). 
This command will generate the release notes by looking at all the commits between the tag `v14.0.3` and the reference `HEAD`. It will also use the file located in `./changelog/15.0/15.0.0/summary.md` to prefix the release notes with a text that the maintainers wrote before the release. Please verify the generated release notes to make sure they are well-formatted and all the bookmarks are generated properly. Follow the instructions prompted by the `create_release` Makefile command's output in order to push the newly created branch and create the Release Pull Request on GitHub. If we are doing an RC release, it means we created a new branch from `main`. We need to update `main` with the next SNAPSHOT version. If `main` was on `15.0.0-SNAPSHOT`, we need to update it to `16.0.0-SNAPSHOT`. A simple find and replace in the IDE is sufficient; there are only a handful of files that must be changed: `version.go` and several Java files. -- This section is divided into two parts: . This step implies that you have created a beforehand and that it has been reviewed. The merge commit of this Release Pull Request will be used to tag the release. In this example, our current version is `v14.0.3` and we release the version `v15.0.0`. Alongside Vitess' release, we also release a new version of the operator. Since we are releasing a release candidate here, the new version of the operator will also be a release candidate. In this example, the new operator version is `2.8.0`. It is important to note that before the RC, there is a code freeze during which we create the release branch. The release branch in this example is `release-15.0`. The example also assumes that `origin` is the `vitessio/vitess` remote. Fetch `github.com/vitessio/vitess`'s remote. ```shell git fetch origin ``` Check out the merge commit of the Release Pull Request. Tag the release and push the tags: ```shell git tag v15.0.0 && git tag v0.15.0 && git push origin v15.0.0 && git push origin v0.15.0 ``` Create a Pull Request against the `main` branch with the release notes found in `./changelog/15.0/15.0.0/15_0_0_*.md`. Run the back to dev mode tool. ```shell make BASE_BRANCH=\"release-15.0\" BASE_REMOTE=\"origin\" RELEASE_VERSION=\"15.0.0-rc1\" DEV_VERSION=\"15.0.0-SNAPSHOT\" back_to_dev_mode ``` > You will then need to follow the instructions given by the output of the back_to_dev_mode Makefile command. You will need to push the newly created branch and open a Pull Request. Release the tag on the GitHub UI as explained in the following section. In the steps below, we use `v8.0.0` and `v9.0.0` as an example. On Vitess' GitHub repository main page, click on Code -> . On the Releases page, click on `Draft a new release`. When drafting a new release, we are asked to choose the release's tag and branch. We format the tag this way: `v9.0.0`. We append `-rcN` to the tag name for release candidates, with `N` being the increment of the release candidate. Copy/paste the previously built Release Notes into the description of the release. If this is a pre-release (`rc`), select the `pre-release` checkbox. And finally, click on `Publish release`. -- In this example we are going to do a code freeze on the `release-15.0` branch. If we are doing a release candidate, there won't be a branch yet, so we need to create it: ``` git fetch --all git checkout -b release-15.0 origin/main ``` Important: after creating the new branch `release-15.0`, we need to create new branch protection rules in the GitHub UI. The rules can be copied from those on the `main` branch. 
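The branch protection rules themselves are still copied by hand in the GitHub UI, as described above. As a minimal sketch (assuming an authenticated `gh` CLI with admin access to the repository, and using this example's repository and branch names), the GitHub REST API can be used to review the rules on `main` and to verify the new release branch afterwards:

```shell
# Hypothetical helper, not part of the Vitess release tooling: print the branch
# protection rules currently applied to main so they can be replicated by hand.
gh api repos/vitessio/vitess/branches/main/protection

# After recreating the rules on the release branch in the GitHub UI,
# confirm that the new branch's protection matches what main reports.
gh api repos/vitessio/vitess/branches/release-15.0/protection
```

Comparing the two outputs helps catch a forgotten required status check before the release branch starts receiving pull requests.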
The new branch will be based on `origin/main`, where `origin` points to `vitessio/vitess`. If we are not doing a release candidate, then the branch already exists and we can check it out. Now, if we are doing a GA release, let's update the branch: ``` git pull origin release-15.0 ``` Finally, let's run the code freeze script: ``` ./tools/code_freeze.sh freeze release-15.0 ``` The script will print the command that will allow you to push the code freeze change. Once pushed, open a PR that will be merged on `release-15.0`. Remember, you should also disable the Launchable integration from the newly created release branch. -- Warning: It is not advised to merge a PR during code-freeze. If it is deemed absolutely necessary, then the following steps can be followed. The PR that needs to be merged will be failing on the `Code Freeze` CI. To merge this PR, we'll have to mark this CI action as not required. You will need administrator privileges on the vitess repository to be able to make this change. Go to the GitHub repository and click on `Settings`. Under the `Code and automation` section, select `Branches`. Find the branch that you want to merge the PR against and then select `Edit`. Scroll down to find the list of required checks. Within this list find `Code Freeze` and click on the cross next to it to remove it from this list. Save your changes at the bottom of the page. Refresh the page of the PR, and you should be able to merge it. After merging the PR, you need to do two more things: Add `Code Freeze` back as a required check. Check if the release PR has any merge conflicts. If it does, fix them and push. -- Warning: This section's steps need to be executed only when releasing a new major version of Vitess, or if the Java packages changed from one minor/patch version to another. For this example, we assume we just released `v12.0.0`. Check out the release commit. ```shell git checkout v12.0.0 ``` Run `gpg-agent` to avoid Maven constantly prompting you for the password of your private key. Note that this can print error messages that can be ignored on Mac. ```bash eval $(gpg-agent --daemon --no-grab --write-env-file $HOME/.gpg-agent-info) export GPG_TTY=$(tty) export GPG_AGENT_INFO ``` Export the following to avoid any version conflicts: ```bash export MAVEN_OPTS=\"--add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.text=ALL-UNNAMED --add-opens=java.desktop/java.awt.font=ALL-UNNAMED\" ``` Deploy (upload) the Java code to the oss.sonatype.org repository: > Warning: After the deployment, the Java packages will be automatically released. Once released, you cannot delete them. The only option is to upload a newer version (e.g. increment the patch level). ```bash cd ./java/ mvn clean deploy -P release -DskipTests cd .. ``` It will take some time for artifacts to appear on
{ "category": "App Definition and Development", "file_name": "CHANGELOG-v2020.10.27-rc.1.md", "project_name": "KubeDB by AppsCode", "subcategory": "Database" }
[ { "data": "title: Changelog | KubeDB description: Changelog menu: docs_{{.version}}: identifier: changelog-kubedb-v2020.10.27-rc.1 name: Changelog-v2020.10.27-rc.1 parent: welcome weight: 20201027 product_name: kubedb menuname: docs{{.version}} sectionmenuid: welcome url: /docs/{{.version}}/welcome/changelog-v2020.10.27-rc.1/ aliases: /docs/{{.version}}/CHANGELOG-v2020.10.27-rc.1/ Prepare for release v0.1.0-rc.1 (#83) Prepare for release v0.1.0-beta.6 (#82) Prepare for release v0.1.0-beta.5 (#81) Update KubeDB api (#80) Update readme Update repository config (#79) Prepare for release v0.1.0-beta.4 (#78) Update KubeDB api (#73) Replace getConditions with kmapi.NewCondition (#71) Update License header (#70) Add RedisOpsRequest Controller (#28) Add MySQL OpsRequest Controller (#14) Add Reconfigure TLS (#69) Add Restart Operation, Readiness Criteria and Remove Configuration (#59) Update repository config (#66) Publish docker images to ghcr.io (#65) Update repository config (#60) Reconfigure MongoDB with Vertical Scaling (#57) Fix MongoDB Upgrade (#51) Integrate cert-manager for Elasticsearch (#56) Update repository config (#54) Update repository config (#52) Update Kubernetes v1.18.9 dependencies (#49) Add license verifier (#50) Add MongoDBOpsRequest Controller (#20) Use cert-manager v1 api (#47) Update apis (#45) Dynamically Generate Cluster Domain (#43) Use updated certstore & blobfs (#42) Add TLS support for redis (#35) Various fixes (#41) Add TLS/SSL configuration using Cert Manager for MySQL (#34) Update certificate spec for MongoDB and PgBouncer (#40) Update new Subject sped for certificates (#38) Update to cert-manager v0.16.0 (#37) Update to Kubernetes v1.18.3 (#36) Fix cert-manager integration for PgBouncer (#32) Update to Kubernetes v1.18.3 (#31) Include Makefile.env (#30) Disable e2e tests (#29) Update to Kubernetes v1.18.3 (#27) Update .kodiak.toml Add script to update release tracker on pr merge (#26) Rename docker image to kubedb-enterprise Create .kodiak.toml Format CI files Fix e2e tests (#25) Fix e2e tests using self-hosted GitHub action runners (#23) Update to kubedb.dev/[email protected] (#24) Update to Kubernetes v1.18.3 (#21) Use CRD v1 for Kubernetes >= 1.16 (#19) Update to Kubernetes v1.18.3 (#18) Update cert-manager util Configure GCR Docker credential helper in release pipeline Vendor kubedb.dev/[email protected] Revendor kubedb.dev/apimachinery@master Update crazy-max/ghaction-docker-buildx flag Merge pull request #17 from appscode/x7 Remove existing cluster Remove support for k8s 1.11 Run e2e tests on GitHub actions Use GCRSERVICEACCOUNTJSONKEY env in CI Use gcr.io/appscode as docker registry (#16) Run on self-hosted hosts Store enterprise images in `gcr.io/appscode` (#15) Use stash.appscode.dev/[email protected] (#12) Don't handle deleted objects. 
(#11) Fix MongoDB cert-manager integration (#10) Add cert-manager integration for MongoDB (#9) Refactor PgBouncer controller into its pkg (#8) Use SecretInformer from apimachinery (#5) Use non-deprecated Exporter fields (#4) Cert-Manager support for PgBouncer [Client TLS] (#2) Fix plain text secret in exporter container of StatefulSet (#5) Update client-go to kubernetes-1.16.3 (#7) Use charts to install operator (#6) Add add-license make target Enable e2e tests in GitHub actions (#4) Initial implementation (#2) Update go.yml Enable GitHub actions Clone kubedb/postgres repo (#1) Merge commit 'f78de886ed657650438f99574c3b002dd3607497' as 'hack/libbuild' Add docker badge Update MergeServicePort and PatchServicePort apis Add port constants (#635) Create separate governing service for each database (#634) Update readme Add MySQL constants (#633) Update Kubernetes v1.18.9 dependencies (#632) Set prx as ProxySQL short code (#631) Update for release [email protected] (#630) Set default CA secret name even if the SSL is disabled. (#624) Add host functions for different components of MongoDB (#625) Refine api (#629) Update Kubernetes v1.18.9 dependencies (#626) Add MongoDBCustomConfigFile constant Update MySQL ops request custom config api (#623) Rename redis ConfigMapName to ConfigSecretName API refinement (#622) Update Kubernetes v1.18.9 dependencies (#621) Handle halted condition (#620) Update constants for Elasticsearch conditions (#618) Use core/v1 ConditionStatus (#619) Update Kubernetes v1.18.9 dependencies (#617) Fix StatefulSet controller (#616) Add spec.init.initialized field (#615) Implement ReplicasAreReady (#614) Update appcatalog dependency Update swagger.json Fix build (#613) Fix build Switch kubedb apiVersion to v1alpha2 (#612) Add Volume Expansion and Configuration for MySQL" }, { "data": "(#607) Add `alias` in the name of MongoDB server certificates (#611) Remove GetMonitoringVendor method Fix build Update monitoring api dependency (#610) Remove deprecated fields for monitoring (#609) Add framework support for conditions (#608) Bring back mysql ops spec StatefulSetOrdinal field Add VerticalAutoscaler type (#606) Add MySQL constant (#604) Fix typo Update ops request enumerations Revise ops request apis (#603) Revise api conditions (#602) Update DB condition types and phases (#598) Write data restore completion event using dynamic client (#601) Update Kubernetes v1.18.9 dependencies (#600) Update for release [email protected] (#599) Update Kubernetes v1.18.9 dependencies (#597) Add DB conditions Rename ES root-cert to ca-cert (#594) Remove spec.paused & deprecated fields DB crds (#596) Use `status.conditions` to handle database initialization (#593) Update Kubernetes v1.18.9 dependencies (#595) Add helper methods for MySQL (#592) Rename client node to ingest node (#583) Update repository config (#591) Update repository config (#590) Update Kubernetes v1.18.9 dependencies (#589) Update Kubernetes v1.18.3 dependencies (#588) Add event recorder in controller struct (#587) Update Kubernetes v1.18.3 dependencies (#586) Initialize db from stash restoresession/restoreBatch (#567) Update for release [email protected] (#585) Update Kubernetes v1.18.3 dependencies (#584) Add some `MongoDB` and `MongoDBOpsRequest` Constants (#582) Add primary and secondary role constant for MySQL (#581) Update Kubernetes v1.18.3 dependencies (#580) Add Functions to get Default Probes (#579) Remove CertManagerClient client Remove unused constants for ProxySQL Update Kubernetes v1.18.3 dependencies (#578) Update 
redis constants (#575) Remove spec.updateStrategy field (#577) Remove description from CRD yamls (#576) Add autoscaling crds (#554) Fix build Rename PgBouncer archiver to client Handle shard scenario for MongoDB cert names (#574) Add MongoDB Custom Config Spec (#562) Support multiple certificates per DB (#555) Update Kubernetes v1.18.3 dependencies (#573) Update CRD yamls Implement ServiceMonitorAdditionalLabels method (#572) Make ServiceMonitor name same as stats service (#563) Update for release [email protected] (#571) Update for release [email protected] (#570) Update for release [email protected] (#569) Update for release [email protected] (#568) Update Kubernetes v1.18.3 dependencies (#565) Update Kubernetes v1.18.3 dependencies (#564) Update Kubernetes v1.18.3 dependencies (#561) Update Kubernetes v1.18.3 dependencies (#560) Fix MySQL enterprise condition's constant (#559) Update Kubernetes v1.18.3 dependencies (#558) Update Kubernetes v1.18.3 dependencies (#557) Add MySQL Constants (#553) Add {Horizontal,Vertical}ScalingSpec for Redis (#534) Enable TLS for Redis (#546) Add Spec for MongoDB Volume Expansion (#548) Add Subject spec for Certificate (#552) Add email SANs for certificate (#551) Update to [email protected] (#550) Update to Kubernetes v1.18.3 (#549) Make ElasticsearchVersion spec.tools optional (#526) Add Conditions Constant for MongoDBOpsRequest (#535) Update to Kubernetes v1.18.3 (#547) Add Storage Engine Support for Percona Server MongoDB (#538) Remove extra - from prefix/suffix (#543) Update to Kubernetes v1.18.3 (#542) Update for release [email protected] (#541) Update for release [email protected] (#540) Update License notice (#539) Use Allowlist and Denylist in MySQLVersion (#537) Update to Kubernetes v1.18.3 (#536) Update update-release-tracker.sh Update update-release-tracker.sh Add script to update release tracker on pr merge (#533) Update .kodiak.toml Add Ops Request const (#529) Add constants for mutator & validator group names (#532) Unwrap top level api folder (#531) Make RedisOpsRequest Namespaced (#530) Update .kodiak.toml Update to Kubernetes v1.18.3 (#527) Create .kodiak.toml Update to Kubernetes v1.18.3 Update comments Use CRD v1 for Kubernetes >= 1.16 (#525) Remove defaults from CRD v1beta1 Use crd.Interface in Controller (#524) Generate both v1beta1 and v1 CRD YAML (#523) Update to Kubernetes v1.18.3 (#520) Change MySQL `[]ContainerResources` to `core.ResourceRequirements` (#522) Merge pull request #521 from kubedb/mongo-vertical Change `[]ContainerResources` to `core.ResourceRequirements` Update `modification request` to `ops request` (#519) Fix linter warnings Rename api group to ops.kubedb.com (#518) Merge pull request #511 from pohly/memcached-pmem memcached: add dataVolume Merge pull" }, { "data": "#517 from kubedb/mg-scaling Flatten api structure Add MongoDBModificationRequest Scaling Spec Update comment for UpgradeSpec Review DBA crds (#516) Merge pull request #509 from kubedb/mysql-upgrade Fix type names and definition Update MySQLModificationRequest CRD Merge pull request #501 from kubedb/redis-modification Use standard condition from kmodules Update RedisModificationRequest CRD Merge pull request #503 from kubedb/elastic-upgrade Use standard conditions from kmodules Update dba api for elasticsearchModificationRequest Merge pull request #499 from kubedb/mongodb-modification Use standard conditions from kmodules Add MongoDBModificationRequest Spec Fix Update*Status helpers (#515) Merge pull request #512 from kubedb/prestop-mongos Use 
recommended kubernetes app labels (#514) Add Enum markers to api types Add Default PreStop Hook for Mongos Trigger the workflow on push or pull request Regenerate api types Update CHANGELOG.md Add requireSSL field to MySQL crd (#506) Rename Elasticsearch NODE_ROLE constant Rename Mongo SHARD_INDEX constant Add default affinity rules for Redis (#508) Set default affinity if not provided for Elasticsearch (#507) Prepare for release v0.14.0-rc.1 (#532) Prepare for release v0.14.0-beta.6 (#531) Update KubeDB api (#530) Prepare for release v0.14.0-beta.5 (#529) Update KubeDB api (#528) Update readme Update KubeDB api (#527) Update repository config (#526) Update KubeDB api (#525) Update Kubernetes v1.18.9 dependencies (#524) Update KubeDB api (#523) Update for release [email protected] (#522) Update KubeDB api (#521) Update KubeDB api (#520) Update Kubernetes v1.18.9 dependencies (#519) Update KubeDB api (#518) Update KubeDB api (#517) Update KubeDB api (#516) Update KubeDB api (#515) Update Kubernetes v1.18.9 dependencies (#514) Update KubeDB api (#513) Update KubeDB api (#512) Update KubeDB api (#511) Don't update krew manifest for pre-releases Publish to krew index (#510) Update KubeDB api (#509) Update Kubernetes v1.18.9 dependencies (#508) Add completion command (#507) Update KubeDB api (#506) Update KubeDB api (#505) Update KubeDB api (#504) Update KubeDB api (#503) Update Kubernetes v1.18.9 dependencies (#502) Update KubeDB api (#500) Update KubeDB api (#499) Update KubeDB api (#498) Update for release [email protected] (#497) Update Kubernetes v1.18.9 dependencies (#496) Update KubeDB api (#495) Update Kubernetes v1.18.9 dependencies (#494) Update repository config (#493) Update Kubernetes v1.18.9 dependencies (#492) Update Kubernetes v1.18.3 dependencies (#491) Prepare for release v0.14.0-beta.3 (#490) Update Kubernetes v1.18.3 dependencies (#489) Update for release [email protected] (#488) Update Kubernetes v1.18.3 dependencies (#487) Update Kubernetes v1.18.3 dependencies (#486) Use AppsCode Community License (#485) Prepare for release v0.14.0-beta.2 (#484) Update Kubernetes v1.18.3 dependencies (#483) Update Kubernetes v1.18.3 dependencies (#482) Update for release [email protected] (#481) Update for release [email protected] (#480) Update for release [email protected] (#479) Update for release [email protected] (#478) Update Kubernetes v1.18.3 dependencies (#477) Update Kubernetes v1.18.3 dependencies (#476) Update Kubernetes v1.18.3 dependencies (#475) Update Kubernetes v1.18.3 dependencies (#474) Update Kubernetes v1.18.3 dependencies (#473) Update Kubernetes v1.18.3 dependencies (#472) Use actions/upload-artifact@v2 Update to Kubernetes v1.18.3 (#471) Update to Kubernetes v1.18.3 (#470) Update to Kubernetes v1.18.3 (#469) Fix binary build path Prepare for release v0.14.0-beta.1 (#468) Update for release [email protected] (#466) Update for release [email protected] (#465) Disable autogen tags in docs (#464) Update License (#463) Update to Kubernetes v1.18.3 (#462) Add workflow to update docs (#461) Update update-release-tracker.sh Update update-release-tracker.sh Add script to update release tracker on pr merge (#460) Make release non-draft Update .kodiak.toml Update to Kubernetes v1.18.3 (#459) Update to Kubernetes v1.18.3 Create .kodiak.toml Update to Kubernetes v1.18.3 (#458) Update dependencies Update crazy-max/ghaction-docker-buildx flag Revendor kubedb.dev/apimachinery@master Cleanup cli commands (#454) Trigger the workflow on push or pull request Update readme (#457) 
Create draft GitHub release when tagged (#456) Convert kubedb cli into a `kubectl dba` plgin (#455) Revendor dependencies Update client-go to kubernetes-1.16.3 (#453) Add add-license make target Add license header to files (#452) Split imports into 3 parts (#451) Add release workflow script (#450) Enable GitHub actions Update changelog Prepare for release" }, { "data": "(#395) Prepare for release v0.14.0-beta.6 (#394) Update MergeServicePort and PatchServicePort apis (#393) Always set protocol for service ports Create SRV records for governing service (#392) Prepare for release v0.14.0-beta.5 (#391) Create separate governing service for each database (#390) Update KubeDB api (#389) Update readme Update repository config (#388) Prepare for release v0.14.0-beta.4 (#387) Update KubeDB api (#386) Make database's phase NotReady as soon as the halted is removed (#375) Update Kubernetes v1.18.9 dependencies (#385) Update Kubernetes v1.18.9 dependencies (#383) Update KubeDB api (#382) Update for release [email protected] (#381) Fix init validator (#379) Update KubeDB api (#380) Update KubeDB api (#378) Update Kubernetes v1.18.9 dependencies (#377) Update KubeDB api (#376) Update KubeDB api (#374) Update KubeDB api (#373) Integrate cert-manager and status.conditions (#357) Update repository config (#372) Update repository config (#371) Update repository config (#370) Publish docker images to ghcr.io (#369) Update repository config (#363) Update Kubernetes v1.18.9 dependencies (#362) Update for release [email protected] (#361) Update Kubernetes v1.18.9 dependencies (#360) Update Kubernetes v1.18.9 dependencies (#358) Rename client node to ingest node (#346) Update repository config (#356) Update repository config (#355) Update Kubernetes v1.18.9 dependencies (#354) Use common event recorder (#353) Update Kubernetes v1.18.3 dependencies (#352) Prepare for release v0.14.0-beta.3 (#351) Use new `spec.init` section (#350) Update Kubernetes v1.18.3 dependencies (#349) Add license verifier (#348) Update for release [email protected] (#347) Update Kubernetes v1.18.3 dependencies (#345) Use background propagation policy Update Kubernetes v1.18.3 dependencies (#343) Use AppsCode Community License (#342) Fix unit tests (#341) Update Kubernetes v1.18.3 dependencies (#340) Prepare for release v0.14.0-beta.2 (#339) Update release.yml Add support for Open-Distro-for-Elasticsearch (#303) Update Kubernetes v1.18.3 dependencies (#333) Update Kubernetes v1.18.3 dependencies (#332) Update Kubernetes v1.18.3 dependencies (#331) Update Kubernetes v1.18.3 dependencies (#330) Update Kubernetes v1.18.3 dependencies (#329) Update Kubernetes v1.18.3 dependencies (#328) Remove dependency on enterprise operator (#327) Update to cert-manager v0.16.0 (#326) Build images in e2e workflow (#325) Update to Kubernetes v1.18.3 (#324) Allow configuring k8s & db version in e2e tests (#323) Trigger e2e tests on /ok-to-test command (#322) Update to Kubernetes v1.18.3 (#321) Update to Kubernetes v1.18.3 (#320) Prepare for release v0.14.0-beta.1 (#319) Update for release [email protected] (#317) Include Makefile.env Allow customizing chart registry (#316) Update for release [email protected] (#315) Update License (#314) Update to Kubernetes v1.18.3 (#313) Update ci.yml Load stash version from .env file for make (#312) Update update-release-tracker.sh Update update-release-tracker.sh Add script to update release tracker on pr merge (#311) Update .kodiak.toml Various fixes (#310) Update to Kubernetes v1.18.3 (#309) Update to Kubernetes 
v1.18.3 Create .kodiak.toml Use CRD v1 for Kubernetes >= 1.16 (#308) Update to Kubernetes v1.18.3 (#307) Fix e2e tests (#306) Merge pull request #302 from kubedb/multi-region Revendor kubedb.dev/[email protected] Add support for multi-regional cluster Update stash install commands Update crazy-max/ghaction-docker-buildx flag Use updated operator labels in e2e tests (#304) Pass annotations from CRD to AppBinding (#305) Trigger the workflow on push or pull request Update CHANGELOG.md Update labelSelector for statefulsets (#300) Make master service headless & add rest-port to all db nodes (#299) Use stash.appscode.dev/[email protected] (#301) Introduce spec.halted and removed dormant and snapshot crd (#296) Add spec.selector fields to the governing service (#297) Use [email protected] release (#298) Add `Pause` feature (#295) Refactor CI pipeline to build once (#294) Fix e2e tests on GitHub actions (#292) fix bug (#293) Use Go 1.13 in CI (#291) Take out elasticsearch docker images and Matrix test (#289) Fix default make command Update catalog values for make install command Use charts to install operator (#290) Add add-license make target Skip libbuild folder from checking license Add license header to files (#288) Enable make ci (#287) Remove" }, { "data": "(#286) Fix E2E tests in github action (#285) Prepare for release v0.7.0-rc.1 (#297) Prepare for release v0.7.0-beta.6 (#296) Update MergeServicePort and PatchServicePort apis (#295) Create SRV records for governing service (#294) Make database's phase NotReady as soon as the halted is removed (#293) Prepare for release v0.7.0-beta.5 (#292) Create separate governing service for each database (#291) Update KubeDB api (#290) Update readme Prepare for release v0.7.0-beta.4 (#289) Update MongoDB Conditions (#280) Update KubeDB api (#288) Update Kubernetes v1.18.9 dependencies (#287) Update KubeDB api (#286) Update for release [email protected] (#285) Fix init validator (#283) Update KubeDB api (#284) Update KubeDB api (#282) Update Kubernetes v1.18.9 dependencies (#281) Use MongoDBCustomConfigFile constant Update KubeDB api (#279) Update KubeDB api (#278) Update KubeDB api (#277) Update Kubernetes v1.18.9 dependencies (#276) Update KubeDB api (#275) Update KubeDB api (#274) Update KubeDB api (#272) Update repository config (#271) Update repository config (#270) Initialize statefulset watcher from cmd/server/options.go (#269) Update KubeDB api (#268) Update Kubernetes v1.18.9 dependencies (#267) Publish docker images to ghcr.io (#266) Update KubeDB api (#265) Update KubeDB api (#264) Update KubeDB api (#263) Update KubeDB api (#262) Update repository config (#261) Use conditions to handle initialization (#258) Update Kubernetes v1.18.9 dependencies (#260) Remove redundant volume mounts (#259) Update for release [email protected] (#257) Update Kubernetes v1.18.9 dependencies (#256) Remove bootstrap container (#248) Update Kubernetes v1.18.9 dependencies (#254) Update repository config (#253) Update repository config (#252) Update Kubernetes v1.18.9 dependencies (#251) Use common event recorder (#249) Update Kubernetes v1.18.3 dependencies (#250) Prepare for release v0.7.0-beta.3 (#247) Use new `spec.init` section (#246) Update Kubernetes v1.18.3 dependencies (#245) Add license verifier (#244) Update for release [email protected] (#243) Update Kubernetes v1.18.3 dependencies (#242) Update Constants (#241) Use common constant across MongoDB Community and Enterprise operator (#240) Run e2e tests from kubedb/tests repo (#238) Set Delete Propagation 
Policy to Background (#237) Update Kubernetes v1.18.3 dependencies (#236) Use AppsCode Community License (#235) Prepare for release v0.7.0-beta.2 (#234) Update release.yml Always use OnDelete UpdateStrategy (#233) Fix build (#232) Use updated certificate spec (#221) Remove `storage` Validation Check (#231) Update Kubernetes v1.18.3 dependencies (#225) Update Kubernetes v1.18.3 dependencies (#224) Update Kubernetes v1.18.3 dependencies (#223) Update Kubernetes v1.18.3 dependencies (#222) Add `inMemory` Storage Engine Support for Percona MongoDB Server (#205) Update Kubernetes v1.18.3 dependencies (#220) Update Kubernetes v1.18.3 dependencies (#219) Fix install target Remove dependency on enterprise operator (#218) Build images in e2e workflow (#217) Update to Kubernetes v1.18.3 (#216) Allow configuring k8s & db version in e2e tests (#215) Trigger e2e tests on /ok-to-test command (#214) Update to Kubernetes v1.18.3 (#213) Update to Kubernetes v1.18.3 (#212) Prepare for release v0.7.0-beta.1 (#211) Update for release [email protected] (#209) include Makefile.env Allow customizing chart registry (#208) Update for release [email protected] (#207) Update License (#206) Update to Kubernetes v1.18.3 (#204) Update ci.yml Load stash version from .env file for make (#203) Update update-release-tracker.sh Update update-release-tracker.sh Add script to update release tracker on pr merge (#202) Update .kodiak.toml Various fixes (#201) Update to Kubernetes v1.18.3 (#200) Update to Kubernetes v1.18.3 Create .kodiak.toml Use CRD v1 for Kubernetes >= 1.16 (#199) Update to Kubernetes v1.18.3 (#198) Fix e2e tests (#197) Update stash install commands Revendor kubedb.dev/apimachinery@master (#196) Pass annotations from CRD to AppBinding (#195) Update crazy-max/ghaction-docker-buildx flag Use updated operator labels in e2e tests (#193) Trigger the workflow on push or pull request Update CHANGELOG.md Use SHARD_INDEX constant from apimachinery Use stash.appscode.dev/[email protected] (#191) Manage SSL certificates using cert-manager (#190) Use Minio storage for testing (#188) Support affinity templating in mongodb-shard (#186) Use [email protected] release (#185) Fix `Pause` Logic (#184) Refactor CI pipeline to build once (#182) Add `Pause`" }, { "data": "(#181) Delete backupconfig before attempting restoresession. 
(#180) Wipeout if custom databaseSecret has been deleted (#179) Matrix test and Moved out mongo docker files (#178) Add add-license makefile target Update Makefile Add license header to files (#177) Fix E2E tests in github action (#176) Prepare for release v0.7.0-rc.1 (#287) Prepare for release v0.7.0-beta.6 (#286) Create SRV records for governing service (#285) Prepare for release v0.7.0-beta.5 (#284) Create separate governing service for each database (#283) Update KubeDB api (#282) Update readme Prepare for release v0.7.0-beta.4 (#281) Add conditions to MySQL status (#275) Update KubeDB api (#280) Update Kubernetes v1.18.9 dependencies (#279) Update KubeDB api (#278) Update for release [email protected] (#277) Update KubeDB api (#276) Update KubeDB api (#274) Update Kubernetes v1.18.9 dependencies (#273) Update KubeDB api (#272) Update KubeDB api (#271) Update KubeDB api (#270) Add Pod name to mysql replication-mode-detector container envs (#269) Update KubeDB api (#268) Update Kubernetes v1.18.9 dependencies (#267) Update KubeDB api (#266) Update KubeDB api (#265) Update KubeDB api (#263) Update repository config (#262) Update repository config (#261) Update repository config (#260) Initialize statefulset watcher from cmd/server/options.go (#259) Update KubeDB api (#258) Update Kubernetes v1.18.9 dependencies (#257) Publish docker images to ghcr.io (#256) Update KubeDB api (#255) Update KubeDB api (#254) Update KubeDB api (#253) Pass mysql name by flag for replication-mode-detector container (#247) Update KubeDB api (#252) Update repository config (#251) Cleanup monitoring spec api (#250) Use condition to handle database initialization (#243) Update Kubernetes v1.18.9 dependencies (#249) Use offshootSelectors to find statefulset (#248) Update for release [email protected] (#246) Update Kubernetes v1.18.9 dependencies (#245) Update Kubernetes v1.18.9 dependencies (#242) Add separate services for primary and secondary Replicas (#229) Update repository config (#241) Update repository config (#240) Update Kubernetes v1.18.9 dependencies (#239) Remove unused StashClient (#238) Update Kubernetes v1.18.3 dependencies (#237) Use common event recorder (#236) Prepare for release v0.7.0-beta.3 (#235) Update Kubernetes v1.18.3 dependencies (#234) Add license verifier (#233) Use new `spec.init` section (#230) Update for release [email protected] (#232) Update Kubernetes v1.18.3 dependencies (#231) Use background deletion policy Update Kubernetes v1.18.3 dependencies (#227) Use AppsCode Community License (#226) Update Kubernetes v1.18.3 dependencies (#225) Prepare for release v0.7.0-beta.2 (#224) Update release.yml Update dependencies (#223) Always use OnDelete update strategy Update Kubernetes v1.18.3 dependencies (#222) Added TLS/SSL Configuration in MySQL Server (#204) Use username/password constants from core/v1 Update MySQL vendor for changes of prometheus coreos operator (#216) Update Kubernetes v1.18.3 dependencies (#215) Update Kubernetes v1.18.3 dependencies (#214) Update Kubernetes v1.18.3 dependencies (#213) Update Kubernetes v1.18.3 dependencies (#212) Update Kubernetes v1.18.3 dependencies (#211) Update Kubernetes v1.18.3 dependencies (#210) Fix install target Remove dependency on enterprise operator (#209) Detect primary pod in MySQL group replication (#190) Support MySQL new version for group replication and standalone (#189) Build images in e2e workflow (#208) Allow configuring k8s & db version in e2e tests (#207) Update to Kubernetes v1.18.3 (#206) Trigger e2e tests on /ok-to-test 
command (#205) Update to Kubernetes v1.18.3 (#203) Update to Kubernetes v1.18.3 (#202) Prepare for release v0.7.0-beta.1 (#201) Update for release [email protected] (#199) Allow customizing chart registry (#198) Update for release [email protected] (#197) Update License (#196) Update to Kubernetes v1.18.3 (#195) Update ci.yml Load stash version from .env file for make (#194) Update update-release-tracker.sh Update update-release-tracker.sh Add script to update release tracker on pr merge (#193) Update .kodiak.toml Various fixes (#192) Update to Kubernetes v1.18.3 (#191) Update to Kubernetes v1.18.3 Create .kodiak.toml Use helm --wait in make install command Use CRD v1 for Kubernetes >= 1.16 (#188) Merge pull request #187 from kubedb/k-1.18.3 Pass context Update to Kubernetes v1.18.3 Fix e2e" }, { "data": "(#186) Update stash install commands Revendor kubedb.dev/apimachinery@master (#185) Update crazy-max/ghaction-docker-buildx flag Use updated operator labels in e2e tests (#183) Pass annotations from CRD to AppBinding (#184) Trigger the workflow on push or pull request Update CHANGELOG.md Use stash.appscode.dev/[email protected] (#181) Introduce spec.halted and removed dormant and snapshot crd (#178) Use [email protected] release (#179) Use apache thrift v0.13.0 Update github.com/apache/thrift v0.12.0 (#176) Add Pause Feature (#177) Mount mysql config dir and tmp dir as emptydir (#166) Enable subresource for MySQL crd. (#175) Update kubernetes client-go to 1.16.3 (#174) Matrix tests for github actions (#172) Fix default make command Use charts to install operator (#173) Add add-license make target Add license header to files (#171) Fix linter errors. (#169) Enable make ci (#168) Remove EnableStatusSubresource (#167) Run e2e tests using GitHub actions (#164) Validate DBVersionSpecs and fixed broken build (#165) Update go.yml Enable GitHub actions Update changelog Prepare for release v0.1.0-rc.1 (#75) Prepare for release v0.1.0-beta.6 (#74) Update KubeDB api (#73) Fix sql query to find primary host for different version of MySQL (#66) Prepare for release v0.1.0-beta.5 (#72) Update KubeDB api (#71) Prepare for release v0.1.0-beta.4 (#70) Update KubeDB api (#69) Update Kubernetes v1.18.9 dependencies (#68) Update Kubernetes v1.18.9 dependencies (#65) Update KubeDB api (#64) Update KubeDB api (#63) Update KubeDB api (#62) Update Kubernetes v1.18.9 dependencies (#61) Update KubeDB api (#60) Update KubeDB api (#59) Update KubeDB api (#58) Add tls config (#40) Update KubeDB api (#57) Update Kubernetes v1.18.9 dependencies (#56) Update KubeDB api (#55) Update KubeDB api (#54) Update KubeDB api (#53) Update KubeDB api (#52) Update Kubernetes v1.18.9 dependencies (#51) Publish docker images to ghcr.io (#50) Update KubeDB api (#49) Update KubeDB api (#48) Update KubeDB api (#47) Update KubeDB api (#46) Update KubeDB api (#45) Update KubeDB api (#44) Update KubeDB api (#43) Update KubeDB api (#42) Update KubeDB api (#41) Update KubeDB api (#38) Update KubeDB api (#37) Update KubeDB api (#36) Update Kubernetes v1.18.9 dependencies (#35) Update KubeDB api (#34) Update KubeDB api (#33) Update Kubernetes v1.18.9 dependencies (#32) Move constant to apimachinery repo (#24) Update repository config (#31) Update repository config (#30) Update Kubernetes v1.18.9 dependencies (#29) Update Kubernetes v1.18.3 dependencies (#28) Prepare for release v0.1.0-beta.3 (#27) Update Kubernetes v1.18.3 dependencies (#26) Update Kubernetes v1.18.3 dependencies (#25) Update Kubernetes v1.18.3 dependencies (#23) Use AppsCode 
Community License (#22) Prepare for release v0.1.0-beta.2 (#21) Update Kubernetes v1.18.3 dependencies (#19) Update Kubernetes v1.18.3 dependencies (#18) Update Kubernetes v1.18.3 dependencies (#17) Update Kubernetes v1.18.3 dependencies (#16) Update Kubernetes v1.18.3 dependencies (#15) Update Kubernetes v1.18.3 dependencies (#14) Don't push binary with release Remove port-forwarding and Refactor Code (#13) Update to Kubernetes v1.18.3 (#12) Update to Kubernetes v1.18.3 (#11) Update to Kubernetes v1.18.3 (#10) Prepare for release v0.1.0-beta.1 (#9) Update License (#7) Update to Kubernetes v1.18.3 (#6) Update update-release-tracker.sh Add script to update release tracker on pr merge (#5) Update .kodiak.toml Update to Kubernetes v1.18.3 (#4) Merge branch 'master' into gomod-refresher-1591418508 Create .kodiak.toml Update to Kubernetes v1.18.3 Update to Kubernetes v1.18.3 (#3) Update Makefile and CI configuration (#2) Add primary role labeler controller (#1) add readme.md Prepare for release v0.14.0-rc.1 (#335) Prepare for release v0.14.0-beta.6 (#334) Update KubeDB api (#333) Update Kubernetes v1.18.9 dependencies (#332) Use go.bytebuilders.dev/license-verifier v0.4.0 Prepare for release v0.14.0-beta.5 (#331) Enable PgBoucner & ProxySQL for enterprise license (#330) Update readme.md Update KubeDB api (#329) Update readme Format readme Update readme (#328) Update repository config (#327) Prepare for release v0.14.0-beta.4 (#326) Add --readiness-probe-interval flag (#325) Update KubeDB api (#324) Update Kubernetes v1.18.9 dependencies (#323) Update Kubernetes v1.18.9 dependencies (#321) Update KubeDB api (#320) Update KubeDB api (#319) Update KubeDB" }, { "data": "(#318) Update Kubernetes v1.18.9 dependencies (#317) Update KubeDB api (#316) Update KubeDB api (#315) Update KubeDB api (#312) Update repository config (#311) Update repository config (#310) Update repository config (#309) Update KubeDB api (#308) Update Kubernetes v1.18.9 dependencies (#307) Publish docker images to ghcr.io (#306) Update KubeDB api (#305) Update KubeDB api (#304) Refactor initializer code + Use common event recorder (#292) Update repository config (#301) Update Kubernetes v1.18.9 dependencies (#300) Update for release [email protected] (#299) Update Kubernetes v1.18.9 dependencies (#298) Update Kubernetes v1.18.9 dependencies (#296) Update repository config (#295) Update repository config (#294) Update Kubernetes v1.18.9 dependencies (#293) Update Kubernetes v1.18.3 dependencies (#291) Update Kubernetes v1.18.3 dependencies (#290) Use AppsCode Community license (#289) Add license verifier (#288) Update for release [email protected] (#287) Update Kubernetes v1.18.3 dependencies (#286) Update Kubernetes v1.18.3 dependencies (#284) Update Kubernetes v1.18.3 dependencies (#282) Prepare for release v0.14.0-beta.2 (#281) Update Kubernetes v1.18.3 dependencies (#280) Update Kubernetes v1.18.3 dependencies (#275) Update Kubernetes v1.18.3 dependencies (#274) Update Kubernetes v1.18.3 dependencies (#273) Update Kubernetes v1.18.3 dependencies (#272) Update Kubernetes v1.18.3 dependencies (#271) Update Kubernetes v1.18.3 dependencies (#270) Remove dependency on enterprise operator (#269) Build images in e2e workflow (#268) Update to Kubernetes v1.18.3 (#266) Allow configuring k8s in e2e tests (#267) Trigger e2e tests on /ok-to-test command (#265) Update to Kubernetes v1.18.3 (#264) Update to Kubernetes v1.18.3 (#263) Prepare for release v0.14.0-beta.1 (#262) Allow customizing chart registry (#261) Update for release [email 
protected] (#260) Update for release [email protected] (#259) Update License (#258) Update to Kubernetes v1.18.3 (#256) Update ci.yml Update ci.yml Add workflow to update docs (#255) Update update-release-tracker.sh Update update-release-tracker.sh Add script to update release tracker on pr merge (#254) Update .kodiak.toml Register validator & mutators for all supported dbs (#253) Various fixes (#252) Update to Kubernetes v1.18.3 (#251) Create .kodiak.toml Update to Kubernetes v1.18.3 (#247) Update enterprise operator tag (#246) Revendor kubedb.dev/apimachinery@master (#245) Use recommended kubernetes app labels Update crazy-max/ghaction-docker-buildx flag Trigger the workflow on push or pull request Update readme (#244) Update CHANGELOG.md Add license scan report and status (#241) Pass the topology object to common controller Initialize topology for MonogDB webhooks (#243) Fix nil pointer exception (#242) Update operator dependencies (#237) Always create RBAC resources (#238) Use Go 1.13 in CI Update client-go to kubernetes-1.16.3 (#239) Update CI badge Bundle PgBouncer operator (#236) Fix linter errors (#235) Update go.yml Enable GitHub actions Update changelog Prepare for release v0.1.0-rc.1 (#119) Prepare for release v0.1.0-beta.6 (#118) Create SRV records for governing service (#117) Prepare for release v0.1.0-beta.5 (#116) Create separate governing service for each database (#115) Update KubeDB api (#114) Update readme Prepare for release v0.1.0-beta.4 (#113) Update KubeDB api (#112) Update Kubernetes v1.18.9 dependencies (#111) Update KubeDB api (#110) Update for release [email protected] (#109) Fix init validator (#107) Update KubeDB api (#108) Update KubeDB api (#106) Update Kubernetes v1.18.9 dependencies (#105) Update KubeDB api (#104) Update KubeDB api (#103) Update KubeDB api (#102) Update KubeDB api (#101) Update Kubernetes v1.18.9 dependencies (#100) Update KubeDB api (#99) Update KubeDB api (#98) Update KubeDB api (#96) Update repository config (#95) Update repository config (#94) Update repository config (#93) Initialize statefulset watcher from cmd/server/options.go (#92) Update KubeDB api (#91) Update Kubernetes v1.18.9 dependencies (#90) Publish docker images to ghcr.io (#89) Update KubeDB api (#88) Update KubeDB api (#87) Update KubeDB api (#86) Update KubeDB api (#85) Update repository config (#84) Cleanup monitoring spec api (#83) Use conditions to handle database initialization (#80) Update Kubernetes v1.18.9 dependencies (#82) Updated the exporter port and service (#81) Update for release [email protected] (#79) Update Kubernetes v1.18.9 dependencies (#78) Update Kubernetes" }, { "data": "dependencies (#76) Update repository config (#75) Update repository config (#74) Update Kubernetes v1.18.9 dependencies (#73) Update Kubernetes v1.18.3 dependencies (#72) Use common event recorder (#71) Prepare for release v0.1.0-beta.3 (#70) Use new `spec.init` section (#69) Update Kubernetes v1.18.3 dependencies (#68) Add license verifier (#67) Update for release [email protected] (#66) Update Kubernetes v1.18.3 dependencies (#65) Use background deletion policy Update Kubernetes v1.18.3 dependencies (#63) Use AppsCode Community License (#62) Update Kubernetes v1.18.3 dependencies (#61) Prepare for release v0.1.0-beta.2 (#60) Update release.yml Use updated apis (#59) Update Kubernetes v1.18.3 dependencies (#53) Update Kubernetes v1.18.3 dependencies (#52) Update Kubernetes v1.18.3 dependencies (#51) Update Kubernetes v1.18.3 dependencies (#50) Update Kubernetes v1.18.3 
dependencies (#49) Update Kubernetes v1.18.3 dependencies (#48) Remove dependency on enterprise operator (#47) Allow configuring k8s & db version in e2e tests (#46) Update to Kubernetes v1.18.3 (#45) Trigger e2e tests on /ok-to-test command (#44) Update to Kubernetes v1.18.3 (#43) Update to Kubernetes v1.18.3 (#42) Prepare for release v0.1.0-beta.1 (#41) Update for release [email protected] (#39) include Makefile.env Allow customizing chart registry (#38) Update License (#37) Update for release [email protected] (#36) Update to Kubernetes v1.18.3 (#35) Update ci.yml Load stash version from .env file for make (#34) Update update-release-tracker.sh Update update-release-tracker.sh Add script to update release tracker on pr merge (#33) Update .kodiak.toml Various fixes (#32) Update to Kubernetes v1.18.3 (#31) Update to Kubernetes v1.18.3 Create .kodiak.toml Use CRD v1 for Kubernetes >= 1.16 (#30) Update to Kubernetes v1.18.3 (#29) Fix e2e tests (#28) Update stash install commands Use recommended kubernetes app labels (#27) Update crazy-max/ghaction-docker-buildx flag Revendor kubedb.dev/apimachinery@master (#26) Pass annotations from CRD to AppBinding (#25) Trigger the workflow on push or pull request Update CHANGELOG.md Use stash.appscode.dev/[email protected] (#24) Update for percona-xtradb standalone restoresession (#23) Various fixes (#21) Update kubernetes client-go to 1.16.3 (#20) Fix default make command Use charts to install operator (#19) Several fixes and update tests (#18) Various Makefile improvements (#16) Remove EnableStatusSubresource (#17) Run e2e tests using GitHub actions (#12) Validate DBVersionSpecs and fixed broken build (#15) Update go.yml Various changes for Percona XtraDB (#13) Enable GitHub actions Refactor for ProxySQL Integration (#11) Revendor Rename from perconaxtradb to percona-xtradb (#10) Set database version in AppBinding (#7) Percona XtraDB Cluster support (#9) Don't set annotation to AppBinding (#8) Fix UpsertDatabaseAnnotation() function (#4) Add license header to Makefiles (#6) Add install, uninstall and purge command in Makefile (#3) Update .gitignore Add Makefile (#2) Rename package path (#1) Use explicit IP whitelist instead of automatic IP whitelist (#151) Update to k8s 1.14.0 client libraries using go.mod (#147) Update changelog Update README.md Start next dev cycle Prepare release 0.5.0 Mysql Group Replication tests (#146) Mysql Group Replication (#144) Revendor dependencies Changed Role to exclude psp without name (#143) Modify mutator validator names (#142) Update changelog Start next dev cycle Prepare release 0.4.0 Added PSP names and init container image in testing framework (#141) Added PSP support for mySQL (#137) Don't inherit app.kubernetes.io labels from CRD into offshoots (#140) Support for init container (#139) Add role label to stats service (#138) Update changelog Update Kubernetes client libraries to 1.13.0 release (#136) Start next dev cycle Prepare release 0.3.0 Initial RBAC support: create and use K8s service account for MySQL (#134) Revendor dependencies (#135) Revendor dependencies : Retry Failed Scheduler Snapshot (#133) Added ephemeral StorageType support (#132) Added support of MySQL 8.0.14 (#131) Use PVC spec from snapshot if provided (#130) Revendored and updated tests for 'Prevent prefix matching of multiple snapshots' (#129) Add certificate health checker (#128) Update E2E test: Env update is not restricted anymore (#127) Fix" }, { "data": "(#126) Update changelog Prepare release 0.2.0 Reuse event recorder (#125) OSM binary 
upgraded in mysql-tools (#123) Revendor dependencies (#124) Test for faulty snapshot (#122) Start next dev cycle Prepare release 0.2.0-rc.2 Upgrade database secret keys (#121) Ignore mutation of fields to default values during update (#120) Support configuration options for exporter sidecar (#119) Use flags.DumpAll (#118) Start next dev cycle Prepare release 0.2.0-rc.1 Apply cleanup (#117) Set periodic analytics (#116) Introduce AppBinding support (#115) Fix Analytics (#114) Error out from cron job for deprecated dbversion (#113) Add CRDs without observation when operator starts (#112) Update changelog Start next dev cycle Prepare release 0.2.0-rc.0 Merge commit 'cc6607a3589a79a5e61bb198d370ea0ae30b9d09' Support custom user passowrd for backup (#111) Support providing resources for monitoring container (#110) Update kubernetes client libraries to 1.12.0 (#109) Add validation webhook xray (#108) Various Fixes (#107) Merge ports from service template (#105) Replace doNotPause with TerminationPolicy = DoNotTerminate (#104) Pass resources to NamespaceValidator (#103) Various fixes (#102) Support Livecycle hook and container probes (#101) Check if Kubernetes version is supported before running operator (#100) Update package alias (#99) Start next dev cycle Prepare release 0.2.0-beta.1 Revendor api (#98) Fix tests (#97) Revendor api for catalog apigroup (#96) Update chanelog Use --pull flag with docker build (#20) (#95) Merge commit '16c769ee4686576f172a6b79a10d25bfd79ca4a4' Start next dev cycle Prepare release 0.2.0-beta.0 Pass extra args to tools.sh (#93) Don't try to wipe out Snapshot data for Local backend (#92) Add missing alt-tag docker folder mysql-tools images (#91) Use suffix for updated DBImage & Stop working for deprecated *Versions (#90) Search used secrets within same namespace of DB object (#89) Support Termination Policy (#88) Update builddeps.sh Revendor k8s.io/apiserver (#87) Revendor kubernetes-1.11.3 (#86) Support UpdateStrategy (#84) Add TerminationPolicy for databases (#83) Revendor api (#82) Use IntHash as status.observedGeneration (#81) fix github status (#80) Update pipeline (#79) Fix E2E test for minikube (#78) Update pipeline (#77) Migrate MySQL (#75) Use official exporter image (#74) Fix uninstall for concourse (#70) Update status.ObservedGeneration for failure phase (#73) Keep track of ObservedGenerationHash (#72) Use NewObservableHandler (#71) Merge commit '887037c7e36289e3135dda99346fccc7e2ce303b' Fix uninstall for concourse (#69) Update README.md Revise immutable spec fields (#68) Merge commit '5f83049fc01dc1d0709ac0014d6f3a0f74a39417' Support passing args via PodTemplate (#67) Introduce storageType : ephemeral (#66) Add support for running tests on cncf cluster (#63) Merge commit 'e010cbb302c8d59d4cf69dd77085b046ff423b78' Revendor api (#65) Keep track of observedGeneration in status (#64) Separate StatsService for monitoring (#62) Use MySQLVersion for MySQL images (#61) Use updated crd spec (#60) Rename OffshootLabels to OffshootSelectors (#59) Revendor api (#58) Use kmodules monitoring and objectstore api (#57) Support custom configuration (#52) Merge commit '44e6d4985d93556e39ddcc4677ada5437fc5be64' Refactor concourse scripts (#56) Fix command `./hack/make.py test e2e` (#55) Set generated binary name to my-operator (#54) Don't add admission/v1beta1 group as a prioritized version (#53) Fix travis build (#48) Format shell script (#51) Enable status subresource for crds (#50) Update client-go to v8.0.0 (#49) Merge commit '71850e2c90cda8fc588b7dedb340edf3d316baea' 
Support ENV variables in CRDs (#46) Updated osm version to 0.7.1 (#47) Prepare release 0.1.0 Fixed missing error return (#45) Revendor dependencies (#44) Fix release script (#43) Add changelog (#42) Concourse (#41) Fixed kubeconfig plugin for Cloud Providers && Storage is required for MySQL (#40) Refactored E2E testing to support E2E testing with admission webhook in cloud (#38) Remove lost+found directory before initializing mysql (#39) Skip delete requests for empty resources (#37) Don't panic if admission options is nil (#36) Disable admission controllers for webhook server (#35) Separate ApiGroup for Mutating and Validating webhook && upgraded osm to 0.7.0 (#34) Update client-go to 7.0.0 (#33) Added update script for mysql-tools:8 (#32) Added support of" }, { "data": "(#31) Add support for one informer and N-eventHandler for snapshot, dromantDB and Job (#30) Use metrics from kube apiserver (#29) Bundle webhook server and Use SharedInformerFactory (#28) Move MySQL AdmissionWebhook packages into MySQL repository (#27) Use mysql:8.0.3 image as mysql:8.0 (#26) Update README.md Update README.md Remove Docker pull count Add travis yaml (#25) Start next dev cycle Prepare release 0.1.0-beta.2 Migrating to apps/v1 (#23) Update validation (#22) Fix dormantDB matching: pass same type to Equal method (#21) Use official code generator scripts (#20) Fixed dormantdb matching & Raised throttling time & Fixed MySQL version Checking (#19) Prepare release 0.1.0-beta.1 converted to k8s 1.9 & Improved InitSpec in DormantDB & Added support for Job watcher & Improved Tests (#17) Fixed logger, analytics and removed rbac stuff (#16) Add rbac stuffs for mysql-exporter (#15) Review Mysql docker images and Fixed monitring (#14) Update README.md Start next dev cycle Prepare release 0.1.0-beta.0 Add release script Rename ms-operator to my-operator (#13) Fix Analytics and pass client-id as ENV to Snapshot Job (#12) update docker image validation (#11) Add docker-registry and WorkQueue (#10) Set client id for analytics (#9) Fix CRD Registration (#8) Update issue repo link Update pkg paths to kubedb org (#7) Assign default Prometheus Monitoring Port (#6) Add Snapshot Backup, Restore and Backup-Scheduler (#4) Update Dockerfile Add mysql-util docker image (#5) Mysql db - Inititalizing (#2) Update README.md Update README.md Use client-go 5.x Update ./hack folder (#3) Add skeleton for mysql (#1) Merge commit 'be70502b4993171bbad79d2ff89a9844f1c24caa' as 'hack/libbuild' Prepare for release v0.1.0-rc.1 (#92) Prepare for release v0.1.0-beta.6 (#91) Create SRV records for governing service (#90) Prepare for release v0.1.0-beta.5 (#89) Create separate governing service for each database (#88) Update KubeDB api (#87) Update readme Update repository config (#86) Prepare for release v0.1.0-beta.4 (#85) Update KubeDB api (#84) Update Kubernetes v1.18.9 dependencies (#83) Update KubeDB api (#82) Update KubeDB api (#81) Update KubeDB api (#80) Update Kubernetes v1.18.9 dependencies (#79) Update KubeDB api (#78) Update KubeDB api (#77) Update KubeDB api (#76) Update KubeDB api (#75) Update Kubernetes v1.18.9 dependencies (#74) Update KubeDB api (#73) Update KubeDB api (#72) Update KubeDB api (#71) Update repository config (#70) Update repository config (#69) Update repository config (#68) Update KubeDB api (#67) Update Kubernetes v1.18.9 dependencies (#66) Publish docker images to ghcr.io (#65) Update KubeDB api (#64) Update KubeDB api (#63) Update KubeDB api (#62) Update KubeDB api (#61) Update repository config (#60) Update 
Kubernetes v1.18.9 dependencies (#59) Update KubeDB api (#56) Update Kubernetes v1.18.9 dependencies (#57) Update Kubernetes v1.18.9 dependencies (#55) Update repository config (#54) Update repository config (#53) Update Kubernetes v1.18.9 dependencies (#52) Update Kubernetes v1.18.3 dependencies (#51) Prepare for release v0.1.0-beta.3 (#50) Add license verifier (#49) Use AppsCode Trial license (#48) Update Kubernetes v1.18.3 dependencies (#47) Update Kubernetes v1.18.3 dependencies (#46) Update Kubernetes v1.18.3 dependencies (#44) Use AppsCode Community License (#43) Update Kubernetes v1.18.3 dependencies (#42) Prepare for release v0.1.0-beta.2 (#41) Update release.yml Use updated certificate spec (#35) Update Kubernetes v1.18.3 dependencies (#39) Update Kubernetes v1.18.3 dependencies (#38) Update Kubernetes v1.18.3 dependencies (#37) Update Kubernetes v1.18.3 dependencies (#36) Update Kubernetes v1.18.3 dependencies (#34) Update Kubernetes v1.18.3 dependencies (#33) Remove dependency on enterprise operator (#32) Update to cert-manager v0.16.0 (#30) Build images in e2e workflow (#29) Allow configuring k8s & db version in e2e tests (#28) Update to Kubernetes v1.18.3 (#27) Fix formatting Trigger e2e tests on /ok-to-test command (#26) Fix cert-manager integration for PgBouncer (#25) Update to Kubernetes v1.18.3 (#24) Update Makefile.env Prepare for release v0.1.0-beta.1 (#23) include Makefile.env (#22) Update License (#21) Update to Kubernetes v1.18.3 (#20) Update ci.yml Update update-release-tracker.sh Update" }, { "data": "Add script to update release tracker on pr merge (#19) Update .kodiak.toml Use POSTGRES_TAG v0.14.0-alpha.0 Various fixes (#18) Update to Kubernetes v1.18.3 (#17) Update to Kubernetes v1.18.3 Create .kodiak.toml Use CRD v1 for Kubernetes >= 1.16 (#16) Update to Kubernetes v1.18.3 (#15) Fix e2e tests (#14) Update crazy-max/ghaction-docker-buildx flag Revendor kubedb.dev/apimachinery@master (#13) Use updated operator labels in e2e tests (#12) Trigger the workflow on push or pull request Update CHANGELOG.md Use stash.appscode.dev/[email protected] (#11) Fix build Revendor and update enterprise sidecar image (#10) Update enterprise operator tag (#9) Use kubedb/installer master branch in CI Update pgbouncer controller (#8) Update variable names Fix plain text secret in exporter container of StatefulSet (#5) Update client-go to kubernetes-1.16.3 (#7) Use charts to install operator (#6) Add add-license make target Enable e2e tests in GitHub actions (#4) Initial implementation (#2) Update go.yml Enable GitHub actions Clone kubedb/postgres repo (#1) Merge commit 'f78de886ed657650438f99574c3b002dd3607497' as 'hack/libbuild' Prepare for release v0.14.0-rc.1 (#405) Prepare for release v0.14.0-beta.6 (#404) Create SRV records for governing service (#402) Prepare for release v0.14.0-beta.5 (#401) Simplify port assignment (#400) Create separate governing service for each database (#399) Update KubeDB api (#398) Update readme Update Kubernetes v1.18.9 dependencies (#397) Prepare for release v0.14.0-beta.4 (#396) Update KubeDB api (#395) Update Kubernetes v1.18.9 dependencies (#394) Update KubeDB api (#393) Update for release [email protected] (#392) Fix init validator (#390) Update KubeDB api (#391) Update KubeDB api (#389) Update Kubernetes v1.18.9 dependencies (#388) Update KubeDB api (#387) Update KubeDB api (#386) Update KubeDB api (#385) Update KubeDB api (#384) Update Kubernetes v1.18.9 dependencies (#383) Update KubeDB api (#382) Update KubeDB api (#381) Update KubeDB api 
(#379) Update repository config (#378) Update repository config (#377) Update repository config (#376) Initialize statefulset watcher from cmd/server/options.go (#375) Update KubeDB api (#374) Update Kubernetes v1.18.9 dependencies (#373) Publish docker images to ghcr.io (#372) Only keep username/password keys in Postgres secret Update KubeDB api (#371) Update KubeDB api (#370) Update KubeDB api (#369) Don't add secretTransformation in AppBinding section by default (#316) Update KubeDB api (#368) Update repository config (#367) Use conditions to handle initialization (#365) Update Kubernetes v1.18.9 dependencies (#366) Update for release [email protected] (#364) Update Kubernetes v1.18.9 dependencies (#363) Update Kubernetes v1.18.9 dependencies (#361) Update repository config (#360) Update repository config (#359) Update Kubernetes v1.18.9 dependencies (#358) Use common event recorder (#357) Update Kubernetes v1.18.3 dependencies (#356) Prepare for release v0.14.0-beta.3 (#355) Use new `sepc.init` section (#354) Update Kubernetes v1.18.3 dependencies (#353) Add license verifier (#352) Update for release [email protected] (#351) Update Kubernetes v1.18.3 dependencies (#350) Use background deletion policy Update Kubernetes v1.18.3 dependencies (#348) Use AppsCode Community License (#347) Update Kubernetes v1.18.3 dependencies (#346) Prepare for release v0.14.0-beta.2 (#345) Update release.yml Always use OnDelete update strategy Update Kubernetes v1.18.3 dependencies (#344) Update Kubernetes v1.18.3 dependencies (#343) Update Kubernetes v1.18.3 dependencies (#338) Update Kubernetes v1.18.3 dependencies (#337) Update Kubernetes v1.18.3 dependencies (#336) Update Kubernetes v1.18.3 dependencies (#335) Update Kubernetes v1.18.3 dependencies (#334) Update Kubernetes v1.18.3 dependencies (#333) Remove dependency on enterprise operator (#332) Build images in e2e workflow (#331) Update to Kubernetes v1.18.3 (#329) Allow configuring k8s & db version in e2e tests (#330) Trigger e2e tests on /ok-to-test command (#328) Update to Kubernetes v1.18.3 (#327) Update to Kubernetes v1.18.3 (#326) Prepare for release v0.14.0-beta.1 (#325) Update for release [email protected] (#323) Allow customizing kube namespace for Stash Allow customizing chart registry (#322) Update for release [email protected] (#321) Update License Update to Kubernetes v1.18.3 (#320) Update ci.yml Load stash version from .env file for make (#319) Update update-release-tracker.sh Update update-release-tracker.sh Add script to update release tracker on pr" }, { "data": "(#318) Update .kodiak.toml Various fixes (#317) Update to Kubernetes v1.18.3 (#315) Update to Kubernetes v1.18.3 Create .kodiak.toml Use CRD v1 for Kubernetes >= 1.16 (#314) Update to Kubernetes v1.18.3 (#313) Fix e2e tests (#312) Update stash install commands Revendor kubedb.dev/apimachinery@master (#311) Update crazy-max/ghaction-docker-buildx flag Use updated operator labels in e2e tests (#309) Pass annotations from CRD to AppBinding (#310) Trigger the workflow on push or pull request Update CHANGELOG.md Use stash.appscode.dev/[email protected] (#308) Fix error msg to reject halt when termination policy is 'DoNotTerminate' Change Pause to Halt (#307) feat: allow changes to nodeSelector (#298) Introduce spec.halted and removed dormant and snapshot crd (#305) Moved leader election to kubedb/pg-leader-election (#304) Use [email protected] release (#306) Make e2e tests stable in github actions (#303) Update client-go to kubernetes-1.16.3 (#301) Take out postgres docker 
images and Matrix test (#297) Fix default make command Update catalog values for make install command Use charts to install operator (#302) Add add-license make target Add license header to files (#296) Fix E2E testing for github actions (#295) Minio and S3 compatible storage fixes (#292) Run e2e tests using GitHub actions (#293) Validate DBVersionSpecs and fixed broken build (#294) Update go.yml Enable GitHub actions Update changelog Prepare for release v0.1.0-rc.1 (#101) Prepare for release v0.1.0-beta.6 (#100) Create SRV records for governing service (#99) Prepare for release v0.1.0-beta.5 (#98) Create separate governing service for each database (#97) Update KubeDB api (#96) Update readme Update repository config (#95) Prepare for release v0.1.0-beta.4 (#94) Update KubeDB api (#93) Update Kubernetes v1.18.9 dependencies (#92) Update KubeDB api (#91) Update for release [email protected] (#90) Update KubeDB api (#89) Update KubeDB api (#88) Update Kubernetes v1.18.9 dependencies (#87) Update KubeDB api (#86) Update KubeDB api (#85) Update KubeDB api (#84) Update KubeDB api (#83) Update Kubernetes v1.18.9 dependencies (#82) Update KubeDB api (#81) Update KubeDB api (#80) Update KubeDB api (#79) Update repository config (#78) Update repository config (#77) Update repository config (#76) Update KubeDB api (#75) Update Kubernetes v1.18.9 dependencies (#74) Publish docker images to ghcr.io (#73) Update KubeDB api (#72) Update KubeDB api (#71) Update KubeDB api (#70) Update KubeDB api (#69) Update repository config (#68) Update Kubernetes v1.18.9 dependencies (#67) Update KubeDB api (#65) Update KubeDB api (#62) Update for release [email protected] (#64) Update Kubernetes v1.18.9 dependencies (#63) Update Kubernetes v1.18.9 dependencies (#61) Update repository config (#60) Update repository config (#59) Update Kubernetes v1.18.9 dependencies (#58) Update Kubernetes v1.18.3 dependencies (#57) Prepare for release v0.1.0-beta.3 (#56) Update Makefile Use AppsCode Trial license (#55) Update Kubernetes v1.18.3 dependencies (#54) Add license verifier (#53) Update for release [email protected] (#52) Update Kubernetes v1.18.3 dependencies (#51) Update Kubernetes v1.18.3 dependencies (#49) Use AppsCode Community License (#48) Update Kubernetes v1.18.3 dependencies (#47) Prepare for release v0.1.0-beta.2 (#46) Update release.yml Use updated apis (#45) Update for release [email protected] (#43) Update for release [email protected] (#42) Update for release [email protected] (#41) Update for release [email protected] (#40) Update Kubernetes v1.18.3 dependencies (#39) Update Kubernetes v1.18.3 dependencies (#38) Update Kubernetes v1.18.3 dependencies (#37) Update Kubernetes v1.18.3 dependencies (#36) Update Kubernetes v1.18.3 dependencies (#35) Update Kubernetes v1.18.3 dependencies (#34) Remove dependency on enterprise operator (#33) Build images in e2e workflow (#32) Update to Kubernetes v1.18.3 (#30) Allow configuring k8s & db version in e2e tests (#31) Trigger e2e tests on /ok-to-test command (#29) Update to Kubernetes v1.18.3 (#28) Update to Kubernetes v1.18.3 (#27) Prepare for release v0.1.0-beta.1 (#26) Update for release [email protected] (#25) include Makefile.env (#24) Update for release [email protected] (#23) Update License (#22) Update to Kubernetes v1.18.3 (#21) Update ci.yml Update update-release-tracker.sh Update" }, { "data": "Add script to update release tracker on pr merge (#20) Update .kodiak.toml Update operator tags Various fixes (#19) Update to Kubernetes v1.18.3 (#18) Update to 
Kubernetes v1.18.3 Create .kodiak.toml Use CRD v1 for Kubernetes >= 1.16 (#17) Update to Kubernetes v1.18.3 (#16) Fix e2e tests (#15) Update crazy-max/ghaction-docker-buildx flag Use updated operator labels in e2e tests (#14) Revendor kubedb.dev/apimachinery@master (#13) Trigger the workflow on push or pull request Update CHANGELOG.md Use stash.appscode.dev/[email protected] (#12) Matrix Tests on Github Actions (#11) Update mount path for custom config (#8) Enable ProxySQL monitoring (#6) ProxySQL test for MySQL (#4) Use charts to install operator (#7) ProxySQL operator for MySQL databases (#2) Update go.yml Enable GitHub actions percona-xtradb -> proxysql (#1) Revendor Rename from perconaxtradb to percona-xtradb (#10) Set database version in AppBinding (#7) Percona XtraDB Cluster support (#9) Don't set annotation to AppBinding (#8) Fix UpsertDatabaseAnnotation() function (#4) Add license header to Makefiles (#6) Add install, uninstall and purge command in Makefile (#3) Update .gitignore Add Makefile (#2) Rename package path (#1) Use explicit IP whitelist instead of automatic IP whitelist (#151) Update to k8s 1.14.0 client libraries using go.mod (#147) Update changelog Update README.md Start next dev cycle Prepare release 0.5.0 Mysql Group Replication tests (#146) Mysql Group Replication (#144) Revendor dependencies Changed Role to exclude psp without name (#143) Modify mutator validator names (#142) Update changelog Start next dev cycle Prepare release 0.4.0 Added PSP names and init container image in testing framework (#141) Added PSP support for mySQL (#137) Don't inherit app.kubernetes.io labels from CRD into offshoots (#140) Support for init container (#139) Add role label to stats service (#138) Update changelog Update Kubernetes client libraries to 1.13.0 release (#136) Start next dev cycle Prepare release 0.3.0 Initial RBAC support: create and use K8s service account for MySQL (#134) Revendor dependencies (#135) Revendor dependencies : Retry Failed Scheduler Snapshot (#133) Added ephemeral StorageType support (#132) Added support of MySQL 8.0.14 (#131) Use PVC spec from snapshot if provided (#130) Revendored and updated tests for 'Prevent prefix matching of multiple snapshots' (#129) Add certificate health checker (#128) Update E2E test: Env update is not restricted anymore (#127) Fix AppBinding (#126) Update changelog Prepare release 0.2.0 Reuse event recorder (#125) OSM binary upgraded in mysql-tools (#123) Revendor dependencies (#124) Test for faulty snapshot (#122) Start next dev cycle Prepare release 0.2.0-rc.2 Upgrade database secret keys (#121) Ignore mutation of fields to default values during update (#120) Support configuration options for exporter sidecar (#119) Use flags.DumpAll (#118) Start next dev cycle Prepare release 0.2.0-rc.1 Apply cleanup (#117) Set periodic analytics (#116) Introduce AppBinding support (#115) Fix Analytics (#114) Error out from cron job for deprecated dbversion (#113) Add CRDs without observation when operator starts (#112) Update changelog Start next dev cycle Prepare release 0.2.0-rc.0 Merge commit 'cc6607a3589a79a5e61bb198d370ea0ae30b9d09' Support custom user passowrd for backup (#111) Support providing resources for monitoring container (#110) Update kubernetes client libraries to 1.12.0 (#109) Add validation webhook xray (#108) Various Fixes (#107) Merge ports from service template (#105) Replace doNotPause with TerminationPolicy = DoNotTerminate (#104) Pass resources to NamespaceValidator (#103) Various fixes (#102) Support Livecycle hook 
and container probes (#101) Check if Kubernetes version is supported before running operator (#100) Update package alias (#99) Start next dev cycle Prepare release 0.2.0-beta.1 Revendor api (#98) Fix tests (#97) Revendor api for catalog apigroup (#96) Update chanelog Use --pull flag with docker build (#20) (#95) Merge commit '16c769ee4686576f172a6b79a10d25bfd79ca4a4' Start next dev cycle Prepare release 0.2.0-beta.0 Pass extra args to tools.sh (#93) Don't try to wipe out Snapshot data for Local" }, { "data": "(#92) Add missing alt-tag docker folder mysql-tools images (#91) Use suffix for updated DBImage & Stop working for deprecated *Versions (#90) Search used secrets within same namespace of DB object (#89) Support Termination Policy (#88) Update builddeps.sh Revendor k8s.io/apiserver (#87) Revendor kubernetes-1.11.3 (#86) Support UpdateStrategy (#84) Add TerminationPolicy for databases (#83) Revendor api (#82) Use IntHash as status.observedGeneration (#81) fix github status (#80) Update pipeline (#79) Fix E2E test for minikube (#78) Update pipeline (#77) Migrate MySQL (#75) Use official exporter image (#74) Fix uninstall for concourse (#70) Update status.ObservedGeneration for failure phase (#73) Keep track of ObservedGenerationHash (#72) Use NewObservableHandler (#71) Merge commit '887037c7e36289e3135dda99346fccc7e2ce303b' Fix uninstall for concourse (#69) Update README.md Revise immutable spec fields (#68) Merge commit '5f83049fc01dc1d0709ac0014d6f3a0f74a39417' Support passing args via PodTemplate (#67) Introduce storageType : ephemeral (#66) Add support for running tests on cncf cluster (#63) Merge commit 'e010cbb302c8d59d4cf69dd77085b046ff423b78' Revendor api (#65) Keep track of observedGeneration in status (#64) Separate StatsService for monitoring (#62) Use MySQLVersion for MySQL images (#61) Use updated crd spec (#60) Rename OffshootLabels to OffshootSelectors (#59) Revendor api (#58) Use kmodules monitoring and objectstore api (#57) Support custom configuration (#52) Merge commit '44e6d4985d93556e39ddcc4677ada5437fc5be64' Refactor concourse scripts (#56) Fix command `./hack/make.py test e2e` (#55) Set generated binary name to my-operator (#54) Don't add admission/v1beta1 group as a prioritized version (#53) Fix travis build (#48) Format shell script (#51) Enable status subresource for crds (#50) Update client-go to v8.0.0 (#49) Merge commit '71850e2c90cda8fc588b7dedb340edf3d316baea' Support ENV variables in CRDs (#46) Updated osm version to 0.7.1 (#47) Prepare release 0.1.0 Fixed missing error return (#45) Revendor dependencies (#44) Fix release script (#43) Add changelog (#42) Concourse (#41) Fixed kubeconfig plugin for Cloud Providers && Storage is required for MySQL (#40) Refactored E2E testing to support E2E testing with admission webhook in cloud (#38) Remove lost+found directory before initializing mysql (#39) Skip delete requests for empty resources (#37) Don't panic if admission options is nil (#36) Disable admission controllers for webhook server (#35) Separate ApiGroup for Mutating and Validating webhook && upgraded osm to 0.7.0 (#34) Update client-go to 7.0.0 (#33) Added update script for mysql-tools:8 (#32) Added support of mysql:5.7 (#31) Add support for one informer and N-eventHandler for snapshot, dromantDB and Job (#30) Use metrics from kube apiserver (#29) Bundle webhook server and Use SharedInformerFactory (#28) Move MySQL AdmissionWebhook packages into MySQL repository (#27) Use mysql:8.0.3 image as mysql:8.0 (#26) Update README.md Update README.md Remove 
Docker pull count Add travis yaml (#25) Start next dev cycle Prepare release 0.1.0-beta.2 Migrating to apps/v1 (#23) Update validation (#22) Fix dormantDB matching: pass same type to Equal method (#21) Use official code generator scripts (#20) Fixed dormantdb matching & Raised throttling time & Fixed MySQL version Checking (#19) Prepare release 0.1.0-beta.1 converted to k8s 1.9 & Improved InitSpec in DormantDB & Added support for Job watcher & Improved Tests (#17) Fixed logger, analytics and removed rbac stuff (#16) Add rbac stuffs for mysql-exporter (#15) Review Mysql docker images and Fixed monitring (#14) Update README.md Start next dev cycle Prepare release 0.1.0-beta.0 Add release script Rename ms-operator to my-operator (#13) Fix Analytics and pass client-id as ENV to Snapshot Job (#12) update docker image validation (#11) Add docker-registry and WorkQueue (#10) Set client id for analytics (#9) Fix CRD Registration (#8) Update issue repo link Update pkg paths to kubedb org (#7) Assign default Prometheus Monitoring Port (#6) Add Snapshot Backup, Restore and Backup-Scheduler (#4) Update Dockerfile Add mysql-util docker image (#5) Mysql db - Inititalizing (#2) Update README.md Update README.md Use client-go 5.x Update ./hack" }, { "data": "(#3) Add skeleton for mysql (#1) Merge commit 'be70502b4993171bbad79d2ff89a9844f1c24caa' as 'hack/libbuild' Prepare for release v0.7.0-rc.1 (#246) Prepare for release v0.7.0-beta.6 (#245) Create SRV records for governing service (#244) Prepare for release v0.7.0-beta.5 (#243) Create separate governing service for each database (#242) Update KubeDB api (#241) Update readme Prepare for release v0.7.0-beta.4 (#240) Update KubeDB api (#239) Update Kubernetes v1.18.9 dependencies (#238) Update KubeDB api (#237) Fix init validator (#236) Update KubeDB api (#235) Update KubeDB api (#234) Update Kubernetes v1.18.9 dependencies (#233) Update KubeDB api (#232) Update KubeDB api (#230) Update KubeDB api (#229) Update KubeDB api (#228) Update Kubernetes v1.18.9 dependencies (#227) Update KubeDB api (#226) Update KubeDB api (#225) Update KubeDB api (#223) Update repository config (#222) Update repository config (#221) Update repository config (#220) Initialize statefulset watcher from cmd/server/options.go (#219) Update KubeDB api (#218) Update Kubernetes v1.18.9 dependencies (#217) Publish docker images to ghcr.io (#216) Update KubeDB api (#215) Update KubeDB api (#214) Update KubeDB api (#213) Update KubeDB api (#212) Update repository config (#211) Add support to initialize Redis using Stash (#188) Update Kubernetes v1.18.9 dependencies (#210) Update Kubernetes v1.18.9 dependencies (#209) Update Kubernetes v1.18.9 dependencies (#207) Update repository config (#206) Update repository config (#205) Update Kubernetes v1.18.9 dependencies (#204) Use common event recorder (#203) Update Kubernetes v1.18.3 dependencies (#202) Prepare for release v0.7.0-beta.3 (#201) Update Kubernetes v1.18.3 dependencies (#200) Add license verifier (#199) Update Kubernetes v1.18.3 dependencies (#198) Use background deletion policy Update Kubernetes v1.18.3 dependencies (#195) Use AppsCode Community License (#194) Update Kubernetes v1.18.3 dependencies (#193) Prepare for release v0.7.0-beta.2 (#192) Update release.yml Update dependencies (#191) Fix build Add support for Redis v6.0.6 and TLS (#180) Update Kubernetes v1.18.3 dependencies (#187) Update Kubernetes v1.18.3 dependencies (#186) Update Kubernetes v1.18.3 dependencies (#184) Update Kubernetes v1.18.3 dependencies (#183) Update 
Kubernetes v1.18.3 dependencies (#182) Update Kubernetes v1.18.3 dependencies (#181) Remove dependency on enterprise operator (#179) Allow configuring k8s & db version in e2e tests (#178) Update to Kubernetes v1.18.3 (#177) Trigger e2e tests on /ok-to-test command (#176) Update to Kubernetes v1.18.3 (#175) Update to Kubernetes v1.18.3 (#174) Prepare for release v0.7.0-beta.1 (#173) include Makefile.env (#171) Update License (#170) Update to Kubernetes v1.18.3 (#169) Update ci.yml Update update-release-tracker.sh Update update-release-tracker.sh Add script to update release tracker on pr merge (#167) chore: replica alert typo (#166) Update .kodiak.toml Various fixes (#165) Update to Kubernetes v1.18.3 (#164) Update to Kubernetes v1.18.3 Create .kodiak.toml Update apis (#163) Use CRD v1 for Kubernetes >= 1.16 (#162) Update kind command Update dependencies Update to Kubernetes v1.18.3 (#161) Fix e2e tests (#160) Revendor kubedb.dev/apimachinery@master (#159) Use recommended kubernetes app labels Update crazy-max/ghaction-docker-buildx flag Pass annotations from CRD to AppBinding (#158) Trigger the workflow on push or pull request Use helm --wait Use updated operator labels in e2e tests (#156) Update CHANGELOG.md Support PodAffinity Templating (#155) Use stash.appscode.dev/[email protected] (#154) Version update to resolve security issue in github.com/apache/th (#153) Use rancher/[email protected] (#152) Introduce spec.halted and removed dormant crd (#151) Add `Pause` Feature (#150) Refactor CI pipeline to build once (#149) Update kubernetes client-go to 1.16.3 (#148) Update catalog values for make install command Update catalog values for make install command (#147) Use charts to install operator (#146) Matrix test for github actions (#145) Add add-license make target Update Makefile Add license header to files (#144) Run e2e tests in parallel (#142) Use log.Fatal instead of Must() (#143) Enable make ci (#141) Remove EnableStatusSubresource (#140) Fix tests for github actions (#139) Prepend redis.conf to args list (#136) Run e2e tests using GitHub actions (#137) Validate DBVersionSpecs and fixed broken build (#138) Update go.yml Enable GitHub actions Update changelog" } ]
{ "category": "App Definition and Development", "file_name": "20150819_stateless_replica_relocation.md", "project_name": "CockroachDB", "subcategory": "Database" }
- Feature Name: Stateless Replica Relocation
- Status: completed
- Start Date: 2015-08-19
- RFC PR:
- Cockroach Issue:

# Summary

A relocation is, conceptually, the transfer of a single replica from one store to another. However, the implementation is necessarily the combination of two operations:

1. Creating a new replica for a range on a new store.
2. Removing a replica of the same range from a different store.

For example, by creating a replica on store Y and then removing a replica from store X, you have in effect moved the replica from X to Y.

This RFC is suggesting an overall architectural design goal: that the decision to make any individual operation (either a create or a remove) should be stateless. In other words, the second operation in a replica relocation should not depend on a specific invocation of the first operation.

# Motivation

For an assortment of reasons, Cockroach will often need to relocate the replicas of a range. Most immediately, this is needed for repair (when a store dies and its replicas are no longer usable) and rebalance (relocating replicas on overloaded stores to stores with excess capacity).

A relocation must be expressed as a combination of two operations:

1. Creating a new replica for a range on a new store.
2. Removing a replica of the same range from a different store.

These operations can happen in either order as long as quorum is maintained in the range's raft group after each individual operation, although one ordering may be preferred over another.

Expressing a specific relocation (i.e. "move replica from store X to store Y") would require maintaining some persistent state to link the two operations involved. Storing that state presents a number of issues:

- Where is it stored, in memory or on disk?
- If it's in memory, does it have to be replicated through raft or is it local to one node?
- If on disk, can the persistent state become stale?
- How do you detect conflicts between two relocation operations initiated from different places?

This RFC suggests that no such relocation state should be persisted. Instead, a system that wants to initiate a relocation will perform only the first operation; a different system will later detect the need for a complementary operation and perform it. A relocation is thus completed without any state being exchanged between those two systems. By eliminating the need to persist any data about in-progress relocation operations, the overall system is dramatically simplified.

# Detailed design

The implementation involves a few pieces:

1. Each range must have a persisted target replication state. This does not prescribe specific replica locations; it specifies a required count of replicas, along with some desired attributes for the stores where they are placed.
2. The core mechanic is a stateless function which can compare the immediate replication state of a range to its target replication state; if the target state is different, this function will either create or remove a replica in order to move the range towards the target replication state. By running multiple times (adding or removing a replica each time), the target replication state will eventually be matched (a sketch of this function follows this list).
3. Any operations that wish to relocate a replica need only perform the first operation of the relocation (either a create or a remove). This will perturb the range's replication state away from the target; the core function will later detect that mismatch, and correct it by performing the complementary operation of the relocation (a remove or a create).
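To make the core mechanic concrete, the following is a minimal Go sketch of such a stateless decision function. It is only an illustration of the idea described above: every type and function name here (`ZoneConfig`, `RangeDesc`, `Allocator`, `maybeRepairRange`, and so on) is an assumption made for the example rather than an actual CockroachDB API, and the real store-selection criteria would live behind the allocator.

```go
package replication

// ZoneConfig is the persisted target replication state for a range: a replica
// count plus desired attributes for the stores that hold the replicas.
type ZoneConfig struct {
	ReplicaCount int
	StoreAttrs   []string // e.g. "ssd", "us-east"
}

// StoreID identifies a store in the cluster.
type StoreID int32

// RangeDesc describes the current replication state of a range.
type RangeDesc struct {
	RangeID  int64
	Replicas []StoreID // stores that currently hold a replica
}

// Allocator encapsulates the store-selection criteria. The replication queue
// and any initiators (repair, rebalance) are expected to agree closely on
// these criteria to avoid thrashing.
type Allocator interface {
	// ChooseTarget picks a store matching attrs that does not already hold a replica.
	ChooseTarget(attrs []string, existing []StoreID) (StoreID, bool)
	// ChooseRemoval picks the least desirable existing replica.
	ChooseRemoval(existing []StoreID) (StoreID, bool)
}

// maybeRepairRange is the stateless decision function: it looks only at the
// current descriptor and the target zone config, never at who perturbed the
// range or why. Each call performs at most one add or one remove; repeated
// calls converge on the target state.
func maybeRepairRange(
	desc RangeDesc,
	zone ZoneConfig,
	alloc Allocator,
	addReplica func(RangeDesc, StoreID) error,
	removeReplica func(RangeDesc, StoreID) error,
) error {
	switch {
	case len(desc.Replicas) < zone.ReplicaCount: // under-replicated: add one replica
		if target, ok := alloc.ChooseTarget(zone.StoreAttrs, desc.Replicas); ok {
			return addReplica(desc, target)
		}
	case len(desc.Replicas) > zone.ReplicaCount: // over-replicated: remove one replica
		if victim, ok := alloc.ChooseRemoval(desc.Replicas); ok {
			return removeReplica(desc, victim)
		}
	}
	return nil // already at the target state: nothing to do
}
```

Because the function consults only the current descriptor and the target state, it does not matter which component created the mismatch or why; any caller (the replica scanner, a repair pass, or a rebalance pass) can invoke it repeatedly until the range converges.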
The first piece is already present: each range has a zone configuration which determines the target replication state for the range.

The second piece, the core mechanic, will be performed by the existing "replicate queue" which will be renamed the "replication queue". This queue is already used to add replicas to ranges which are under-replicated; it can be enhanced to remove replicas from over-replicated ranges, thus satisfying the basic requirements of the core mechanic.

The third piece simply informs the design of systems performing relocations; for example, the upcoming repair and rebalance systems (still being planned). After identifying a relocation opportunity, these systems will perform the first operation of the relocation (add or remove) and then insert the corresponding replica into the local replication queue. The replication queue will then perform the complementary operation.

The final complication is how to ensure that the replicate queue promptly identifies ranges outside of their ideal target state. As a queue it will be attached to the replica scanner, but we will also want to enqueue a replica immediately whenever we knowingly perturb the replication state. Thus, components initiating a relocation (e.g. rebalance or repair) should immediately enqueue their target replicas after changing the replication state, as sketched below.
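Continuing the same illustrative Go package as the sketch above (and reusing its `RangeDesc` and `StoreID` types), the pattern for an initiator such as the rebalancer might look like the following. `replicationQueue`, `MaybeAdd`, and `rebalanceOnto` are names invented for this example, not the actual implementation.

```go
package replication

// replicationQueue holds ranges whose replication state should be inspected
// by the stateless decision function.
type replicationQueue struct {
	pending chan RangeDesc
}

// MaybeAdd enqueues a range for inspection. Dropping the range when the queue
// is full is acceptable because the periodic replica scanner will eventually
// find it anyway.
func (q *replicationQueue) MaybeAdd(desc RangeDesc) {
	select {
	case q.pending <- desc:
	default:
	}
}

// rebalanceOnto performs only the "create" half of a relocation and then
// immediately enqueues the range. It never records which replica should be
// removed; that decision is left entirely to the replication queue.
func rebalanceOnto(
	desc RangeDesc,
	target StoreID,
	q *replicationQueue,
	addReplica func(RangeDesc, StoreID) error,
) error {
	if err := addReplica(desc, target); err != nil {
		return err
	}
	// We knowingly perturbed the replication state, so enqueue right away
	// instead of waiting for the periodic replica scanner.
	q.MaybeAdd(desc)
	return nil
}
```

The important property is that the initiator only perturbs the state and hands the range to the queue; no relocation state is recorded anywhere.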
For example, the replicate queue could be very busy, or an untimely crash could result in the range being under- or over-replicated without being in the replicate queue on any store. However, many of these concerns are allayed by the existence of repair; if the node goes down, the repair system will add the range to the replicate queue on another store. Even in an exotic failure scenario, the stateless design will eventually detect the replication anomaly through the normal operation of the replica scanner. Our raft implementation currently lacks support for non-voting members; as a result, some types of relocation will temporarily make the effected range more fragile. When initially creating a replica, it is very far behind the current state of the range and thus needs to receive a snapshot. It may take some time before the range fully catches up and can take place in quorum commits. However, without non-voting replicas we have no choice but to add the new replica as a full member, thus changing the quorum requirements of the group. In the case of odd-numbered ranges, this will increase the quorum count by one, with the new range unable to be part of a quorum decision. This increases the chance of losing quorum until that replica is caught up, thus reducing availability. This could be mitigated somewhat without having to completely add-non voting replicas; in preparation for adding a replica, we could manually generate a snapshot and send it to the node before adding it to the raft" }, { "data": "This would decrease the window of time between adding the replica and having it fully catch up. \"Lack of Non-voting replicas\" is listed this as a drawback because going forward with relocation without non-voting replication introduces this fragility, regardless of how relocation decisions are made. Stateless relocation will still work correctly when non-voting replicas are implemented; there will simply be a delay in the case where a replica is added first (e.g. rebalance), with the removal not taking place until the non-voting replica catches up and is upgraded to a full group member. This is not trivial, but will still allow for stateless decisions. The main alternative would be some sort of stateful system, where relocation operations would be expressed explicitly (i.e. \"Move replica from X to Y\") and then seen to completion. For reasons outlined in the \"motivation\" section, this is considered sub-optimal when making distributed decisions. The ultimate expression of this would be the designation of a \"Relocation master\", a single node that makes all relocation decisions for the entire cluster There is enough information in gossip for a central node to make acceptable decisions about replica movement. In some ways the individual decisions would be worse, because they would be unable to consider current raft state; however, in aggregate the decisions could be much better, because the central master could consider groups of relocations together. For example, it would be able to avoid distributed pitfalls such as under-loaded nodes being suddenly inundated with requests for new replicas. It would be able to more quickly and correctly move replicas to new nodes introduced to the system. This system would also have an easier time communicating the individual relocation decisions that were made. This could be helpful for debugging or tweaking relocation criteria. 
However, It's important to note that a relocation master is not entirely incompatible with the \"stateless\" design; the relocation master could simply be the initiator of the stateless operations. You could thus get much of the improved decision making and communication of the master without having to move to a stateful design. The biggest unresolved issue is the requirement for relocation decisions to be constrained to the raft leader. For example, there is no firm need for a relocation operation to be initiated from a range leader; a secondary replica can safely initiate an add or remove replica, with the safety being guaranteed by a combination of raft and cockroach's transactions. However, if the criteria for an operation wants to consider raft state (such as which nodes are behind in replication), those decisions could only be made from the raft leader (which is the only node that has the full raft state). Alternatively, an interface could be provided for non-leader members to query that information from the leader." } ]
{ "category": "App Definition and Development", "file_name": "NOTES.md", "project_name": "RethinkDB", "subcategory": "Database" }
[ { "data": "Fixes for Data Explorer autocomplete, compilation fixes, other small fixes. None. Server Avoid unaligned memory accesses and bus errors on 32-bit ARM (#7128) Compilation Replace bad zlib URL with working one (#7121) Fix new include errors (#7119) Avoid using deprecated `fstat64` function (#7141) Web UI Fix data explorer auto-complete (#7111, #7113) Update ReQL documentation link (#7112, #7114) @sg5506844 Sam Hughes (@srh) Windows release, FreeBSD 13 compilation, certificate chain support, no update checker complaint. None. Some console output is altered, though. Server Fail correctly when failing to remove directory (#4647) Improve --initial-password help text (#7064) Add machine arch and uname to rethinkdb version output (#7060) Improve general help formatting (#7095) Update OpenSSL version (if fetched and statically linked) to 3.0.7 (#7099) Make TLS configuration support certificate chains (#6969) Regenerate the version correctly (eliminating update checker annoyance) (#7100) Compilation Get the Windows build working (#7074, #7075, #7082, #7083) Switch from QuickJS to QuickJSpp (our fork hosted at https://github.com/rethinkdb/quickjspp ) Include <time.h> in files using time function (#7091) and add more explicit includes for future-proofing (#7098) Get FreeBSD build working (#7088, #7098) Avoid some uses of egrep (#7092) Web UI Update jQuery to 3.6.1 (#6937) Mariano Rodrguez (@MarianoRD) Hung-Te Lin (@hungte) Antoni Mrz (@MrBoombastic) unx (@unxcepted) Russel Beswick (@besworks) Sam Hughes (@srh) Gbor Boros (@gabor-boros) Special thanks are due to Antoni Mrz and unx for getting the Windows release working. Released on 2022-04-24 Bitrot, futureproofing, and bug fix release. No migration is required when upgrading from RethinkDB 2.4.x. Please read the if you're upgrading from an older version. RethinkDB 2.4.x servers cannot be mixed with servers running RethinkDB 2.3.x or earlier in the same cluster. The `r.js` ReQL command now uses QuickJS to run instead of v8. Because RethinkDB's v8 version was old, this will allow you to use newer JavaScript features. However, performance and the results of your JavaScript code may differ. (Issue numbers point into the https://github.com/rethinkdb/rethinkdb bugtracker. For a completionist list of changes, run `git log v2.4.1..v2.4.2`.) Server Fix 32-bit overflow bug with Raft log indexes (#7036) Make `r.http` requests use HTTP/1.1 (#7012). This works around `r.http`'s inability to parse HTTP/2 responses. 
Fix saslname decode logic (#7016) The limit of 127 threads has been removed (#6895) On newer MacOSes, make assertion failures abort the process (#7049) On newer MacOS builds, upon assertion failures, generate backtraces correctly (#7049) Fix an O(1) memory leak (#7010) Fix some newer MacOS warnings and compilation errors (#7014) Fix some other GCC warnings (various commits) Update curl dependency to 7.82.0 Update jemalloc dependency to 5.2.1 Update libidn dependency to 1.38 Update openssl dependency (if fetched) to 3.0.1 Get Apple M1 building working Compilation Split out web assets code and its build dependency chain to the `old_admin` branch (#6979) Package generation updates for Ubuntu 21.10, 22.04, and Debian Bullseye (#7021) More package generation updates for Debian (commit da34c2f) Make install-include depend on dependency install witness (commit 79c6857) Make RPM building explicitly name fetched dependencies and dynamically link (#7035) Add RDBNOBACKTRACE flag for musl users (#7052) A patch for wider boost compatibility was supplied (#6934) Web UI The web UI was excised to the branch https://github.com/rethinkdb/rethinkdb/tree/old_admin (#6979) Many thanks to the following contributors whose patches made it into the RethinkDB 2.4.2 server release: Rui Chen (@chenrui333) Mathieu Schroeter (@Skywalker13) Yong-Hao Zou (@zouyonghao) zadcha (@zadsza) Leo Arias (@elopio) And many thanks go to the driver committers, alternative admin console client developers, bug reporters, and other helpful people who are not listed here. -- Released on 2020-08-13 Bug fix release. No migration is required when upgrading from RethinkDB" }, { "data": "Please read the if you're upgrading from an older version. RethinkDB 2.4.0 servers cannot be mixed with servers running RethinkDB 2.3.x or earlier in the same cluster. (Issue numbers point into the https://github.com/rethinkdb/rethinkdb bugtracker.) Server Force flushing when stdout isn't on a TTY (#6819) Fix code style issues (#6842) Fix warnings about missing assignment definitions in clang 10.0.0 (commit 3ca96904) Compilation Fix aarch64 builds in 2.4.x using clang++ (commit 01683d3e) Fix Linux Mint 20 build errors (commit 328ebfb) Web UI Reintroduce update checker (#6879) Allow hyphens in table and database names (#6908) Released on 2019-12-19 RethinkDB 2.4 introduces write hooks and a few other enhancements. Data files from RethinkDB version 1.16 onward will be automatically migrated. As with any major release, back up your data files before performing the upgrade. Please read the [RethinkDB 2.3.0 release notes][release-notes-2.3.0] if you're upgrading from version 2.2.x or earlier. RethinkDB 2.4.0 servers cannot be mixed with servers running RethinkDB 2.3.x or earlier in the same cluster. Except for JavaScript, the official drivers are now generally maintained as separate projects. But note that there is some coupling of RethinkDB console commands, like `rethinkdb dump`, with the Python driver. RethinkDB 2.4 contains some old copies of the drivers to keep `./test/run` working with minimal effort. The JavaScript driver is still used in the Web UI. However, the web assets are now pre-generated, at src/gen/web_assets.cc, making the presence of the driver in the repository not strictly necessary. 
Here are links to the official drivers: Python: https://github.com/rethinkdb/rethinkdb-python/ Ruby: https://github.com/rethinkdb/rethinkdb-ruby/ Java: https://github.com/rethinkdb/rethinkdb-java/ JavaScript: https://github.com/rethinkdb/rethinkdb/tree/next/drivers/javascript Write hooks add a new field to the table configuration. A bugfix changes the `match` command's behavior on empty regexes. It had previously been behaving incorrectly. (Issue numbers point into the https://github.com/rethinkdb/rethinkdb bugtracker.) ReQL Added the `setwritehook` and `getwritehook` commands, which attach to tables a function that can modify the behavior of any write. (#5813) Added bitwise operators for number types: `bitand`, `bitnot`, `bit_or`, `bitsal` (or `bitshl`), `bitsar`, `bitshr`, and `bit_xor`. (#6534) Permitted the hyphen character (`-`) to be used in table names. (#5537) Users may be granted permissions on system tables. (#5692) Make `iso8601` command round, not truncate. (#6909) Fix timestamp millisecond down-truncation bug. (#6272) Server Fixed crash with limit change feed `inserted` guarantee failure. (#6710) Avoid using DNS resolution when finding network interface addresses. (#6588) Removed update checker. (#6791) Added experimental support for arm64/aarch64. (#6438) Added experimental support for Power8/LE platform. (#6317) Fixed race condition in the query cache. (#6564) Avoid quadratic growth in segmented vector. (#6385) Big-Endian fixes and s390x support. (#6242) Web UI Implemented a new table viewer widget for browsing the contents of tables. (#6767) Improved table page performance in case with many databases. (#6790) Compilation The web assets are now pre-generated, so that macOS users can build them. (#6770) Debian package building is now parallelized. (#6780) Fixed extproc spawner bug. (#5572) Allowed building against libressl. (#6671) JavaScript Driver Avoided mutating object passed to `r.connect`. (#6575) Python Driver See more details at https://github.com/rethinkdb/rethinkdb-python/releases Other Drivers Changes omitted, as they're in separate repositories. Released on 2017-07-17 Bug fix release This is the first release of RethinkDB since October 2016. The RethinkDB project has [joined the Linux Foundation][blog-new-rethinkdb]. This release is brought to you by volunteers from the Open RethinkDB team. The RethinkDB source code is now licensed under an . On 32-bit platforms and on Windows (64 and 32 bit), RethinkDB 2.3.6 servers should not be mixed with servers running RethinkDB 2.3.3 or older in the same" }, { "data": "Doing so can lead to server crashes when using the web UI or when accessing the `logs` system table. On 64-bit platforms, RethinkDB 2.3.6 servers can be mixed with older RethinkDB 2.3.x servers in the same cluster. We recommend that you run a mixed-version cluster only temporarily for upgrading purposes. No migration is required when upgrading from RethinkDB 2.3.x. Please read the if you're upgrading from an older version. 
Server Improved the compatibility of the web UI with Chrome 49 and Edge (#5878, #5426, #5300) -- @danielmewes Fixed a crash caused by unwanted connections (#6084) -- @danielmewes Fixed a crash caused by recreating indexes with active changefeeds (#6093) -- @danielmewes Sizes passed to `sample` are now bound by the array size limit (#6148) -- @AtnNn Fixed a crashing bug in the implementation of the `interleave` argument to `union` (#6139) -- @AtnNn Fixed a crash caused by `eqJoin` of system tables when using the `uuid` `identifierFormat` (#6108) -- @nighelles Fixed a bug that caused `r.match('')` to return wrong results (#6241) -- @AtnNn Miscellaneous regression fixes and code improvements by @srh and @VeXocide Fixed argument order in pretty-printed queries in the jobs table (#6240) -- @AtnNn Packaging Fix glibc version detection in RPM packaging script (#6229) -- @gamename Add packages for Ubuntu Yakkety and Zesty (#6364) -- @AtnNn -- Released on 2016-08-26 Bug fix release On 32-bit platforms and on Windows (64 and 32 bit), RethinkDB 2.3.5 servers should not be mixed with servers running RethinkDB 2.3.3 or older in the same cluster. Doing so can lead to server crashes when using the web UI or when accessing the `logs` system table. On 64-bit platforms, RethinkDB 2.3.5 servers can be mixed with older RethinkDB 2.3.x servers in the same cluster. We recommend that you run a mixed-version cluster only temporarily for upgrading purposes. No migration is required when upgrading from RethinkDB 2.3.x. Please read the if you're upgrading from an older version. Server Improved the efficiency of the on-disk garbage collector to reduce the risk of excessive file growth (#5923) Improved the latency of read queries under heavy write loads (#6072) Fixed a bug that could cause the server to crash with a deserialization error or to stop completing any table reads (#6033) Fixed a bug in the implementation of the `interleave` option of the `union` command, which could potentially lead to results being generated in the wrong order (#6041) Fixed a bug in the batch handling of the `fold` and multi-stream `map` commands, that would stop results from being generated correctly if these commands were applied to a changefeed (#6007) Fixed an issue that could cause proxies to remain listed in the `connected_to` field of the `server_status` table, even after they had disconnected (#5871) Fixed the detection of non-deterministic conflict functions in the `insert` command (#5842) Improved the Raft election timeout logic to avoid infinite Raft election loops (#6038) Improved the response time when reading from the `table_status` system table (#4589) The server no longer logs the message `Rejected a connection from server X since one is already open` when trying to connect to itself (#5456) Fixed a bug that could cause an `Uncaught exception` server crash if a TLS-encrypted connection was closed during a certain connection stage (#5904) Fixed a bug in `merge` that could cause `r.literal` objects to remain after the `merge` and be stored in a table (#5977) On Windows: Fixed a bug in the `r.http` command that resulted in decoding issues (#5924) On Windows: RethinkDB now binds TCP ports exclusively (#6008) On Windows: No longer print an error to the log whenever a connection attempt fails (no" }, { "data": "#) Fixed a build issue that caused system libraries to not be found during `make` on OpenSUSE (#2363) JavaScript driver Fixed the server nonce validation in the connection handshake (#5916) The `host` argument to 
`connect` is now optional (#5846) Java driver Cursors now implement the `Closeable` interface (#5468) Fixed no-reply queries as run through `runNoReply` (#5938) Fixed a bug in the `reconnect` method (#5841) Fixed a memory leak in the `Connection` object that was caused by the driver not properly cleaning up closed cursors (#5980) Python driver The `asyncio` loop type is now available when using the driver from a Python .egg file (#6043) Ruby driver Fixed a rounding issue with time objects (#5825) Many thanks to external contributors from the RethinkDB community for helping us ship RethinkDB 2.3.5. Arve Seljebu (@arve0) Ben Sharpe (@bsharpe) Brian Chavez (@bchavez) Dan Wiechert (@DWiechert) mbains (@mbains) QianJin2013 (@QianJin2013) Raman Gupta (@rocketraman) -- Released on 2016-06-03 Bug fix release On 32-bit platforms and on Windows (64 and 32 bit), RethinkDB 2.3.4 servers should not be mixed with older RethinkDB 2.3.x servers in the same cluster. Doing so can lead to server crashes when using the web UI or when accessing the `logs` system table. On 64-bit platforms, RethinkDB 2.3.4 servers can be mixed with older RethinkDB 2.3.x servers in the same cluster. We recommend that you run a mixed-version cluster only temporarily for upgrading purposes. No migration is required when upgrading from RethinkDB 2.3.x. Please read the if you're upgrading from an older version. Server Fixed a segmentation fault in the `orderBy.limit` changefeed implementation (#5824) Fixed an incompatibility in the cluster protocol between Windows and Linux / OS X servers (#5819) Python driver Fixed various bugs in the connection class for the asyncio event loop (#5795, #5816, #5820) Many thanks to external contributors from the RethinkDB community for helping us ship RethinkDB 2.3.4. Ultrabug (@ultrabug) -- Released on 2016-06-01 Bug fix release RethinkDB 2.3.3 servers can be mixed with older RethinkDB 2.3.x servers in the same cluster. We recommend that you run a mixed-version cluster only temporarily for upgrading purposes. No migration is required when upgrading from RethinkDB 2.3.x. Please read the if you're upgrading from an older version. RethinkDB 2.3.0 was the first version to include native Windows compatibility. In RethinkDB 2.3.3, the Windows port is ready to emerge from \"beta\" testing. We now officially support RethinkDB on the Windows platform alongside our existing support for Linux and Mac OS X. We're also extending our services to include RethinkDB on Windows. Although RethinkDB is now stable on Windows, there are still a few that we are actively working to address. We also haven't yet carried out as much performance tuning on the Windows port as we have on the Linux and OS X releases. 
Server Fixed a bug in `orderBy.limit` changefeeds that caused the server to crash with `Guarantee failed: [subit != realadded.end()]` (#5561) Improved the performance of the `table_status` system table when the cluster is under high load (#5586) Fixed a race condition in the cluster connection logic that could cause occasional crashes with a `Guarantee failed: [refcount == 0]` error (#5783) Fixed a stack overflow when executing queries with a very high number of chained commands (#5792) Made the `fold` command work on a changefeed stream (#5800) Fixed the server uptime calculation on Windows (#5388) Fixed source code incompatibilities with GCC 6.0 (#5757) JavaScript driver The `Connection` class is now exported from the RethinkDB JavaScript module (#5758) Java driver Added the `clientPort` and `clientAddress` methods to the `Connection` class in the Java" }, { "data": "(#5571) Many thanks to external contributors from the RethinkDB community for helping us ship RethinkDB 2.3.3. Gergely Nemeth (@gergelyke) -- Released on 2016-05-06 Bug fix release RethinkDB 2.3.2 servers can be mixed with older RethinkDB 2.3.x servers in the same cluster. We recommend that you run a mixed-version cluster only temporarily for upgrading purposes. No migration is required when upgrading from RethinkDB 2.3.x. Please read the if you're upgrading from an older version. Server Fixed a data corruption issue in the secondary index construction logic. The issue could be triggered by creating a secondary index while the table is under write load and could result in a `Guarantee failed: [token.has()]` error when accessing the index (#5715) Fixed an issue in the Windows beta release that caused data corruption whenever growing a table to more than 4 GB (#5719) Fixed a crash with the message `Guarantee failed: [num_subs == 0]` that could occur when shutting down a server while trying to start new changefeeds at the same time (#5708) Fixed a crash with the message `Guarantee failed: [!pair.first.inner.overlaps(region.inner)]` that could occur when using changefeeds while resharding (#5745) Added a `--tls-min-protocol` server option for reducing the minimum required TLS protocol version. Drivers using an old OpenSSL version (e.g. on OS X) might require this option in order to connect to a TLS-enabled RethinkDB server (#5734) Added a check to disallow using `order_by` with a non-deterministic predicate function (#5548) Fixed a segmentation fault at address 0x18 that could occur in low-memory conditions on Linux (#5348) Fixed a stack overflow issue when parsing very deeply nested objects (#5601) Improved the stack protection logic in order to avoid exceeding the system's memory map limit. 
This issue affected Linux servers when having a very high number of concurrently running queries (#5591) The server is now built with jemalloc version 4.1 on Linux (#5712) Fixed the message that is displayed when a query times out in the Data Explorer (#5113) Improved the handling and reporting of OpenSSL-related errors (#5551) Added a new server option `--cluster-reconnect-timeout` to control how quickly RethinkDB gives up trying to reconnect to a previously connected server (#5701) Fixed a race condition when writing to system tables that could lead to incorrect update results (#5711) A custom conflict resolution function for the `insert` command can now return `null` in order to delete a conflicting document (#5713) Improved the error message emitted when opening a changefeed on an `orderBy.limit` query that has additional transformations (#5721) Fixed an incompatibility with Safari that could cause undesired page reloads in the web UI (#3983) Python driver The Python driver's `ssl` option now supports older Python versions from 2.7 up (#4815) Added a REPL mode that can be launched through the new `python -m rethinkdb` command (#5147) Added a cache for PBKDF2 authentication tokens to reduce the costs of repeatedly opening connections (#5614) Refactored how the RethinkDB import and export scripts load the driver (#4970) Improved the error message reported when attempting to connect to a pre-2.3.0 server (#5678) Fixed an incompatibility with Python 3 in the `rethinkdb dump` script that caused `name 'file' is not defined` errors (#5694) Fixed an incompatibility with Python 3.3 in the protocol handshake code (#5742) Many thanks to external contributors from the RethinkDB community for helping us ship RethinkDB 2.3.2. In no particular order: Matt Broadstone (@mbroadst) Saad Abdullah (@saadqc) -- Released on 2016-04-22 Bug fix release RethinkDB 2.3.1 servers can be mixed with RethinkDB 2.3.0 servers in the same cluster. We recommend that you run a mixed-version cluster only temporarily for upgrading" }, { "data": "No migration is required when upgrading from RethinkDB 2.3.0. Please read the if you're upgrading from an older version. We now provide packages for Ubuntu 16.04 (Xenial Xerus). The `r.http` command no longer supports fetching data from encrypted `https` resources on OS X 10.7 and 10.8 (#5681). Newer releases of OS X are not affected. 
Server Fixed a segmentation fault triggered by performing a batched `insert` with multiple occurrences of the same primary key (#5683) Fixed an uncaught exception bug in the `hostnametoips` function that could be triggered by connecting a server with an unresolvable address (#5629) Fixed a query failure when opening a changefeed with the `squash: true` option on a system table (#5644) Fixed a crash that was triggered when joining servers with identical server names (#5643) Fixed an issue with the random number generator that stopped initial server names from getting randomized correctly (#5655) Fixed a bug that caused memory to not be released properly after dropping a table or removing its replicas from a server (#5666) Fixed a bug causing `eqJoin` to freeze the server when chained after a `changes` command (#5696) Fixed an issue that caused the `returnChanges: \"always\"` option of the `insert` command to miss certain types of errors in the `changes` result (#5366) Fixed a crash on OS X 10.7 when using the .dmg uninstaller (#5671) The OS X .dmg uninstaller is now signed (#5615) Fixed an edge case in the error handling for auto-generated primary keys when inserting into a system table (#5691) RethinkDB can now be compiled with GCC 5.3 (#5635) JavaScript driver Renamed the `username` option of the `r.connect` command to `user`. The `username` option is still supported for backwards-compatibility with existing code (#5659) Improved the error message shown when connecting with the 2.3 driver to an older server (#5667) Python driver Improved the error message that is emitted when trying to connect to a server with a wrong password (#5624) Fixed the \"global name 'options' is not defined\" bug in the `rethinkdb import` script (#5637) Fixed a Python 3 incompatibility in the `rethinkdb restore` script (#5647) Java driver Implemented the timeout option for `getNext` (#5603) Losing the server connection while having a changefeed open now correctly results in an error (#5660) The driver now caches authentication nonces in order to speed up connection setup (#5614) Many thanks to external contributors from the RethinkDB community for helping us ship RethinkDB 2.3.1. In no particular order: Brian Chavez (@bchavez) Neil Hanlon (@NeilHanlon) Jason Soares (@JasonSoares) Magnus Lundgren (@iorlas1) -- Released on 2016-04-06 RethinkDB 2.3 introduces a users and permissions system, TLS encrypted connections, a Windows beta, and numerous improvements to the ReQL query language. ReQL improvements include up to 10x better performance for distributed joins, and a new `fold` command that allows you to implement efficient stateful transformations on streams. Read the for more details. Data files from RethinkDB version 1.16 onward will be automatically migrated. As with any major release, back up your data files before performing the upgrade. If you're upgrading from RethinkDB 1.15.x or earlier, please read the to find out about the required migration steps. RethinkDB 2.3.0 servers cannot be mixed with servers running RethinkDB 2.2.x or earlier in the same cluster. If you migrate a cluster from a previous version of RethinkDB and have an `auth_key` set, the `authkey` is turned into the password for the `\"admin\"` user. If no `authkey` is set, a new `\"admin\"` user with an empty password is automatically created during migration. RethinkDB" }, { "data": "adds a new restriction when adding a server to an existing cluster. 
If the existing cluster has a non-empty password set for the `\"admin\"` user, a new server is only allowed to join the cluster if it has a password set as well. This is to avoid insecure states during the join process. You can use the new `--initial-password auto` command line option for joining a new server or proxy to a password-protected cluster. The `--initial-password auto` option assigns a random `\"admin\"` password on startup, which gets overwritten by the previously configured password on the cluster once the join process is complete. The `eqJoin` command no longer returns results in the order of its first input. You can pass in the new `{ordered: true}` option to restore the previous behavior. Operations on geospatial multi-indexes now emit duplicate results if multiple index keys of a given document match the query. You can append the `.distinct()` command in order to restore the previous behavior. Changefeeds on queries of the form `orderBy(...).limit(...).filter(...)` are no longer allowed. Previous versions of RethinkDB allowed the creation of such changefeeds, but did not provide the correct semantics. The commands `r.wait`, `r.rebalance` and `r.reconfigure` can no longer be called on the global `r` scope. Previously, these commands defaulted to the `\"test\"` database implicitly. Now they have to be explicitly called on either a database or table object. For example: `r.db(\"test\").wait()`, `r.db(\"test\").rebalance()`, etc. The `{returnChanges: \"always\"}` option with the `insert` command will now add `{error: \"...\"}` documents to the `changes` array if the insert fails for some documents. Previously failed documents were simply omitted from the `changes` result. The special values `r.minval` and `r.maxval` are no longer permitted as return values of secondary index functions. The JavaScript `each` function is deprecated in favor of `eachAsync`. In a future release, `each` will be turned into an alias of `eachAsync`. We recommend converting existing calls of the form `.each(function(err, row) {})` into the `eachAsync` equivalent `.eachAsync(function(row) {}, function(err) {})`. You can read more about . The `auth_key` option to `connect` in the official drivers is deprecated in favor of the new `user` and `password` options. For now, a provided `auth_key` value is mapped by the drivers to a password for the `\"admin\"` user, so existing code will keep working. We no longer provide packages for the Debian oldstable distribution 7.x (Wheezy). When compiling from source, the minimum required GCC version is now 4.7.4. Added support for user accounts, user authentication, and access permissions. Users can be configured through the `\"users\"` system table. Permissions can be configured through either the new `\"permissions\"` system table or through the `grant` command. (#4519) Driver, intracluster and web UI connections can now be configured to use TLS encryption. For driver and intracluster connections, the server additionally supports certificate verification. (Linux and OS X only, #5381) Added beta support for running RethinkDB on Windows (64 bit only, Windows 7 and up). (#1100) Added a `fold` command to allow stateful transformations on ordered streams. (#3736) Added support for changefeeds on `getIntersecting` queries. (#4777) Server The `--bind` option can now be specified separately for the web UI (`--bind-http`), client driver port (`--bind-driver`) and cluster port (`--bind-cluster`). 
(#5467)
- RethinkDB servers now detect non-transitive connectivity in the cluster and raise a `"non_transitive_error"` issue in the `"current_issues"` system table when detecting an issue. Additionally, the `"server_status"` system table now contains information on each server's connectivity in the new `connected_to` field. (#4936)
- Added a new `"memory_error"` issue type for the `"current_issues"` system table that is displayed when the RethinkDB process starts using swap space. (Linux only). (#1023)
- Reduced the number of scenarios that require index migration after a RethinkDB upgrade. Indexes no longer need to be migrated unless they use a custom index function. (#5175)
- Added support for compiling RethinkDB on Alpine Linux. (#4437)
- Proxy servers now print their server ID on startup. (#5515)
- Raised the maximum query size from 64 MB to 128 MB. (#4529)
- Increased the maximum number of shards for a table from 32 to 64. (#5311)
- Implemented a `--join-delay` option to better tolerate unstable network conditions (#5319)
- Added an `--initial-password` command line option to secure the process of adding new servers to a password-protected cluster. (#5490)
- Implemented a new client protocol handshake to support user authentication. (#5406)

ReQL:
- Added an `interleave` option to the `union` command to allow merging streams in a particular order. (#5090)
- Added support for custom conflict-resolution functions to the `insert` command. (#3753)
- The `insert` command now returns changes in the same order in which they were passed in when the `returnChanges` option is used. (#5041)
- Added an `includeOffsets` option to the `changes` command to obtain the positions of changed elements in `orderBy.limit` changefeeds. (#5334)
- Added an `includeTypes` option to the `changes` command that adds a `type` field to every changefeed result. (#5188)
- Made geospatial multi-indexes behave consistently with non-geospatial multi-indexes if a document is indexed under multiple matching keys. `getIntersecting` and `getNearest` now return duplicates if multiple index keys match. (#3351)
- The `and`, `or` and `getAll` commands can now be called with zero arguments. (#4696, #2588)
- Disallowed calling `r.wait`, `r.rebalance` and `r.reconfigure` on the global scope to avoid confusing semantics. (#4382)
- The `count` and `slice` commands can now be applied to strings. (#4227, #4228)
- Improved the error message from `reconfigure` if too many servers are unreachable. (#5267)
- Improved the error message for invalid timezone specifications. (#1280)

Performance:
- Implemented efficient batching for distributed joins using the `eqJoin` command. (#5115)
- Optimized `tableCreate` to complete more quickly. (#4746)
- Reduced the CPU overhead of ReQL function calls and term evaluation. (no issue number)

Web UI:
- The web UI now uses the `conn.server()` command for getting information about the connected server. (#5059)

All drivers:
- Implemented a new protocol handshake and added `user` and `password` options to the `connect` method to enable user authentication. (#5458, #5459, #5460, #5461)
- Added `clientPort` and `clientAddress` functions to the connection objects in the JavaScript, Python and Ruby drivers. (#4796)

JavaScript driver:
- Added new variants of the `cursor.eachAsync` function. (#5056)
- Added a `concurrency` option for `cursor.eachAsync`. (#5529)
- `r.min`, `r.max`, `r.sum`, `r.avg` and `r.distinct` now accept an array argument (#4594)

Python driver:
- Added a `"gevent"` loop type to the Python driver.
(#4433) Printing a cursor object now displays the first few results. (#5331) Removed the dependency on `tar` for the `rethinkdb restore` and `rethinkdb dump` commands. (#5399) Added a `--tls-cert` option to the `rethinkdb import`, `rethinkdb export`, `rethinkdb dump`, `rethinkdb restore` and `rethinkdb index-rebuild` commands to enable TLS connections. (#5330) Added `--password` and `--password-file` options to the `rethinkdb import`, `rethinkdb export`, `rethinkdb dump`, `rethinkdb restore` and `rethinkdb index-rebuild` commands to connect to password-protected servers. (#5464) Added a `--format ndjson` option to `rethinkdb export` that allows exporting tables in a newline-separated JSON format. (#5101) Made `rethinkdb dump` `rethinkdb restore` and `rethinkdb import` able to write to stdout and load data from stdin respectively. (#5525, #3838) `r.min`, `r.max`, `r.sum`, `r.avg` and `r.distinct` now accept an array argument (#5494) Java driver: Made it easier to publish the driver on local Ivy and Maven repositories. (#5054) Server Fixed a crash with the message `[cmp != 0]` when querying with `r.minval` or `r.maxval` values inside of an" }, { "data": "(#5542) Fixed in issue that caused orphaned tables to be left behind when deleting a database through the `\"db_config\"` system table. (#4465) Fixed a crash when trying to restore a backup from a version of RethinkDB that is too new. (#5104) Fixed a bug in data migration from RethinkDB 2.0.x and earlier. (#5570) Fixed a race condition causing server crashes with the message `Guarantee failed: [!pair.first.inner.overlaps(region.inner)]` when rebalancing a table while simultaneously opening new changefeeds. (#5576) Fixed an issue causing backfill jobs to remain in the `jobs` system table even after finishing. (#5223) ReQL Disallowed changefeeds on queries of the form `orderBy(...).limit(...).filter(...)`, since they do not provide the correct semantics. (#5325) Coercing a binary value to a string value now properly checks for illegal characters in the string. (#5536) Web UI Fixed the \"The request to retrieve data failed\" error when having an orphaned table whose database has been deleted. (#4985) Fixed the maximum number of shards display for clusters with more than 32 servers. (#5311) Fixed an empty \"Connected to\" field when accessing the web UI through a RethinkDB proxy server. (#3182) JavaScript driver Fixed the behavior of `cursor.close` when there are remaining items in the buffer. (#5432) Python driver Fixed a bug in the `str` function of cursor objects. (#5567) Fixed the handling of the error generated for over-sized queries. (#4771) Ruby driver Fixed the handling of the error generated for over-sized queries. (#4771) Many thanks to external contributors from the RethinkDB community for helping us ship RethinkDB 2.3. 
In no particular order: Aaron Rosen (@aaronorosen) @crockpotveggies Daniel Hokka Zakrisson (@dhozac) Igor Lukanin (@igorlukanin) @janisz Joshua Bronson (@jab) Josh Hawn (@jlhawn) Josh Smith (@Qinusty) Marshall Cottrell (@marshall007) Mike Mintz (@mikemintz) Niklas Hambchen (@nh2) Qian Jin (@QianJin2013) Taylor Murphy (@tayloramurphy) Vladislav Botvin (@darrrk) Adam Grandquist (@grandquista) @bakape Bernardo Santana (@bsantanas) Bheesham Persaud (@bheesham) Christopher Cadieux (@ccadieux) Chuck Bassett (@chuckSMASH) Diney Wankhede (@dineyw23) Heinz Fiedler (@heinzf) Mark Yu (@vafada) Mike Krumlauf (@mjkrumlauf) Nicols Santngelo (@NicoSantangelo) Samuel Volin (@untra) Stefan de Konink (@skinkie) Tommaso (@raspo) -- Released on 2016-03-25 Bug fix release RethinkDB 2.2.6 servers cannot be mixed with servers running RethinkDB 2.2.1 or earlier in the same cluster. No migration is required when upgrading from RethinkDB 2.2.0 or higher. Please read the if you're upgrading from an older version. Fixed two bugs in the changefeed code that caused crashes with an \"Unreachable code\" error in certain edge cases (#5438, #5535) Fixed a `SANITY CHECK FAILED: [d.has()]` error when using the `map` command on a combination of empty and non-empty input streams (#5481) The result of `conn.server()` now includes a `proxy` field (#5485) Changed the connection behavior of proxy servers to avoid repeating \"Rejected a connection from server X since one is open already\" warnings (#5456) The Python driver now supports connecting to a server via IPv6, even when using the async API (asyncio, tornado, twisted) (#5445) Fixed an incompatibility with certain versions of Python that made the driver unable to load the `backports.sslmatchhostname` module (#5470) Fixed a resource leak in the Java driver's `cursor.close()` call (#5448) Cursors in the Java driver now implement the `Closeable` interface (#5468) Fixed a remaining incompatibility with Internet Explorer 10 in the JavaScript driver (#5499) Many thanks to external contributors from the RethinkDB community for helping us ship RethinkDB 2.2.6. In no particular order: Paulo Pires (@pires) Mike Mintz (@mikemintz) -- Released on 2016-02-23 Bug fix release RethinkDB 2.2.5 servers cannot be mixed with servers running RethinkDB 2.2.1 or earlier in the same cluster. No migration is required when upgrading from RethinkDB 2.2.0 or" }, { "data": "Please read the if you're upgrading from an older version. Improved the CPU efficiency of `orderBy` queries on secondary indexes (#5280) Improved the efficiency of geospatial queries on indexes with point values (#5411) Connections in the Java driver are now thread-safe (#5166) Made the JavaScript driver compatible with Internet Explorer 10 (#5067) The Ruby driver now supports nested pseudotypes (#5373) Fixed an issue that caused servers to not connect and/or reconnect properly (#2755) Fixed an issue that caused servers to time out when running queries on secondary indexes with long index keys (#5280) Changefeeds now always emit events for documents leaving or entering the changefeed range (#5205) Fixed a bug in the Java driver that caused null pointer exceptions (#5355) Fixed the `isFeed()` function in the Java driver (#5390, #5400) The `r.now` command now performs arity checking correctly (#5405) Fixed a test failure in the `unit.ClusteringBranch` test (#5182) Many thanks to external contributors from the RethinkDB community for helping us ship RethinkDB 2.2.5. 
In no particular order:
- Mike Mintz (@mikemintz)
- Paulo Pires (@pires)
- Nicolas Viennot (@nviennot)
- Brian Chavez (@bchavez)

-- Released on 2016-02-01

This bug fix release addresses a critical bug in RethinkDB's clustering system that can lead to data loss and invalid query results under certain rare circumstances. The bug can appear if a table is reconfigured during a network partition. We recommend upgrading to this release as soon as possible to avoid data loss.

If you see replicas get stuck in the `transitioning` state during a reconfiguration after upgrading, you can run `.reconfigure({emergencyRepair: '_debug_recommit'})` on the table to allow the reconfiguration to complete (see the short sketch below). Please make sure that the cluster is idle when running this operation, as RethinkDB does not guarantee consistency during the emergency repair.

RethinkDB 2.2.4 servers cannot be mixed with servers running RethinkDB 2.2.1 or earlier in the same cluster. No migration is required when upgrading from RethinkDB 2.2.0 or higher. Please read the RethinkDB 2.2.0 release notes if you're upgrading from an older version.

- Fixed a bug in the clustering system that could lead to data loss, inconsistent reads, and server crashes after reconfiguring a table during incomplete connectivity (#5289, #4949)
- Fixed a segmentation fault that occurred when requesting certain documents from the `stats` system table (#5327)
- Changefeeds on system tables now support `map`, `filter` and related commands (#5241)
- Backtraces are now printed even if the `addr2line` tool is not installed (#5321)
- The Java driver now supports SSL connections thanks to a contribution by @pires (#5284)
- Fixed the "Serialized query" debug output in the Java driver (#5306)
- Fixed an incompatibility of the `rethinkdb import` script with Python 2.6 (#5294)

-- Released on 2016-02-01

Legacy bug fix release. This release maintains full compatibility with RethinkDB 2.1.5, while fixing a critical bug in RethinkDB's clustering system. We recommend installing this version only if upgrading to RethinkDB 2.2.4 is not an option, for example if you depend on a driver that still uses the old protocol buffer client protocol.

- Fixed a bug in the clustering system that could lead to data loss, inconsistent reads, and server crashes after reconfiguring a table during incomplete connectivity (#5289, #4949)

-- Released on 2016-01-11

Bug fix release. RethinkDB 2.2.3 servers cannot be mixed with servers running RethinkDB 2.2.1 or earlier in the same cluster. No migration is required when upgrading from RethinkDB 2.2.0 or higher. Please read the RethinkDB 2.2.0 release notes if you're upgrading from an older version.

- Fixed a bug in the changefeed code that caused crashes with the message `Guarantee failed: [env.has()]` (#5238)
- Fixed a crash in `r.http` when using pagination (#5256)
- Fixed a bug that made changefeeds prevent other changefeeds on the same table from becoming ready (#5247)
- Replaced a call to the deprecated `Object#timeout` function in the Ruby driver (#5232)

-- Released on 2015-12-21

Bug fix release. RethinkDB 2.2.2 servers cannot be mixed with servers running RethinkDB 2.2.1 or earlier in the same cluster. The protocol change was necessary to address correctness issues in the changefeed implementation. No migration is required when upgrading from RethinkDB 2.2.0 or higher. Please read the RethinkDB 2.2.0 release notes if you're upgrading from an older version.
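As a companion to the emergency-repair note in the 2.2.4 entry above, here is a minimal, untested sketch of what that call looks like from the JavaScript driver. The connection `conn` and the database and table names are placeholders, and the `'_debug_recommit'` value is simply the one quoted in the note above; run it only on an idle cluster.

```js
// Sketch only: emergency repair of a table stuck in the `transitioning` state,
// as described in the 2.2.4 notes above. `conn`, `mydb` and `mytable` are placeholders.
var r = require('rethinkdb');

r.db('mydb').table('mytable')
  .reconfigure({emergencyRepair: '_debug_recommit'})
  .run(conn)
  .then(function (result) {
    console.log('Reconfiguration finished:', result);
  })
  .catch(function (err) {
    console.error('Emergency repair failed:', err);
  });
```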
Server Fixed an issue causing `include_initial` changefeeds to miss changes (#5216) Fixed an issue causing `include_initial` changefeeds to stall and never reach the `\"ready\"` state (#5157) Fixed an issue causing `include_initial` changefeeds to emit unexpected initial results with a `null` value (#5153) Improved the efficiency of `skip` in combination with `limit` (#5155) Fixed an issue with determinism checking in geospatial commands (#5130) Fixed an invalid memory access that caused segmentation faults on ARM (#5093) Fixed a crash with \"Unreachable code\" when migrating from versions of RethinkDB older than 1.16 (#5158) Fixed an issue where the server would send an extra response to the client after a cursor completed (#5159) Fixed a build dependency issue with OpenSSL on OS X 10.11 (#4963) Fixed compiler warnings on ARM (#4541) Made the APT repository compatible with APT 1.1 (#5174) Drivers Fixed missing backtraces on `ReQLCompileError` in the JavaScript driver (#4803) Upgraded the version of CoffeeScript used to compile the JavaScript driver in order to avoid errors in strict mode (#5198) Fixed a syntax error warning in the Python driver during installation on older Python versions (#4702) `rethinkdb restore` now waits for tables to be available (#5154) -- Released on 2015-11-16 Bug fix release RethinkDB 2.2.1 is fully compatible with RethinkDB 2.2.0. Please read the if you're upgrading from an older version. Fixed a crash with the message \"Guarantee failed: [foundhashpair]\" when running `getAll` queries (#5085) `rethinkdb export` and `rethinkdb dump` now limit the number of subprocesses to reduce memory consumption (#4809) Fixed a segmentation fault in `orderBy.limit` changefeeds (#5081) Fixed a crash when using `getAll` with illegal keys (#5086) `r.uuid` is now considered a deterministic operation if it is passed a single argument (#5092) Fixed the \"Task was destroyed but it is pending!\" error when using the `asyncio` event loop on Python (#5043) -- Released on 2015-11-12 RethinkDB 2.2 introduces atomic changefeeds. Atomic changefeeds include existing values from the database into the changefeed result, and then atomically transition to streaming updates. Atomic changefeeds make building realtime apps dramatically easier: you can use a single code path to populate your application with initial data, and continue receiving realtime data updates. This release also includes numerous performance and scalability improvements designed to help RethinkDB clusters scale to larger sizes while using fewer resources. Read the for more details. Data files from RethinkDB version 1.16 onward will be automatically migrated. As with any major release, back up your data files before performing the upgrade. If you're upgrading from RethinkDB 1.14.x or 1.15.x, you need to migrate your secondary indexes first. You can do this by following these steps: Install RethinkDB 2.0.5. Update the RethinkDB Python driver (`sudo pip install 'rethinkdb<2.1.0'`). Rebuild your indexes with `rethinkdb index-rebuild`. Afterwards, you can install RethinkDB 2.2 and start it on the existing data files. If you're upgrading directly from RethinkDB 1.13 or earlier, you will need to manually upgrade using `rethinkdb dump`. Changefeeds on `.orderBy.limit` as well as `.get` queries previously provided initial results by default. You now need to include the optional argument `includeInitial: true` to" }, { "data": "to achieve the same behavior. The deprecated protocol buffer driver protocol is no longer supported. 
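For illustration, a minimal sketch of the `includeInitial` optarg mentioned above, using the JavaScript driver; the table name, document key and connection `conn` are placeholders.

```js
// Sketch: a point changefeed that first returns the document's current value
// and then streams subsequent updates (the RethinkDB 2.2 atomic changefeed behavior).
var r = require('rethinkdb');

r.table('scores').get('player1')
  .changes({includeInitial: true})
  .run(conn)
  .then(function (cursor) {
    cursor.each(function (err, change) {
      if (err) throw err;
      // The first result carries the current document; later results are updates.
      console.log(change.new_val);
    });
  });
```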
The newer JSON protocol is now the only supported driver protocol. Older drivers using the deprecated protocol no longer work with RethinkDB 2.2.0. See the official drivers list for up-to-date drivers. If you're using Java, please note that at the time of writing, existing community drivers have not been updated to use the newer JSON protocol. However, an official Java driver is in active development and will be available soon.

Certain argument errors that used to throw `ReqlDriverError` exceptions now throw `ReqlCompileError` exceptions. See the driver documentation for a full list of changes.

RethinkDB 2.2.0 now comes with official packages for Ubuntu 15.10 (Wily Werewolf) and CentOS 7. We no longer provide packages for Ubuntu 10.04 (Lucid Lynx), which has reached end of life.

- Added full support for atomic changefeeds through the `include_initial` optarg (#3579)
- Added a `values` command to obtain the values of an object as an array (#2945)
- Added a `conn.server` command to identify the server for a given connection (#3934)
- Extended `r.uuid` to accept a string and work as a hash function (#4636)

Server:
- Improved the scalability of range queries on sharded tables (#4343)
- Improved the performance of `between` queries on secondary indexes (#4862)
- Reduced the memory overhead for large data sets (#1951)
- Redesigned the internal representation of queries to improve efficiency (#4601)
- Removed the deprecated protocol buffer driver protocol (#4601)
- Improved the construction of secondary indexes to make them resumable and to reduce their impact on any production workload (#4959)
- Improved the performance when using `getAll` with a secondary index in some edge cases (#4948)
- Removed the limit of 1024 concurrent changefeeds on a single connection (#4732)
- Implemented automatically growing coroutine stacks to avoid stack overflows (#4462)
- Optimized the deserialization of network messages to avoid an extra copy (#3734)
- Added a `raft_leader` field to a table's status to expose its current Raft leader (#4902)
- Made the handling of invalid lines in the `'logs'` system table more robust (#4929)

ReQL:
- `indexStatus` now exposes the secondary index function (#3231)
- Added an optarg called `changefeed_queue_size` to specify how many changes the server should buffer on a changefeed before generating an error (#3607)
- Extended `branch` to accept an arbitrary number of conditions and values (#3199)
- Strings can now contain null characters (except in primary keys) (#3163)
- Streams can now be coerced directly to an object (#2802)
- Made `coerceTo('BOOL')` consistent with `branch` (#3133)
- Changefeeds on `filter` and `map` queries involving geospatial terms are now allowed (#4063)
- Extended `or` and `and` to accept zero arguments (#4132)

Web UI:
- The Data Explorer now allows executing only parts of a query by selecting them (#4814)

All drivers:
- Improved the consistency of ReQL error types by throwing `ReqlCompileError` rather than `ReqlDriverError` for certain errors (#4669)

JavaScript driver:
- Added an `eachAsync` method on cursors that behaves like `each` but also returns a promise (#4784)

Python driver:
- Implemented an API to override the default JSON encoder and decoder (#4825, #4818)

Server:
- Fixed a segmentation fault that could happen when disconnecting a server while having open changefeeds (#4972)
- Updated the description of the `--server-name` parameter in `rethinkdb --help` (#4739)
- Fixed a crash with the message "Guarantee failed: [ts->tv_nsec >= 0 && ts->tv_nsec < (1000LL * (1000LL * 1000LL))]" (#4931)
- Fixed a problem where backfill jobs didn't get removed from the `'jobs'` table (#4923)
- Fixed a memory corruption that could trigger a segmentation fault during `getIntersecting` queries (#4937)
- Fixed an issue that could stop data files from RethinkDB 1.13 from migrating properly (#4991)
- Fixed a "Guarantee failed: key for entry_t already exists" crash when rapidly reconnecting servers (#4968)
- Fixed an "Uncaught exception of type interrupted_exc_t" crash (#4977)
- Added a check to catch `r.minval` and `r.maxval` values when writing to the `'_debug_scratch'` system table (#4032)

ReQL:
- Fixed the error message that's generated when passing in a function with the wrong arity (#4189)
- Fixed a regression that caused `r.asc("test")` to not fail as it should (#4951)

JavaScript driver:
- Object keys in `toString` are now properly quoted (#4997)

Many thanks to external contributors from the RethinkDB community for helping us ship RethinkDB 2.2. In no particular order:
- Peter Hollows (@captainpete)
- Zhenchao Li (@fantasticsid)
- Marshall Cottrell (@marshall007)
- Adam Grandquist (@grandquista)
- Ville Immonen (@fson)
- Matt Broadstone (@mbroadst)
- Pritam Baral (@pritambaral)
- Elian Gidoni (@eliangidoni)
- Mike Mintz (@mikemintz)
- Daniel Compton (@danielcompton)
- Vinh Quốc Nguyễn (@kureikain)
- Shayne Hodge (@schodge)
- Alexander Zeillinger (@alexanderzeillinger)
- Ben Gesoff (@bengesoff)
- Dmitriy Lazarev (@wKich)
- Chris Gaudreau (@clessg)
- Paweł Świtkowski (@katafrakt)
- Wang Zuo (@wangzuo)
- Chris Goller (@goller)
- Mateus Craveiro (@mccraveiro)

-- Released on 2015-10-08

Bug fix release. RethinkDB 2.1.5 servers cannot be mixed with servers running RethinkDB 2.1.4 or earlier in the same cluster.

- Fixed a memory corruption bug that caused segmentation faults on some systems (#4917)
- Made the build system compatible with OS X El Capitan (#4602)
- Fixed spurious "Query terminated by `rethinkdb.jobs` table" errors (#4819)
- Fixed an issue that caused changefeeds to keep failing after a table finished reconfiguring (#4838)
- Fixed a race condition that resulted in a crash with the message `std::terminate() called without any exception.` when losing a cluster connection (#4878)
- Fixed a segmentation fault in the `mark_ready()` function that could occur when reconfiguring a table (#4875)
- Fixed a segmentation fault when using changefeeds on `orderBy.limit` queries (#4850)
- Made the Data Explorer handle changefeeds on `orderBy.limit` queries correctly (#4852)
- Fixed a "Branch history is incomplete" crash when reconfiguring a table repeatedly in quick succession (#4866)
- Fixed a problem that caused `indexStatus` to report results for additional indexes that were not specified in its arguments (#4868)
- Fixed a segmentation fault when running RethinkDB on certain ARM systems (#4839)
- Fixed a compilation issue in the UTF-8 unit test with recent versions of Xcode (#4861)
- Fixed an `Assertion failed: [ptr_]` error when reconfiguring tables quickly with a debug-mode binary (#4871)
- Improved the detection of unsupported values in `r.js` functions to avoid a `Guarantee failed: [!key.IsEmpty() && !val.IsEmpty()]` crash in the worker process (#4879)
- Fixed an uninitialized data access issue on shutdown (#4918)
- Improved the performance of `getAll` queries that fetch multiple keys at once (#1526)
- Optimized the distribution of tasks across threads on multi-core servers (#4905)

-- Released on 2015-09-16

Bug fix release. RethinkDB 2.1.4 servers cannot be mixed with servers running RethinkDB 2.1.1 or earlier in the same cluster.

- Fixed a data corruption bug that could occur when deleting documents (#4769)
- The web UI no longer ignores errors during table
configuration (#4811) Added a check in case `reconfigure` is called with a non-existent server tag (#4840) Removed a spurious debug-mode assertion that caused a server crash when trying to write to the `stats` system table (#4837) The `rethinkdb restore` and `rethinkdb import` commands now wait for secondary indexes to become ready before beginning the data import (#4832) -- Released on 2015-09-04 Bug fix release RethinkDB 2.1.3 servers cannot be mixed with servers running RethinkDB 2.1.1 or earlier in the same cluster Fixed a data corruption bug in the b-tree implementation (#4769) Fixed the `ssl` option in the JavaScript driver (#4786) Made the Ruby driver compatible with Ruby on Rails 3.2 (#4753) Added the `backports.sslmatchhostname` library to the Python driver" }, { "data": "(#4683) Changed the update check to use an encrypted https connection (#3988, #4643) Fixed access to `https` sources in `r.http` on OS X (#3112) Fixed an `Unexpected exception` error (#4758) Fixed a `Guarantee failed: [pair.second]` crash that could occur during resharding (#4774) Fixed a bug that caused some queries to not report an error when interrupted (#4762) Added a new `\"debugrecommit\"` recovery option to `emergency_repair` (#4720) Made error reporting in the Python driver compatible with `celery` and `nose` (#4764) Changed the handling of outdated indexes from RethinkDB 1.13 during an import to no longer terminate the server (#4766) Improved the latency when reading from a system table in `r.db('rethinkdb')` while the server is under load (#4773) Improved the parallelism of JSON encoding on the server to utilize multiple CPU cores Refactored JSON decoding in the Python driver to allow the use of custom JSON parsers and to speed up pseudo type conversion (#4585) Improved the prefetching logic in the Python driver to increase the throughput of cursors Changed the Python driver to use a more efficient data structure to store cursor results (#4782) Many thanks to external contributors from the RethinkDB community for helping us ship RethinkDB 2.1.3. In no particular order: Adam Grandquist (@grandquista) ajose01 (@ajose01) Paulius Uza (@pauliusuza) -- Released on 2015-08-25 Bug fix release RethinkDB 2.1.2 servers cannot be mixed with servers running earlier versions in the same cluster Changefeeds on a `get_all` query no longer return initial values. 
This restores the behavior from RethinkDB 2.0 Fixed an issue where writes could be acknowledged before all necessary data was written to disk Restored the 2.0 behavior for changefeeds on `get_all` queries to avoid various issues and incompatibilities Fixed an issue that caused previously migrated tables to be shown as unavailable (#4723) Made outdated secondary index warnings disappear once the problem is resolved (#4664) Made `index_create` atomic to avoid race conditions when multiple indexes were created in quick succession (#4694) Improved how query execution times are reported in the Data Explorer (#4752) Fixed a memory leak in `r.js` (#4663) Fixed the `Branch history is missing pieces` error (#4721) Fixed a race condition causing a crash with `Guarantee failed: [!sendmutex.islocked()]` (#4710) Fixed a bug in the changefeed code that could cause crashes with the message `Guarantee failed: [active()]` (#4678) Fixed various race conditions that could cause crashes if changefeeds were present during resharding (#4735, #4734, #4678) Fixed a race condition causing a crash with `Guarantee failed: [val.has()]` (#4736) Fixed an `Assertion failed` issue when running a debug-mode binary (#4685) Added a workaround for an `eglibc` bug that caused an `unexpected address family` error on startup (#4470) Added precautions to avoid secondary index migration issues in subsequent releases Out-of-memory errors in the server's JSON parser are now correctly reported (#4751) -- Released on 2015-08-25 Bug fix release Added precautions to avoid secondary index migration issues in subsequent releases Fixed a memory leak in `r.js` (#4663) Added a workaround for an `eglibc` bug that caused an `unexpected address family` error on startup (#4470) Fixed a bug in the changefeed code that could cause crashes with the message `Guarantee failed: [active()]` (#4678) Fixed a bug that caused intermittent server crashes with the message `Guarantee failed: [fnid != _null]` in combination with the `r.js` command (#4611) Improved the performance of the `is_empty` term (#4592) -- Released on 2015-08-12 Bug fix release Fixed a problem where after migration, some replicas remained unavailable when reconfiguring a table (#4668) Removed the defunct `--migrate-inconsistent-data` command line" }, { "data": "(#4665) Fixed the slider for setting write durability during table creation in the web UI (#4660) Fixed a race condition in the clustering subsystem (#4670) Improved the handling of error messages in the testing system (#4657) -- Released on 2015-08-11 Release highlights: Automatic failover using a Raft-based protocol More flexible administration for servers and tables Advanced recovery features Read the for more details. Data files from RethinkDB versions 1.14.0 onward will be automatically migrated. As with any major release, back up your data files before performing the upgrade. If you're upgrading directly from RethinkDB 1.13 or earlier, you will need to manually upgrade using `rethinkdb dump`. Note that files from the RethinkDB 2.1.0 beta release are not compatible with this version. This release introduces a new system for dealing with server failures and network partitions based on the Raft consensus algorithm. Previously, unreachable servers had to be manually removed from the cluster in order to restore availability. RethinkDB 2.1 can resolve many cases of availability loss automatically, and keeps the cluster in an administrable state even while servers are missing. 
There are three important scenarios in RethinkDB 2.1 when it comes to restoring the availability of a given table after a server failure:

- The table has three or more replicas, and a majority of the servers that are hosting these replicas are connected. RethinkDB 2.1 automatically elects new primary replicas to replace unavailable servers and restore availability. No manual intervention is required, and data consistency is maintained.
- A majority of the servers for the table are connected, regardless of the number of replicas. The table can be manually reconfigured using the usual commands, and data consistency is always maintained.
- A majority of servers for the table are unavailable. The new `emergency_repair` option to `table.reconfigure` can be used to restore table availability in this case.

To reflect changes in the underlying cluster administration logic, some of the tables in the `rethinkdb` database changed.

Changes to `table_config`:
- Each shard subdocument now has a new field `nonvoting_replicas`, that can be set to a subset of the servers in the `replicas` field.
- `write_acks` must now be either `"single"` or `"majority"`. Custom write ack specifications are no longer supported. Instead, non-voting replicas can be used to set up replicas that do not count towards the write ack requirements.
- Tables that have all of their replicas disconnected are now listed as special documents with an `"error"` field.
- Servers that are disconnected from the cluster are no longer included in the table.
- The new `indexes` field lists the secondary indexes on the given table.

Changes to `table_status` (a short usage sketch appears below):
- The `primary_replica` field is now called `primary_replicas` and has an array of current primary replicas as its value. While under normal circumstances only a single server will be serving as the primary replica for a given shard, there can temporarily be multiple primary replicas during handover or while data is being transferred between servers.
- The possible values of the `state` field now are `"ready"`, `"transitioning"`, `"backfilling"`, `"disconnected"`, `"waiting_for_primary"` and `"waiting_for_quorum"`.
- Servers that are disconnected from the cluster are no longer included in the table.

Changes to `current_issues`:
- The issue types `"table_needs_primary"`, `"data_lost"`, `"write_acks"`, `"server_ghost"` and `"server_disconnected"` can no longer occur.
- A new issue type `"table_availability"` was added and appears whenever a table is missing at least one server. Note that no issue is generated if a server which is not hosting any replicas disconnects.

Changes to `cluster_config`:
- A new document with the `id` `"heartbeat"` allows configuring the heartbeat timeout for intracluster connections.

RethinkDB 2.1 introduces new error types that allow you to handle different error classes separately in your application if you need to. You can find the full list of new error types in the documentation. As part of this change, ReQL error types now use the `Reql` name prefix instead of `Rql` (for example `ReqlRuntimeError` instead of `RqlRuntimeError`). The old type names are still supported in our drivers for backwards compatibility.

- `.split('')` now treats the input as UTF-8 instead of an array of bytes.
- `null` values in compound indexes are no longer discarded.
- The new `read_mode="outdated"` optional argument replaces `use_outdated=True`.

The older protocol-buffer-based client protocol is deprecated in this release. RethinkDB 2.2 will no longer support clients that still use it. All "current" drivers listed on the drivers page use the new JSON-based protocol and will continue to work with RethinkDB 2.2.

Server:
- Added automatic failover and semi-lossless rebalance based on Raft (#223)
- Backfills are now interruptible and reversible (#3886, #3885)
- `table.reconfigure` now works even if some servers are disconnected (#3913)
- Replicas can now be marked as voting or non-voting (#3891)
- Added an emergency repair feature to restore table availability if consensus is lost (#3893)
- Reads can now be made against a majority of replicas (#3895)
- Added an emergency read mode that extracts data directly from a given replica for data recovery purposes (#4388)
- Servers with no responsibilities can now be removed from clusters without raising an issue (#1790)
- Made the intracluster heartbeat timeout configurable (#4449)

ReQL:
- Added `ceil`, `floor` and `round` (#866)
- Extended the ReQL error type hierarchy to be more fine-grained (#4544)

All drivers:
- Added driver-side support for SSL connections and CA verification (#4075, #4076, …)

Python driver:
- Added Python 3 asyncio support (#4071)
- Added Twisted support (#4096)
- `rethinkdb export` now supports the `--delimiter` option for CSV files (#3916)

Server:
- Improved the handling of cluster membership and removal of servers (#3262, #3897, …)
- Changed the formatting of the `table_status` system table (#3882, #4196)
- Added an `indexes` field to the `table_config` system table (#4525)
- Improved efficiency by making `datum_t` movable (#4056)
- ReQL backtraces are now faster and smaller (#2900)
- Replaced cJSON with rapidjson (#3844)
- Failed meta operations are now transparently retried (#4199)
- Added more detailed logging of cluster events (#3878)
- Improved unsaved data limit throttling to increase write performance (#4441)
- Improved the performance of the `is_empty` term (#4592)
- Small backfills are now prioritized to make tables available more quickly after a server restart (#4383)
- Reduced the memory requirements when backfilling large documents (#4474)
- Changefeeds using the `squash` option now send batches early if the changefeed queue gets too full (#3942)

ReQL:
- `.split('')` is now UTF-8 aware (#2518)
- Improved the behaviour of compound index values containing `null` (#4146)
- Errors now distinguish failed writes from indeterminate writes (#4296)
- `r.union` is now a top-level term (#4030)
- `condition.branch(...)` now works just like `r.branch(condition, ...)` (#4438)
- Improved the detection of non-atomic `update` and `replace` arguments (#4582)

Web UI:
- Added new dependency and namespace management system to the web UI (#3465, #3660)
- Improved the information visible on the dashboard (#4461)
- Improved layout of server and replica assignment lists (#4372)
- Updated to reflect the new clustering features and changes (#4283, #4330, #4288, ...)
JavaScript driver The version of bluebird was updated to 2.9.32 (#4178, #4475) Improved compatibility with Internet Explorer 10 (#4534) TCP keepalive is now enabled for all connections (#4572) Python driver Added a new `--max-document-size` option to the `rethinkdb import` script to handle very large JSON documents (#4452) Added an `r.version` property (#3100) TCP keepalive is now enabled for all connections (#4572) Ruby driver TCP keepalive is now enabled for all connections (#4572) `timeofdate` and `date` now respect" }, { "data": "(#4149) Added code to work around a bug in some versions of GLIBC and EGLIBC (#4470) Updated the OS X uninstall script to avoid spurious error messages (#3773) Fixed a starvation issue with squashing changefeeds (#3903) `has_fields` now returns a selection when called on a table (#2609) Fixed a bug that caused intermittent server crashes with the message `Guarantee failed: [fnid != _null]` in combination with the `r.js` command (#4611) Web UI Fixed an issue in the table list that caused it to get stuck showing \"Loading tables...\" if no database existed (#4464) Fixed the tick marks in the shard distribution graph (#4294) Python driver Fixed a missing argument error (#4402) JavaScript driver Made the handling of the `db` optional argument to `run` consistent with the Ruby and Python drivers (#4347) Fixed a problem that could cause connections to not be closed correctly (#4526) Ruby driver Made the EventMachine API raise an error when a connection is closed while handlers are active (#4626) Many thanks to external contributors from the RethinkDB community for helping us ship RethinkDB 2.1. In no particular order: Thomas Kluyver (@takluyver) Jonathan Phillips (@jipperinbham) Yohan Graterol (@yograterol) Adam Grandquist (@grandquista) Peter Hamilton (@hamiltop) Marshall Cottrell (@marshall007) Elias Levy (@eliaslevy) Ian Beringer (@ianberinger) Jason Dobry (@jmdobry) Wankai Zhang (@wankai) Elifarley Cruz (@elifarley) Brandon Mills (@btmills) Daniel Compton (@danielcompton) Ed Costello (@epc) Lowe Thiderman (@thiderman) Andy Wilson (@wilsaj) Nicolas Viennot (@nviennot) bnosrat (@bnosrat) Mike Mintz (@mikemintz) Lahfa Ryan (@raitobezarius) Sebastien Diaz (@sebadiaz) -- Released on 2015-07-08 Bug fix release Fixed the version number used by the JavaScript driver (#4436) Fixed a bug that caused crashes with a \"Guarantee failed: [stop]\" error (#4430) Fixed a latency issue when processing indexed `distinct` queries over low-cardinality data sets (#4362) Changed the implementation of compile time assertions (#4346) Changed the Data Explorer to render empty results more clearly (#4110) Fixed a linking issue on ARM (#4064) Improved the message showing the query execution time in the Data Explorer (#3454, #3927) Fixed an error that happened when calling `info` on an ordered table stream (#4242) Fixed a bug that caused an error to be thrown for certain streams in the Data Explorer (#4242) Increased the coroutine stack safety buffer to detect stack overflows in optarg processing (#4473) -- Released on 2015-06-10 Bug fix release Fixed a bug that broke autocompletion in the Data Explorer (#4261) No longer crash for certain types of stack overflows during query execution (#2639) No longer crash when returning a function from `r.js` (#4190) Fixed a race condition when closing cursors in the JavaScript driver (#4240) Fixed a race condition when closing connections in the JavaScript driver (#4250) Added support for building with GCC 5.1 (#4264) Improved handling of coroutine 
stack overflows on OS X (#4299)
- Removed an invalid assertion in the server (#4313)

-- Released on 2015-05-22

Bug fix release.
- Fixed a "duplicate token" error in the web UI that happened with certain browsers (#4174)
- Fixed a cross site request forgery vulnerability in the HTTP admin interface (#2018)
- Fixed the EventEmitter interface in the JavaScript driver (#4192)
- Fixed a problem with the RDBInterrupt.InsertOp unit test in some compilation modes (#4038)
- Added packages for Ubuntu 15.04 (#4123)
- Added a `return_changes: 'always'` option to restore the `return_changes` behavior from before 2.0.0 (#4068)
- Fixed a bug with `return_changes` where it would populate `changes` despite an error occurring (#4208)
- Fixed a performance regression when calling `get_all` with many keys (#4218)
- Added support for using `r.row` with the `contains` command in the JavaScript driver (#4125)

-- Released on 2015-04-20

Bug fix release.
- Fixed a regression in the backup scripts that detected the server version incorrectly. (#3706)
- Fixed a bug in the cache balancer that could degrade performance (#4066)

-- Released on 2015-04-14

Release highlights:
- Support for attaching a changefeed to the `get_all` and `union` commands
- Improved support for asynchronous queries
- The first production-ready release of RethinkDB

Read the release blog post for more details.

Data files from RethinkDB versions 1.13.0 onward will be automatically migrated to version 2.0. As with any major release, back up your data files before performing the upgrade.

IEEE 754 floating point numbers distinguish between negative (-0) and positive (+0) zero. The following information is only relevant if you are storing negative zero values in your documents. We expect very few users to be affected by this change. ReQL compares -0 and +0 as equal in accordance with IEEE 754. In previous versions of RethinkDB, -0 and +0 were however treated as distinct values in primary and secondary indexes. This could lead to inconsistent behavior and wrong query results in some rare cases. Starting with RethinkDB 2.0, -0 and +0 are indexed as equal values. Secondary indexes can be migrated using the `rethinkdb index-rebuild` utility.

If any of your documents have negative zero values in their primary keys, those documents will become partially inaccessible in RethinkDB 2.0. You will need to re-import the affected tables using the `rethinkdb dump` and `rethinkdb restore` commands. See the article "[Back up your data][backup-docs]" for more information. If you are unsure if any of your documents are affected, you can run `python -m rethinkdb.negative_zero_check` after upgrading both the server and Python driver. See the output of `python -m rethinkdb.negative_zero_check --help` for additional options.

- `between` no longer accepts `null` bounds. The new `r.minval` and `r.maxval` can be used instead.
- The `any` and `all` commands have been removed. The `or` and `and` commands can be used instead.
- `indexes_of` has been renamed to `offsets_of`.
- The `squash` argument to `changes` now defaults to `false`.
- The type hierarchy for exception types in the Python driver changed. All exceptions including `RqlDriverError` now inherit from the `RqlError` type. `RqlRuntimeError`, `RqlCompileError` and `RqlClientError` additionally inherit from the new `RqlQueryError` type.

Overall:
- Reached a production-ready state (#1174)

ReQL:
- Added support for changefeeds on `get_all` and `union` queries (#3642)
- `between` no longer accepts `null` as a bound.
The new `r.minval` and `r.maxval` can be used instead (#1023) Added support for getting the state of a changefeed using the new `include_states` optarg to `changes` (#3709) Drivers Added support for non-blocking `cursor.next` (#3529) Added support for executing multiple queries in parallel on a single connection (#3754) Consolidated the return types and use the new `ResponseNotes` field to convey extra information (#3715) Python driver Added an optional script that warns for documents with negative zero in a primary key (#3637) Added an asynchronous API based on Tornado (#2622) Ruby driver Added an asynchronous API based on EventMachine (#2622) Server Report open cursors as a single entry in the jobs table (#3662) Timestamps are no longer sent between servers in `batchspec_t` (#2671) Some expensive changefeed checks are no longer performed in release mode (#3656) Include the remote port number in the heartbeat timeout message (#2891) Improved the ordering and throttling of reads and writes (#1606) Limit the number of documents per write batch to reduce the impact of large writes on other queries (#3806) Execute multiple queries in parallel on a single connection (#3296) Improved the performance of sending responses (#3744) Immediately send back an empty first batch when the result is a changefeed (#3852) Simplified the `multi_throttling` infrastructure (#4021) The server now reports handshake errors to client drivers" }, { "data": "(#4011) Set `TCP_NODELAY` in the Python and Ruby driver to avoid delays in combination with `noreply` (#3998) Web UI Added a configurable limit for the results per page in the Data Explorer (#3910) Added an \"add table\" button to each database (#3522) ReQL `table.rebalance` with insufficient data is no longer an error (#3679) Renamed `indexesof` to `offsetsof` to avoid confusion with secondary indexes (#3265) Removed `any` and `all` in favor of `or` and `and` (#1581) Trivial changes are filtered out from `return_changes` (#3697) Reduced the size of profiles (#3218) Changefeeds are no longer squashed by default (#3904) JavaScript driver Added an upper bound to the bluebird dependency (#3823) Ruby driver Added a `timeout` option to `r.connect` (#1666) Improved the code style (#3900, #3901, #3906) Strings are now allowed as keys in the config options (#3905) Build Upgraded to a more recent version of V8 and dropped support for out-of-tree V8 (#3472) Added support for building with Python 3 (#3731) Packaging Got rid of the outdated bash completion script (#719) Allow installing RethinkDB in 32-bit OS X on a 64-bit processor (#1595) Tests Increased the number of retries in the `RDBBtree` tests to avoid false positives (#3805) Server Fixed a race condition that could be caused by concurrent queries (#3766) Deleted servers and tables are no longer counted during version checks (#3692) Made JSON parsing more strict (#3810) Fixed a bug that could cause the server to crash when killed (#3792) Databases can no longer be renamed to \"rethinkdb\" (#3858) Return an initial value for point changefeeds on system tables (#3723) Improved the handling of negative zero (#3637) Correctly abort `order_by.limit` changefeeds when a table become unavailable (#3932) Do not unlink files early to avoid crashing in virtual environments (#3791) Fallback to TCP4 when binding sockets (#4000) No longer crash when the data files are in a VirtualBox shared folder (#3791) ReQL Fixed the behavior of point changefeeds on system tables (#3944) `noreplyWait` no longer waits for non-`noreply` queries 
(#3812) Initial values for `order_by` changefeeds are now returned in order (#3993) Reduced the size of profiles when deleting documents (#3218) Web UI Fixed a bug that caused the status icon to be green when a table was unavailable (#3500) Fixed a bug that truncated labels in the performance graph (#3751) Correctly handle the escape key in modal dialogs (#3872) Fixed a bug that caused an `InternalError` when loading large tables (#3873) Fixed a bug that caused the Data Explorer to break when reading older data from `localStorage` (#3935) Fixed a bug that caused autocompletion to fail in certain cases (#3143) Python driver Fixed `rethinkdb export` compatibility between Python 2 and Python 3 (#3911) Fixed a bug that caused `rethinkdb export` to hang when certain errors occur (#4005) JavaScript driver Fixed a bug that caused `cursor.each` to fail with an exception (#3826) Fixed a bug that caused connection errors to be discarded (#3733) Fixed a bug that could be triggered by calling `close` twice (#4017) Fixed a bug in `feed.close` (#3967) Ruby driver Fixed a bug that caused failures when using JRuby (#3795) Signals are now handled correctly (#4029) Fixed a bug in the arity check (#3968) Build Fetching Browserify during the build process is now more reliable (#4009) Many thanks to external contributors from the RethinkDB community for helping us ship RethinkDB" }, { "data": "In no particular order: Andrey Deryabin (@aderyabin) Krishna Narasimhan (@krishnanm86) Elian Gidoni (@eliangidoni) Sherzod Kuchkarov (@tundrax) Jason Dobry (@jmdobry) Justin Mealey (@justinmealey) Jonathan Ong (@jonathanong) Andrey Deryabin (@aderyabin) Angelo Ashmore (@angeloashmore) Bill Barsch (@billbarsch) Ed Costello (@epc) Ilya Radchenko (@knownasilya) Kai Curry (@webmasterkai) Loring Dodge (@loringdodge) Mike Marcacci (@mike-marcacci) Param Aggarwal (@paramaggarwal) Tinco Andringa (@tinco) Armen Filipetyan (@armenfilipetyan) Andrei Horak (@linkyndy) Shirow Miura (@sharow) -- Released on 2015-03-26 Bug fix update. Fixed a bug that could cause a crash when reading from a secondary index in some rare circumstances (#3976) Fixed a bug that could cause a connection to hang indefinitely on OS X (#3954) Fixed `rethinkdb export` compatibility between Python 2 and Python 3 (#3911) Heartbeat timeout messages now include the remote port number (#2891) Python driver: patched to work in PyPy (#3969) Python driver: fixed an \"Unterminated string\" error during `rethinkdb restore` (#3859) JavaScript driver: fixed a bug that caused `cursor.each` to fail with an exception (#3826) JavaScript driver: fixed a bug that caused connection errors to be discarded (#3733) Ruby driver: fixed a bug that caused failures when using JRuby (#3795) Sherzod Kuchkarov (@tundrax) Elian Gidoni (@eliangidoni) -- Released on 2015-02-16 Bug fix update. Fixed a bug in `r.range` that caused query failures (#3767) Fixed a race condition in the implementation of `.order_by.limit.changes` (#3765) Fixed a build error that caused very slow `r.js` performance (#3757) Removed spurious comma in the Data Explorer (#3730) -- Released on 2015-02-12 Bug fix update. 
- Write a message to the log every time a file is deleted (#1780)
- Fixed `rethinkdb dump` and other backup scripts to correctly detect the server version (#3706)
- Changed the output of `rethinkdb dump` to clarify that indexes are being saved (#3708)
- Fixed unbounded memory consumption when using the official OS X package or when building with boost 1.56.0 or higher (#3712)
- Fixed the `written_docs_total` field of `rethinkdb.stats` (#3713)
- Fixed a bug that caused the web UI to hang when creating or deleting indexes (#3717, #3721)
- Fixed rounding of document counts in the web UI (#3722)
- Fixed a bug that broke the `-f` flag of `rethinkdb import` (#3728)
- Fixed a bug that prevented the web UI from loading data properly (#3729)
- Fixed a bug that caused RethinkDB to refuse to link with jemalloc dynamically (#3741)
- Fixed an uncaught exception in the handling of `r.js` (#3745)

-- Released on 2015-01-29

The highlights of this release are:
- A new administration API
- Changefeeds on complex queries
- Numerous improvements and enhancements throughout RethinkDB

Read the release blog post for more details.

Data files from RethinkDB versions 1.13.0 onward will be automatically migrated to version 1.16.x. As with any major release, back up your data files before performing the upgrade. If you are upgrading from a release earlier than 1.13.0, follow the migration instructions before upgrading.

Secondary indexes now use a new format; old indexes will continue to work, but you should rebuild indexes after upgrading to 1.16.x. A warning about outdated indexes will be issued on startup. Indexes can be migrated to the new format with the `rethinkdb index-rebuild` utility. Consult the documentation for more information.

The abstraction of datacenters has been replaced by server tags. Existing datacenter assignments will be converted to server tags automatically.

The `tableCreate`, `tableDrop`, `dbCreate` and `dbDrop` terms have a new set of return values. The previous return values `created` and `dropped` have been renamed to `tables_created` / `dbs_created` and `tables_dropped` / `dbs_dropped` respectively. The terms now additionally return a `config_changes` field. Consult the API documentation for these commands for details.

Changefeeds on a table now combine multiple changes to the same document into a single notification if they happen rapidly. You can turn this behavior off by passing the `squash: false` optional argument (see the `changes` documentation for details, and the short sketch below).

Strings passed to ReQL are now rejected if they are not valid UTF-8. Non UTF-8 conformant data can be stored as binary data instead.
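To make the squashing behaviour above concrete, here is a minimal sketch with the JavaScript driver; the table name and the connection `conn` are placeholders.

```js
// Sketch: open a changefeed with squashing disabled, so every individual write
// is delivered as its own change notification instead of being combined.
var r = require('rethinkdb');

r.table('events')
  .changes({squash: false})
  .run(conn)
  .then(function (cursor) {
    cursor.each(function (err, change) {
      if (err) throw err;
      console.log('old:', change.old_val, 'new:', change.new_val);
    });
  });
```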
ReQL admin: a new cluster management and monitoring API (#2169) Added a special system database named `rethinkdb` The `table_config` table allows changing a table's configuration (#2870) The `server_config` table allows listing and managing servers in the cluster (#2873) The `db_config` table allows listing and renaming databases (#151, #2871) The `cluster_config` table contains cluster-wide settings (#2924) The `table_status` table displays each table's availability and replication status (#2983, #3269) The `server_status` table contains information about the servers in the cluster (#2923, #3270) The `current_issues` table lists issues affecting the cluster and suggests solutions (#2864, #3258) The `jobs` table lists running queries and background tasks (#3115) The `stats` table contains real-time statistics about the server (#2885) The `logs` table provides access to the server logs (#2884) The `identifierFormat` optional argument to `table` switches how databases, tables and servers are referenced in the system tables. Can be either \"name\" or \"uuid\" (#3266) Added hidden debug tables (`debugtablestatus`, `debug_stats`). These tables are subject to change and not part of the official administration interface (#2901, #3385) Improved cluster management Servers can now be associated with multiple tags using the `--server-tag` flag or by updating the `server_config` table (#2856) Removed the datacenter abstraction and changed the arguments to `tableCreate` (#2876) Removed the `rethinkdb admin` command line interface Added a `reconfigure` command to change the replication and sharding settings of a table (#2932) Added a `rebalance` command to even out the size of a table's shards (#2981) Added a `config` command for tables and databases as an alias for the corresponding row in `dbconfig` or `tableconfig` Added a `status` command for tables as an alias for the corresponding row in `table_status` Most of the `/ajax` endpoints have been removed. Their functionality has been moved to the system tables (#2878, #2879) The stats now contains the number of client connections (#2989) Added more information to the return value of `dbcreate`, `dbdrop`, `tablecreate` and `tabledrop` (#3001) Changed how `durability` and `write_acks` are configured (#3066) The cache size can now be changed at runtime (#3166) Improved the scalability for large table creation and reconfiguration in large clusters (#3198) Added a new UI for table configuration (#3229) Empty tables can now be sharded (#3271, #1679) ReQL Added `r.range` which generates all numbers from a given range (#875) Enforce UTF-8 encoding on input strings (#1181) Added a `wait` command which waits for a table to be ready (#2259) Added `toJsonString` which converts a datum into a JSON string (#2513) Turned `map` into a variadic function for mapping over multiple sequences in parallel (#2574) Added a prefix version of `map` (#3321) Added an optional `squash` argument to the `changes` command, which lets the server combine multiple changes to the same document (defaults to `true`) (#2726, #3558) `min` and `max` now use an index when passed the new `index` parameter (#2974, #2981) It is now possible to get changefeeds for more types of queries. 
`changes` can now be chained onto: Ranges generated with `between` (#3232) Single documents with `get` Subsets with `filter` Sub-documents and modified documents with `map` and other commands that are stream-polymorphic such as `merge` Certain reductions such as `min` and `max` Top scoring documents with `orderBy` and `limit` Combinations of the above, such as `between` followed by `map` Server Made buffered IO the default, added a `--direct-io` flag to enable direct IO and deprecated the `--no-direct-io` flag (#3391) Python driver `rethinkdb export` now exports secondary index information and `rethinkdb import` re-creates those indexes unless it is given the `--no-secondary-indexes`" }, { "data": "(#3484) Web UI The web assets are now compiled into the `rethinkdb` executable (#1093) `getAll` queries on secondary indexes are now shown in the performance graph (#2379) Added `getField` to the autocompletion in the Data Explorer (#2624) Added live updates for changefeeds in the Data Explorer (#2643) Reduced the amount of data transferred to and processed by the browser by the dashboard (#2786) The database view was removed (#3491) Added a secondary index monitor to the dashboard (#3493) Server Removed code used to support outdated tests (#1524) Improved implementation of erase range operations (#2034) `jemalloc` is now used by default instead of `tcmalloc`, which solves some memory inflation (#2279) Replaced vector clocks with automatic timestamp-based conflict resolution (#2784) No longer complain when `stderr` can't be flushed (#2790) Replaced uses of the word \"machine\" to use \"server\" instead. The old `--machine-name` flag is now `--server-name` (#2877, #3254) The server now prints its own name on startup (#2992) Adjusted the formatting of log levels, no longer print `info:` lines to stderr and added a `notice` log level (#3040) Added a `--no-default-bind` option that prevents the server from listening on loopback addresses (#3154) Lower the CPU load of an idle server (#3185) The OS disk cache is now ignored when calculating available memory (#3193) `kqueue` is now used instead of `poll` for events on OS X for better performance (#3403) Tables now become available faster and table metadata takes less space on disk (#3463) Queries that perform multiple HTTP requests now share the same cookies (#3469) Made resolving of ties in secondary indexes more consistent (#3550) The server now calls home to RethinkDB HQ to check for newer versions and send anonymized statistics. 
This can be turned off with the new `--no-update-check` flag (#3170) Testing The ReQL tests can now run in parallel (#2305, #2672) The Ruby driver is now tested against different versions of Ruby (#2526) Relaxed floating point comparisons to account for rounding issues (#2912) No longer depend on external HTTP servers (#3047) Switched to using the new cluster management features in the test suite (#3398, #3401) ReQL Allow querying the version of the server (#2698) Improved the error message when pseudo-types are used as objects (#2766) The names of types returned by `typeOf` and `info` now match (#3059) No longer silently ignore unknown global optional arguments (#2052) Improved handling of socket, protocol and cursor errors (#3207) Added an `isOpen` method to connections (#2113) `info` on tables now works when `useOutdated` is true (#3355) `info` on tables now includes the table id (#3358) Driver protocol Made internal names of commands more consistent (#3147) Added the `SUCCESS_ATOM_FEED` return type, used when returning changes for a single document (#3389) JavaScript driver Upgrade to bluebird 2 (#2973) Python driver Added a `next` method for cursors (#3042) `r.expr` now accepts any iterable or mapping (#3146) `rethinkdb export` now avoids traversing each table twice by using an estimated document count (#3483) Build Updated code to build with post-Yosemite versions of Xcode (#3219) Fetch Curl 7.40.0 when not using the system-installed one (#3540) Packaging Added `procps` as a dependency in the Debian packages (#3404) Now depend on `jemalloc` instead of `gperftools` (#3419) Tests Fixed the `TimerTest` test (#549) Always use the in-tree driver for testing (#711) Server Some startup log messages are now consolidated into a single log entry (#925) Fixed continuation of partially-interrupted vectored I/O (#2665) Fixed an issue in the log file parser (#3694) ReQL Changed the type of `between` queries from `TABLE` to the more correct `TABLE_SLICE` (#1728) Web UI Use a real fixed width font for backtraces to make them align correctly (#2065) No longer show empty batches when more data is available (#3432) Python driver Correctly handle Unicode characters in error messages (#3051) JavaScript driver Added support for passing batch configuration arguments to `run` (#3161) Ruby driver Optional arguments can now be passed to `order_by` (#3221) Many thanks to external contributors from the RethinkDB community for helping us ship RethinkDB 1.16. In no particular order: Adam Grandquist (@grandquista) Ilya Radchenko (@knownasilya) Gianluca Ciccarelli (@sturmer) Alexis Okuwa (@wojons) Mike Ma (@cosql) Patrick Stapleton (@gdi2290) Mike Marcacci (@mike-marcacci) Andrei Horak (@linkyndy) Param Aggarwal (@paramaggarwal) Brandon Zylstra (@brandondrew) Ed Costello (@epc) Alessio Basso (@alexdown) Benjamin Goodger (@goodgerster) Vinh Quc Nguyn (@kureikain) -- Released on 2015-01-08 Bug fix update.
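To make the 1.16 administration API and changefeed options described above concrete, here is a minimal sketch using the Python driver. The database, table and index names (`test`, `users`, `scores`, `points`) are hypothetical, and the exact option names should be checked against the driver documentation for your version:

```python
import rethinkdb as r  # classic RethinkDB Python driver: pip install rethinkdb

conn = r.connect(host='localhost', port=28015)

# The new `rethinkdb` system database: one row per table with its shard layout.
for row in r.db('rethinkdb').table('table_config').run(conn):
    print(row['db'], row['name'], len(row['shards']), 'shard(s)')

# Reconfigure a table's sharding/replication, then even out its shards.
print(r.db('test').table('users').reconfigure(shards=2, replicas=3).run(conn))
print(r.db('test').table('users').rebalance().run(conn))

# `config` and `status` are aliases for the corresponding system-table rows.
print(r.db('test').table('users').config().run(conn))
print(r.db('test').table('users').status().run(conn)['status'])

# Changefeeds can now be chained onto more query types, e.g. a top-N query;
# squash=True lets the server combine rapid changes to the same document.
feed = (r.db('test').table('scores')
        .order_by(index=r.desc('points'))
        .limit(3)
        .changes(squash=True)
        .run(conn))
for change in feed:  # blocks, yielding {'old_val': ..., 'new_val': ...} documents
    print(change)
```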
Fixed a bug that caused the endpoints of a reversed range to not be correctly included or excluded (#3449) Fixed the `reverse_iterator` implementation for leaf nodes (#3446) Fixed a bug that could cause a bad ordering of secondary indexes for rows with different primary key sizes (#3444) Fixed a bug that could cause a crash under high load when using changefeeds (#3393) Fixed a bug that made it impossible to chain `between` and `distinct` (#3346) Changed some calls to avoid passing `NULL` to `memcpy` (#3317) Fixed the installer artwork on OS X Yosemite (#3253) Changed the version scheme for the JavaScript driver to avoid misuse of pre-release numbers (#3281) Fixed a bug that could cause `rethinkdb import` and `rethinkdb export` to hang (#3458) Released on 2014-11-07 Bug fix update. Added packages for Ubuntu 14.10 "Utopic Unicorn" (#3237) Fixed a bug with memory handling in S2 (#3201) Fixed a bug handling paged results in the Data Explorer (#3111) Fixed a bug that caused a crash on exit if a joined server with an open changefeed crashed (#3038) Fixed a bug that caused a crash when unsharding discarded more rows than expected when batching results (#3264) Fixed a bug that could lead to crashes when changefeeds were frequently registered and unregistered (#3205) Changed the `r.point` constructor to be deterministic, allowing it to be used in secondary index functions (#3287) Fixed an incompatibility problem between Python 3.4 and the `import` command (#3259) Fixed a buffer alignment issue with `object_buffer_t` data (#3300) Released on 2014-10-07 Bug fix update. Fixed a bug where tables were always created with hard durability, regardless of the `durability` option (#3128) Fixed a bug that caused HTTPS access with `r.http` to fail under OS X (#3112) Fixed a bug in the Python driver that caused pickling/unpickling of time objects to fail (#3024) Changed the Data Explorer autocomplete to not override Ctrl+Tab on Firefox (#2959) Fixed a bug that caused a crash when a non-directory file was specified as RethinkDB's startup directory (#3036) Added native packages for Debian (#3125, #3107) Fixed a compilation error on ARM CPUs (#3116) Support building with Protobuf 2.6.0 (#3137) Released on 2014-09-23 The highlights of this release are support for geospatial objects and queries, and significant performance upgrades relating to datum serialization (twice as fast for many analytical workloads). Read the release announcement for more details. Only documents modified after upgrading to 1.15 will receive these performance gains. You may "upgrade" older documents by performing any write that modifies their contents. For example, you could add a dummy field to all the documents in a table and then remove it:

```js
r.table('tablename').update({dummy_field: true})
r.table('tablename').replace(r.row.without('dummy_field'))
```

There are no API-breaking changes in this release.
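A minimal sketch of the geospatial support highlighted above, assuming the Python driver; the `places` table, the field names and the coordinates are hypothetical:

```python
import rethinkdb as r  # classic RethinkDB Python driver

conn = r.connect(host='localhost', port=28015)
places = r.db('test').table('places')

# Create a geo index on the `location` field and wait for it to be ready.
places.index_create('location', geo=True).run(conn)
places.index_wait('location').run(conn)

# r.point takes (longitude, latitude).
places.insert([
    {'name': 'cafe',    'location': r.point(-122.423246, 37.779388)},
    {'name': 'library', 'location': r.point(-122.422876, 37.777128)},
]).run(conn)

# Nearest places to a point, within 500 meters.
nearest = places.get_nearest(r.point(-122.4231, 37.7785),
                             index='location', max_dist=500).run(conn)

# Places whose location falls inside a 1 km circle.
inside = places.get_intersecting(
    r.circle(r.point(-122.4231, 37.7785), 1000), index='location').run(conn)
```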
ReQL Added geospatial query and index support (#2571, #2847, #2851, #2854, #2859, Added `r.uuid` for generating unique" }, { "data": "(#2063) Added a `BRACKET` term to the query language, to improve the bracket operator in client drivers (#1179) Server Significantly improved performance of read operations by lazily deserializing data: ~1.15x faster for simple queries, ~2x faster for many analytical queries, and ~50x for count queries (#1915, #2244, #2652) Removed the option for `datum_t` to be uninitialized (#2985) Improved the performance of `zip` by replacing the `zipdatumstream_t` type with a transformer function (#2654) Clarified error messages when the data in the selection could not be printed (#972) Improved performance of `r.match` by adding regex caching and a framework for generic query-based caches (#2196) Testing Removed unnecessary files from `test/common` (#2829) Changed all tests to run with `--cache-size` parameter (#2816) Python driver Modified `r.row` to provide an error message on an attempt to call it like a function (#2960) JavaScript driver Errors thrown by the driver now have a stack trace (#3087) ReQL Fixed a bug for `r.literal` corner cases (#2710) Improved error message when `r.literal` is used in an invalid context (#1600) Web UI Fixed a bug that caused selection in the query text area to become unresponsive with large queries (#3043) Fixed a bug that caused \"more data is available\" to be displayed incorrectly in certain cases (#3037) Server Fixed a display bug with log entries in the web UI (#2627) Fixed a bug where Makefile miscounted dependencies when `ql2.proto` was changed (#2965) Fixed a bug where the connection authorization key was improperly encoded (#2952) Fixed an uninitialized variable warning during builds (#2977) Testing Fixed various bugs in tests (#2940, #2887, #2844, #2837, #2603, #2793) JavaScript driver Fixed a bug in the JavaScript driver that caused backtraces to not print properly (#2793) Python driver Replaced `or isinstance` with a tuple of types (#2968) Removed unused `kwarg` assignments (#2969) Ruby driver Fixed a bug where `default_db`, `host` and `port` were not exposed in the Connection object (#2849) Many thanks to external contributors from the RethinkDB community for helping us ship RethinkDB 1.15. In no particular order: Sathyanarayanan Gunasekaran (@gsathya) Adam Grandquist (@grandquista) Duane Johnson (@canadaduane) Colin Mattson(@cmattson) Justas Brazauskas (@jutaz) Matt Stith (@stith) Dmitry Minkovsky (@dminkovsky) Released on 2014-09-09 Bug fix update. 
Fixed a bug that caused `rethinkdb index-rebuild` to fail with auth keys (#2970) Changed `rethinkdb export` to specify `binary_format='raw'` to work with binary data correctly (#2964) Fixed `rethinkdb import` to handle Unicode in CSV files (#2963) Updated the Valgrind suppressions file to fix false "uninitialized value" warnings (#2961) Fixed a bug that caused duplicate perfmons when recreating an index (#2951) Fixed a bug that could cause `r.http` to crash when used with pagination and `coerce_to` (#2947) Improved the printing of binary data in Python and Ruby REPLs (#2942) Fixed a bug that could corrupt databases larger than 4GB on 32-bit systems (#2928) Fixed permission issues with the web admin interface files (#2927) Fixed a bug that caused a crash due to incorrect error handling in profile mode (#2718) Fixed a bug that caused a crash when existing servers tried to connect to a new server with an unresolvable hostname (#2708) Released on 2014-08-20 The highlights of this release are: Support for storing binary data Seamless database migration Support for Python 3 Read the release announcement for more details. Data files from RethinkDB versions 1.13.0 onward will be automatically migrated to version 1.14.x. As with any major release, back up your data files before performing the upgrade. If you are upgrading from a release earlier than 1.13.0, follow the migration instructions before upgrading. Secondary indexes now use a new format; old indexes will continue to work, but you should rebuild indexes after upgrading to 1.14. A warning about outdated indexes will be issued on startup. Indexes can be migrated to the new format with the `rethinkdb index-rebuild` utility. Consult the documentation for more information. The `return_vals` optional argument for `insert`, `delete` and `update` has been changed to `return_changes`, and works with all write operations (previously, this only worked with single-document writes). The returned object is in a new format that is backwards-incompatible with previous versions. Consult the API documentation for `insert`, `delete` and `update` for details. The `upsert` optional argument to `insert` has been replaced with `conflict` and new allowed values of `error`, `replace` or `update`. This is a backwards-incompatible change. Consult the API documentation for more information.
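A minimal sketch of the new `conflict` and `return_changes` write options and the binary data type described above, assuming the Python driver; the `users` table, the documents and the file name are hypothetical:

```python
import rethinkdb as r  # classic RethinkDB Python driver

conn = r.connect(host='localhost', port=28015)
users = r.db('test').table('users')

# `conflict` replaces the old `upsert` flag: 'error' (default), 'replace' or 'update'.
users.insert({'id': 1, 'name': 'ann', 'visits': 1}, conflict='update').run(conn)

# `return_changes` replaces `return_vals` and now also works for multi-row writes;
# the result carries a `changes` list of {'old_val': ..., 'new_val': ...} pairs.
res = users.insert([{'id': 1, 'visits': 2}, {'id': 2, 'name': 'bob'}],
                   conflict='update', return_changes=True).run(conn)
for change in res['changes']:
    print(change['old_val'], '->', change['new_val'])

# Binary data can be stored directly with the new binary type.
with open('avatar.png', 'rb') as f:
    users.get(1).update({'avatar': r.binary(f.read())}).run(conn)
```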
Server Return old/new values for multi-row write operations (#1382) `upsert` replaced with `conflict` argument for `insert` (#1838) Added binary data type support via `binary` (#137, #2612, #2931) `binary_format=\"raw\"` added to `run` (#2762) Secondary indexes can be renamed with `index_rename` (#2794) Secondary indexes can be duplicated (#2797) Out of date secondary indexes logged on startup (#2798) `r.http` can return a binary object (#2806) Python driver Added Python 3 support (#2502) Data Explorer Support for displaying binary types (#2804, #2865) Server Server names now default to the hostname of the machine (#236, #2548) `distinct` is faster and now works on indexes (#1864) Improve secondary index queue handling (#1947) The array limit is now configurable (#2059) Allow initializing an empty directory (#2359) Max number of extprocs raised (#2391, #2584) Argument count errors are prettier (#2568) Better error reporting in `r.js` (#2748) `index_status` provides more info (#2791) Command line Table names in the CLI are disambiguated (#2360, #2550) Python driver Cleanup unneeded files (#2610) Documentation examples are now PEP8 compliant (#2534) Testing Polyglot targets for Ruby 1.8, 1.9, 2.0, 2.1 (#773) Multiple versions of Python can be tested simultaneously Web UI Added a notification and helpful message for out-of-date secondary indexes (#2799) Server `--runuser` and `--rungroup` set proper permissions when used with `rethinkdb create` (#1722) Improved behavior and error messages for write acks (#2039) Fix `getaddrinfo` error handling (#2110) Fix a bug with machine name generation (#2552, #2569) Fix a bug when compiling on GCC 4.7.2 (#2572) Fix memory corruption error (#2589) Fix stream cache error (#2607) Fix linking issue with RE2 (#2685) Miscellaneous build fixes (#2666, #2827) Fix rare conflict bug in cluster config (#2738) Fix variable initialization error (#2741) Fix bug in `RPCSemilatticeTest.MetadataExchange` (#2758) Changefeeds work on multiple joined servers (#2761) Secondary index functions ignore global optargs (#2767) Secondary indexes sort correctly (#2774, #2789) Fix crashing bug with undefined ordering (#2777) Fix dependency includes in Makefile (#2779) Convert `query_params` to a `map` (#2812) Convert `header_lines` into a `map` (#2818) Improve robustness with big documents (#2832) Wipe out old secondary index tree when post-constructing (#2925) Preliminary fix for web permission issues on CentOS (#2927) ReQL Fix a bug in `delete_at` (#2696) `insert` and `splice` now check array size limit (#2697) Fix error in undocumented `batch_conf` option (#2709) Web UI Improve interval notation for shard boundaries (#2081, #2635) The Data Explorer now remembers the current query on disconnects (#2460) The Data Explorer now hides overflowing text in its popup containers (#2593) Testing Miscellaneous testing improvements (#2396, #2583, #2760, #2787, #2788, Update testing scripts for drivers (#2815) Python driver Make `all` and `any` behave like `and` and `or` (#2659) Use `os.path.splitext` to check file extensions (#2681) JavaScript driver Return errors rather than throw them (#2852, #2883) Many thanks to external contributors from the RethinkDB community for helping us ship RethinkDB" }, { "data": "In no particular order: Sathyanarayanan Gunasekaran (@gsathya) James Costian (@jamescostian) Ed Rooth (@sym3tri) Cole Gleason (@colegleason) Mikhail Goncharov (@metaflow) Elian Gidoni (@eliangidoni) Ivan Fraixedes (@ifraixedes) Brett Griffin (@brettgriffin) Ed Costello (@epc) Juuso 
Haavisto (@9uuso) Nick Verlinde (@npiv) Ayman Mackouly (@1N50MN14) Adam Grandquist (@grandquista) Released on 2014-08-14 Bug fix update. Resolved several memory leaks (#2899, #2840, #2744) Added a `--temp-dir` option to `rethinkdb dump` and `rethinkdb restore` (#2783) The `batchConf` argument to `run` now works in the JavaScript driver (#2707) Fixed the `Accept-Encoding` header sent by `r.http` (#2695) Improved the performance of inserting large objects from the JavaScript driver (#2641) `cursor.close` now takes a callback in the JavaScript driver (#2591) Fixed a bug that caused `illegal to destroy fifo_enforcer_sink_t` errors (#2264) Fixed a bug that caused `Assertion failed: [!parent->draining.is_pulsed()]` errors (#2811) Improved the error message when running into version incompatibilities (#2657) Improved the garbage collection for `r.js` (#2642) Released on 2014-07-07 Bug fix update. Fix a compiler warning: `cluster_version may be used uninitialized` (#2640) Fix code that used `std::move` twice on the same object (#2638) Prepare for live cluster upgrades (#2563) Fix a bug that could lead to inconsistent data (#2579) Released on 2014-06-26 Bug fix update. Fixed a bug that caused `Assertion failed: [ptr_]` errors when shutting down (#2594) Fixed a performance issue in the JSON parser (#2585) The JavaScript driver no longer buffers change feeds (#2582) Fixed a bug that caused `Uncaught exception of type "cannot_perform_query_exc_t"` errors (#2576) No longer crash when a secondary index is named `primary` (#2575) Queries that return `null` are now handled correctly in the Data Explorer (#2573) `r.http` now properly parses headers when following a redirection (#2556) Improved the performance of write operations on sharded tables (#2551) Fixed a bug that caused `r.js` to crash in certain circumstances (#2435) Correctly handle `EPIPE` errors when connecting to an old version of the server (#2422) Fixed a bug that caused `Could not bind socket` errors when using `--bind` (#2405) Fixed a bug that caused `Failed to parse as valid uuid` errors (#2401) Improved the `bad magic number` error message (#2302) `default` now catches index out of bounds errors on streams (#1922) Improved arity error messages in the JavaScript driver (#2449) Released on 2014-06-13 The highlights of this release are the `r.http` command for external data access, change feed support via the new `changes` command, and full support for Node.js promises in the JavaScript driver. Read the release announcement for more details. This release is not compatible with data files from earlier releases. If you have data you want to migrate from an older version of RethinkDB, please follow the migration instructions before upgrading. There are also some backwards incompatible changes in the JavaScript driver. The `hasNext` command for cursors has been removed. `next` can be used instead.
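As a minimal illustration of the 1.13 highlights above (`r.http` for external data access and the new `changes` changefeeds), assuming the Python driver; the `posts` table, the URL and the ids are hypothetical:

```python
import rethinkdb as r  # classic RethinkDB Python driver

conn = r.connect(host='localhost', port=28015)
posts = r.db('test').table('posts')

# r.http fetches external data server-side.
data = r.http('http://httpbin.org/get').run(conn)

# A changefeed: the cursor blocks and yields {'old_val': ..., 'new_val': ...}
# documents as the table is written to (run the writes from another client).
feed = posts.changes().run(conn)
for change in feed:
    print(change)
    break  # stop after the first change in this sketch

# r.args builds a dynamic argument list, e.g. for get_all.
ids = ['a', 'b', 'c']
docs = list(posts.get_all(r.args(ids)).run(conn))
```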
ReQL Added `r.random` for generating random numbers (#865) Made the second argument to `slice` optional () `eq_join` now accepts a function as its first argument, and does not fail if the field doesn't exist (#1764) `nth` can now return a selection, just like `get` (#348) Improved the `master not available` error message (#1811) Switched to the JSON protocol in the Ruby, JavaScript and Python drivers (#2224, #2390) Added the `changes` command for creating live change feeds (#997) Added `r.args` to allow specifying a dynamic number of arguments to commands such as `get_all` (#1854) Added `r.http` for interfacing with external APIs (#1383) Server Added a JSON protocol to replace the protobuf protocol, which is now deprecated (#1868) Added a README describing the structure of the `src/` folder (#2301) Switched to manual versioning of the intra-cluster protocol (#2295) Made the serialization format version-aware" }, { "data": "#2353) Improved the error message when running out of disk space (#1945) JavaScript driver Added support for promises (using bluebird) (#1395) Removed the `hasNext` command (#2497) Added the `on`, `once`, `removeListener` and `removeAllListeners` methods to cursors (#2223) The first argument to `r.connect` has been made optional (#2273) Tests Improved the run-test script and ported it to Python (#2235) Improved the ReQL tests (#1402) Build Symbol files are now generated (#2330) Build with debugging symbols by default (#2323) Added a signature to the OS X package (#1565) Dropped support for GCC 4.4 (#1748) Added an explicit dependency on Python 2 in the `configure` script (#2478) Added a dependency on libcurl (#2400) Server Allocate smaller pages in the cache (#2130) Reduce overhead by handling requests locally on the primary if possible (#2083) Adjusted the value of `chunkprocessingsemaphore` (#2392) Improved backfilling on rotational drives (#2393) Metadata is no longer copied when evaluating `r.db` (#1907) No longer update the stat block when updating secondary indexes (#2431) Block writes are better combined in the cache (#2127) Concurrent garbage collection to improve disk space efficiency (#2457) Testing Added automated performance regression tests (#1963) Server Fixed the threaded coroutine implementation (#2168, #2332) HTTP 500 errors are now accompanied by an error message (#511) Got rid of vestigial memcache support (#803) Made `order_by` and other sortings be stable (#2155) Cleaned up blob_t code to make it more reliable (#2227) Fixed a bug that caused crashes when dropping secondary indexes under load (#2251) Fixed a bug in the JSON parsing code that caused a crash (#2489) Fixed a bug that could cause segfaults (#2491) Avoid high memory consumption on startup (#2329) Disabled Nagle algorithm for outgoing TCP connections (#2529) Remove some potentially objectionable server names (#2468) Fixed a bug that caused `Callstack overflow in a coroutine` errors (#2357) Merged upstream fixes to `cJSON` (#2496) Fixed a bug that could cause a segmentation fault (#2500) Fixed a bug in the serializer garbage byte calculation (#2541) ReQL Added the database name to error messages (#2280) No longer report run-time errors as client errors (#1908) Arguments to `r.expr` are now properly validated (#2384) No longer crash when `r.js` returns a bad datum (#2409) Fixed handling of global optargs (#2525) Ruby driver Ignore `close` errors when reconnecting (#2276) Fixed conflicts with active support (#2284) Added a missing `nesting_depth` argument to `r.expr` (#2408) Modified the 
driver to work with JRuby (#2296) The driver now prefetches cursor data (#2373) JavaScript driver Improved the variable names in error messages (#2461) Web UI Fixed a bug that caused JavaScript exceptions (#2503) Fixed the per-server document count (#1836) The database name is now shown on the table page (#2366) Removed the inconsistent green tick next to the secondary index status (#2084) Fixed email highlighting (#2378) Large responses no longer cause the Data Explorer to become unresponsive (#2481) Fixed a bug triggered by clearing the Data Explorer history (#2389) Tests Converted the memcache tests to use ReQL (#803) Many thanks to external contributors from the RethinkDB community for helping us ship RethinkDB 1.13. In no particular order: Liu Aleaxander (@Aleaxander) Nicolas Viennot (@nviennot) Elian Gidoni (@eliangidoni) Matthew Frazier (@leafstorm) Masatoshi Ishida (@Masatoshi) Released on 2014-05-21 Bug fix update. Fixed a bug that caused `Guarantee failed: [!mod_info->deleted.second.empty() && mod_info->added.second.empty()]` errors (#2285) Fixed the behaviour of `order_by` following `between` (#2307) Fixed a bug that caused `Deserialization of rdb value failed with error archive_result_t::RANGE_ERROR` errors (#2399) JavaScript driver: `reduce` no longer accepts the `base` argument (#2288) Python driver: improved the error message when a cursor's connection is closed (#2291) Python driver: improved the implementation of `r.expr` (#2364, #2337) Released on 2014-04-22 Bug fix update. Fixed a bug that caused `Assertion failed: [page->is_disk_backed()]` errors (#2260) Fixed a bug that caused incorrect query results and frequent server crashes under low memory conditions (#2237) Released on 2014-04-17 Bug fix update. Compilation no longer fails with SEMANTIC_SERIALIZER_CHECK=1 (#2239) Identical `--join` options no longer cause a crash (#2219) Fixed a race condition in the Ruby driver (#2194) Concurrent backfills are now limited to 4 per peer and 12 total (#2211) Failure to `fsync` a directory is now a warning instead of an error (#2255) Packages are now available for Ubuntu 14.04, codename Trusty Tahr (#2101) Released on 2014-04-08 Bug fix update. Fixed a bug that caused `illegal to destroy fifo_enforcer_sink_t` errors (#2092) Fixed a bug that caused `Guarantee failed: [resp != null]` errors (#2214) Fixed a bug that caused `Segmentation fault` errors (#2222) Use less memory for common operations (#2164, #2213) Ruby 2.1.1 support (#2177) Fixed the `PageTest` unit test (#2187) Updated the documentation (#2197) Updated the sample config file (#2205) Released on 2014-03-27 Bug fix update. Fixed crash `evicter.cc at line 124: Guarantee failed: [initialized_]` (#2182) Fixed `index_wait` which did not always work (#2170, #2179) Fixed a segmentation fault (#2178) Added a `--hard-durability` option to import/restore Changed the default `--cache-size` to be more friendly towards machines with less free RAM Changed tables to scale their soft durability throttling based on their cache size Fixed some build failures (#2183, #2174) Fixed the CentOS i686 packages (#2176) Released on 2014-03-26 The highlights of this release are a simplified map/reduce, an experimental ARM port, and a new caching infrastructure. Read the release announcement for more details. This release is not compatible with data files from earlier releases. If you have data you want to migrate from an older version of RethinkDB, please follow the migration instructions before upgrading. There are also some backwards incompatible changes in ReQL, the query language.
`grouped_map_reduce` and `group_by` were replaced by `group` `reduce` no longer takes a `base` argument; use `default` instead `table_create` no longer takes a `cache_size` argument In JavaScript, the calling convention for `run` has changed. Server Added support for the ARM architecture (#1625) Added per-instance cache quota and removed per-table quotas (#97) Added a `--cache-size` command line option that sets the cache size in MiB for a single instance Removed the `cache_size` optional argument for `table_create` Wrote a new and improved cache (#1642) Added gzip compression to the built-in web server (#1746) ReQL `merge` now accepts functions as an argument (#1345) Added an `object` command to build objects from dynamic field names (#1857) Removed the optional `base` argument to `reduce` (#888) Added a `split` command to split strings (#1099) Added `upcase` and `downcase` commands to change the case of a string (#874) Changed how aggregation and grouping is performed (#1096) Removed `grouped_map_reduce` and `groupBy` Added `group` and `ungroup` Changed the behavior of `count`, `sum` and `avg` and added `max` and `min` Web UI Display index status and progress bars in the web UI (#1614) Server Improved the scalability of meta operations (such as table creation) to allow for more nodes in a cluster (#1648) Changed the CPU sharding factor to 8 for improved multi-core scalability (#1043) Batch sizes now scale better, which speeds up operations like `limit` (#1786) Tweaked batch sizes to avoid computing unused results (#1962) Improved throttling and LBA garbage collection to avoid stalls in query throughput (#1820) Many optimizations to avoid having the web UI time out when the server is under load (#1183) Improved backfill performance by using batches for sending and inserting data. (#1971) Reduced wasteful copies when serializing (#1431) Evaluating queries now yields occasionally to avoid timeouts (#1631) Reduced performance impact of backfilling on other queries (#2071) Web UI Improved how the web UI handles large clusters with many tables (#1662) Ruby driver Removed inefficient construction-location backtraces (#1843) Tests Added automated performance tests (#1806) Server Improved the code to avoid heartbeat timeout errors when resharding (#1708) Resharding one table no longer makes other tables unavailable (#1751) Improved the blueprint suggester to distribute shards more evenly (#344) Queries from the data explorer are now interruptible (#1888) No longer fail if IPv6 is not available (#1925) Added support for the new v8 scope API (#1510) Coroutine stacks are now freed to reduce memory consumption (#1670) Added range get, backfill and secondary index reads to the stats (#660) `r.table(...).count()` no longer stalls inserts (#1870) Fixed crashes when adding a secondary index (#1621, #1437) Improved the handling of out-of-memory conditions (#2003) Secondary index construction is now reported as a write operation in the stats (#1953) Fixed a crash that occasionally happened during replication (#1389) `linux_file_t::set_size` no longer makes blocking syscalls (#265) Fixed a crash caught by the unit tests (#1084) Command line `rethinkdb admin` now prints warnings to stderr instead of stdout (#1316) The `import` and `export` scripts now display a row count when done (#1659) Added support for `logfile` and `nodirect_io` in the init script (#1769, #1892) Do not display a stack trace for regular errors printed by the backup scripts (#2098) `rethinkdb export` no longer fails when there are no tables
(#1904) `rethinkdb import` no longer tries to parse CSV files as JSON (#2097) Web UI No longer display wrong number of rows when a string is returned (#1669) The data explorer now properly closes cursors (#1569) The data explorer now displays empty tables correctly (#1698) Links are now relative so they can be proxied from a subdirectory (#1791) The server time reported by the profiler is now accurate (#1784) Improved the flow for removing dead servers (#1366) No longer lower replicas to 0 if the datacenter is primary (#1834) Now displays consistent availability information (#1756) Fixed a XSS security issue (#2018) Changing the number of acks no longer displays the sharding bar (#2023) The backfilling progress bar doesn't disappear when refreshing the page (#1997) Correctly handle newlines in the data explorer (#2021) Sort the table list in alphabetical order (#1704) Added mouseover text to the query execution time (#1940) The replication status no longer blinks (#2019) Fix inconsistencies in dates caused by DST (#2047) Ruby driver Added a missing `close` method on cursors (#1568) Improved conflict handling (#1814) Use `definemethod` instead of `methodmissing`, which improves compatibility with Sinatra (#1896) Python driver Improved the quality of the generated documentation (#1729) Added missing `and` and `or` and a warning for misuse of `|` and `&` (#1582) Added support for `r.row` in `eq_join` (#1810) Added a detailed error message when brackets are not used properly (#1434) `count` with an argument now behaves correctly (#1992) Added missing `get_field` command (#1941) JavaScript driver Added support for `r.row` in `eqJoin` (#1810) Timeout events on the socket are now handled correctly (#1909) No longer use the deprecated ArrayBuffer API (#1803) Fix backtraces for optional arguments (#1935) Changed the syntax of `run` (#1890) Exposed the error constructors (#1926) Fixed the string representation of functions (#1919, #1894) Backtrace printing now works correctly for both protobufjs and node-protobuf (#1879) No longer extend `Array.prototype` (#2112) Build `./configure` now checks for `boost` and can fetch it if it is not" }, { "data": "(#1699) The source distribution now includes the v8 source (#1844) Use Python 2 to build v8 (#1873) Improved how the build system fetches and builds external packages, and changed the default to not fetch anything (#1231) Support for Ubuntu Raring has been dropped (#1924) Released on 2014-01-14 Bug fix update. Fixed a crash on multiple ctrl-c (#1848) Ruby driver: fixed mutable backtraces (#1846) Fixed a build failure on older versions of GCC (#1824) Added missing durability argument to the Javascript driver (#1821) Fixed a crash caused by changing the cluster configuration when there are unresolved issues (#1813) Fixed a bug triggered by using `order_by` followed by `count` (#1796) Tables can now be referenced by full name (e.g. `database.table`) in `rethinkdb admin` (#1795) Fixed a bug triggered by chaining multiple joins (#1793) Fixed a crash occasionally triggered by dropping an index (#1789) The init script now fails when given wrong arguments (#1779) RethinkDB now refuses to start if it cannot open the log file (#1778) Fixed a JavaScript error in the Web UI (#1754) Speed up count queries (#1733) Fix a bug with thread-local storage that caused a segfault on certain platforms (#1731) `get` of a non-existent document followed by `replace` now works as documented (#1570) Released on 2013-12-06 Bug fix update. 
Fixed a bug caused by suggesting `r.ISO8601` in the Data Explorer Cursor in the Data Explorer is restored when a command is selected via dropdown Fixed a bug where queries returning null in the Data Explorer could not be parsed (#1739) Always `fsync` parent directories to avoid data loss (#1703) Fixed IPv6 issues with link-local addresses (#1694) Add some support for Python 3 in the build scripts (#1709) Released on 2013-12-02 Bug fix update. Drivers no longer ignore the timeFormat flag (#1719) RethinkDB now correctly sets the `ResponseType` field in responses to `STOP` queries (#1715) Fixed a bug that caused RethinkDB to crash with a failed guarantee (#1691) Released on 2013-11-25 This release introduces a new query profiler, an overhauled streaming infrastructure, and many enhancements that improve ease and robustness of operation. Read the for more details. Server Removed anaphoric macros (#806) Changed the array size limit to 100,000 (#971) The server now reads blobs off of disk simultaneously (#1296) Improved the batch replace logic (#1468) Added IPv6 support (#1469) Reduced random disk seeks during write transactions (#1470) Merged writebacks on the serializer level (#1471) Streaming improvements: smarter batch sizes are sent to clients (#1543) Reduced the impact of index creation on realtime performance (#1556) Optimized insert performance (#1559) Added support for sending data back as JSON, improving driver performance (#1571) Backtraces now span coroutine boundaries (#1602) Command line Added a progress bar to `rethinkdb import` and `rethinkdb export` (#1415) ReQL Added query profiling, enabled by passing `profile=True` to `run` (#175) Added a sync command that flushes soft writes made on a table (#1046) Made it possible to wait for no-reply writes to complete (#1388) Added a `wait` command Added an optional argument to `close` and `reconnect` Added `indexstatus` and `indexwait` commands (#1562) Python driver Added documentation strings (#808) Added streaming and performance improvements (#1371) Changed the installation procedure for using the C++ protobuf implementation (#1394) Improved the streaming logic (#1364) JavaScript driver Changed the installation procedure for using the C++ protobuf implementation (#1172) Improved the streaming logic (#1364) Packaging Made the version more explicit in the OS X package (#1413) Server No longer access `perfmoncollectiont` after destruction (#1497) Fixed a bug that caused nodes to become unresponsize and added a coroutine profiler (#1516) Made database files compatible between 32-bit and 64-bit" }, { "data": "(#1535) No longer use four times more cache space (#1538) Fix handling of errors in continue queries (#1619) Fixed heartbeat timeout when deleting many tables at once (#1624) Improved signal handling (#1630) Reduced the load caused by using the Web UI on a large cluster (#1660) Command line Made `rethinkdb export` more resilient to low-memory conditions (#1546) Fixed a race condition in `rethinkdb import` (#1597) ReQL Non-indexed `order_by` queries now return arrays (#1566) Type system now includes selections on both arrays and streams (#1566) Fixed wrong `inserted` result (#1547) Fixed crash caused by using unusual strings in `r.js` (#1638) Redefined batched stream semantics (includes a specific fix for the JavaScipt driver as well) (#1544) `r.js` now works with time values (#1513) Python driver Handle interrupted system calls (#1362) Improved the error message when inserting dates with no timezone (#1509) Made `RqlTzInfo` copyable 
(#1588) JavaScript driver Fixed argument handling (#1555, #1577) Fixed `ArrayResult` (#1578, #1584) Fixed pretty-printing of `limit` (#1617) Web UI Made it obvious when errors are JavaScript errors or database errors (#1293) Improved the message displayed when there are more than 40 documents (#1307) Fixed date formatting (#1596) Fixed typo (#1668) Build Fixed item counts in make (#1443) Bump the fetched version of `gperftools` (#1594) Fixed how `termcap` is handled by `./configure` (#1622) Generate a better version number when compiling from a shallow clone (#1636) Released on 2013-10-23 Bug fix update. Server `r.js` no longer fails when interrupted too early (#1553) Data for some `get_all` queries is no longer duplicated (#1541) Fixed crash `Guarantee failed: [!tokenpair->sindexwrite_token.has()]` (#1380) Fixed crash caused by rapid resharding (#728) ReQL `pluck` in combination with `limit` now returns a stream when possible (#1502) Python driver Fixed `undefined variable` error in `tzname()` (#1512) Ruby driver Fixed error message when calling unbound function (#1336) Packaging Added support for Ubuntu 13.10 (Saucy) (#1554) Released on 2013-09-26 Read the for more details. Added multi-indexes (#933) Added selective history deletion to the Data Explorer (#1297) Made `filter` support the nested object syntax (#1325) Added `r.not_` to the Python driver (#1329) The backup scripts are now installed by the Python driver (#1355) Implemented variable-width object length encoding and other improvements to serialization (#1424) Lowered the I/O pool threads' stack size (#1425) Improved datum integer serialization (#1426) Added a thin wrapper struct `threadnum_t` instead of using raw ints for thread numbers (#1445) Re-enabled full perfmon capability (#1474) Fixed the output of `rethinkdb admin help` (#936, #1447) `make` no longer runs `./configure` (#979) Made the Python code mostly Python 3 compatible (still a work in progress) (#1050) The Data Explorer now correctly handles unexpected token errors (#1334) Improved the performance of JSON parsing (#1403) `batchedrgetstream_t` no longer uses an out-of-date interruptor (#1411) The missing `protobuf_implementation` variable was added to the JavaScript driver (#1416) Improved the error and help messages in the backup scripts (#1417) `r.json` is no longer incorrectly marked as non-deterministic (#1430) Fixed the import of `rethinkdb_pbcpp` in the Python driver (#1438) Made some stores go on threads other than just 0 through 3 (#1446) Fixed the error message when using `groupBy` with illegal pattern (#1460) `default` is no longer unprintable in Python (#1476) Released on 2013-09-09 Read the for more details. Bug fix update. Server Fixed a potential crash caused by a coroutine switch in exception handlers (#153) No longer duplicate documents in the secondary indexes (#752) Removed unnecessary conversions between `datum_t` and `cJSON` (#1041) No longer load documents from disk for count queries (#1295) Changed extproc code to use `datumt` instead of" }, { "data": "(#1326) ReQL Use `Buffer` instead of `ArrayBuffer` in the JavaScript driver (#1244) `orderBy` on an index no longer destroys the preceding `between` (#1333) Fixed error in the python driver caused by multiple threads using lambdas (#1377) Fixed Unicode handling in the Python driver (#1378) Web UI No longer truncate server names (#1313) CLI `rethinkdb import` now works with Python 2.6 (#1349) Released on 2013-08-21 Bug fix update. 
The shard suggester now works with times (#1335) The Python backup scripts no longer rely on `pip show` (#1331) `rethinkdb help` no longer uses a pager (#1315, #1308) The web UI now correctly positions `SVGRectElement` (#1314) Fixed a bug that caused a crash when using `filter` with a non-deterministic value (#1299) Released on 2013-08-14 This release introduces date and time support, a new syntax for querying nested objects and 8x improvement in disk usage. Read the release announcement for more details. ReQL `order_by` now accepts a function as an argument and can efficiently sort by index (#159, #1120, #1258) `slice` and `between` are now half-open by default (#869) The behaviour can be changed by setting the new optional `right_bound` argument to `closed` or by setting `left_bound` to `open` `contains` can now be passed a predicate (#870) `merge` is now deep by default (#872) Introduced `literal` to merge flat values `coerce_to` can now convert strings to numbers (#877) Added support for times (#977) `+`, `-`, `<`, `<=`, `>`, `>=`, `==` and `!=`: arithmetic and comparison `during`: match a time with an interval `in_timezone`: change the timezone offset `date`, `time_of_day`, `timezone`, `year`, `month`, `day`, `weekday`, `hour`, `minute` and `second`: accessors `time`, `epoch_time` and `iso8601`: constructors `monday` to `sunday` and `january` to `december`: constants `now`: current time `to_iso8601`, `to_epoch_time`: conversion Add the nested document syntax to functions other than `pluck` (#1094) `without`, `group_by`, `with_fields` and `has_fields` Remove Google Closure from the JavaScript driver (#1194) It now depends on the `protobufjs` and `node-protobuf` libraries Server Added a `--canonical-address HOST[:PORT]` command line option for connecting RethinkDB nodes across different networks (#486) Two instances behind proxies can now be configured to connect to each other Optimize space efficiency by allowing smaller block sizes (#939) Added a `--no-direct-io` startup flag that turns off direct IO (#1051) Rewrote the `extproc` code, making `r.js` interruptible and fixing many crashes (#1097, #1106) Added support for V8 >= 3.19 (#1195) Clear blobs when they are unused (#1286) Web UI Use relative paths (#1053) Build Add support for Emacs' `flymake-mode` (#1161) ReQL Check the type of the callback passed to `next` and `each` in the JavaScript driver (#656) Fixed how some backtraces are printed in the JavaScript driver (#973) Coerce the port argument to a number in the Python driver (#1017) Functions that are polymorphic on objects and sequences now only recurse one level deep (#1045) Affects `pluck`, `has_fields`, `with_fields`, etc In the JavaScript driver, no longer fail when requiring the module twice (#1047) `r.union` now returns a stream when given two streams (#1081) `r.db` can now be chained with `do` in the JavaScript driver (#1082) Improve the error message when combining `for_each` and `return_vals` (#1104) Fixed a bug causing the JavaScript driver to overflow the stack when given an object with circular references (#1133) Don't leak internal functions in the JavaScript driver (#1164) Fix the query printing in the Python driver (#1178) Correctly depend on node >= 0.10 in the JavaScript driver (#1197) Server Improved the error message when there is a version mismatch in data files (#521) The `--no-http-admin` option now disables the check for the web assets folder (#1092) No longer compile JavaScript expressions once per row (#1105) Fixed a crash in the 32-bit version caused by not using 64-bit
file offsets (#1129) Fixed a crash caused by malformed json documents (#1132) Fixed a crash caused by moving `func_t` between threads (#1157) Improved the scheduling of coroutines that sometimes caused heartbeat timeouts (#1169) Fixed `conflict_resolving_disk_mgr_t` to support files over 1TB (#1170) Fixed a crash caused by disconnecting too fast (#1182) Fixed the error message when `js_runner_t::call` times out (#1218) Fixed a crash caused by serializing uninitialised values (#1219) Fixed a bug that caused an assertion failure in `protob_server_t` (#1220) Fixed a bug causing too many file descriptors to be open (#1225) Fixed memory leaks reported by valgrind (#1233) Fixed a crash triggered by the BTreeSindex test (#1237) Fixed some problems with import performance, interruption, parsing, and error reporting (#1252) RethinkDB proxy no longer crashes when interacting with the Web UI (#1276) Tests Fixed the connectivity tests for OS X (#613) Fix the python connection tests (#1110) Web UI Provide a less scary error message when a request times out (#1074) Provide proper suggestions in the presence of comments in the Data Explorer (#1214) The `More data` link in the data explorer now works consistently (#1222) Documentation Improved the documentation for `update` and `pluck` (#1141) Build Fixed bugs that caused make to not rebuild certain files or build files it should not (#1162, #1215, #1216, #1229, #1230, #1272) Released on 2013-07-19 Bug fix update. RethinkDB no longer fails to create the data directory when using `--daemon` (#1191) Released on 2013-07-18 Bug fix update. Fixed `wire_func_t` serialization that caused inserts to fail (#1155) Fixed a bug in the JavaScript driver that caused asynchronous connections to fail (#1150) Removed `nice_crash` and `nice_guarantee` to improve error messages and logging (#1144) `rethinkdb import` now warns when unexpected files are found (#1143) `rethinkdb import` now correctly imports nested objects (#1142) Fixed the connection timeout in the JavaScript driver (#1140) Fixed `r.without` (#1139) Add a warning to `rethinkdb dump` about indexes and cluster config (#1137) Fixed the `debian/rules` makefile to properly build the rethinkdb-dbg package (#1130) Allow multiple instances with different port offsets in the init script (#1126) Fixed `make install` to not use `/dev/stdin` (#1125) Added missing files to the OS X uninstall script (#1123) Fixed the documentation for insert with the returnVals flag in JavaScript (#1122) No longer cache `index.html` in the web UI (#1117) The init script now waits for rethinkdb to stop before restarting (#1115) `rethinkdb` porcelain now removes the new directory if it fails (#1070) Added cooperative locking to the rethinkdb data directory to detect conflicts earlier (#1109) Improved the comments in the sample configuration file (#1078) Config file parsing now allows options that apply to other modes of rethinkdb (#1077) The init script now creates folders with the correct permissions (#1069) Client drivers now time out if the connection handshake takes too long (#1033) Released on 2013-07-04 Bug fix update. Fixed a bug causing `rethinkdb import` to crash on single files Added options to `rethinkdb import` for custom CSV separators and no headers (#1112) Released on 2013-07-03 This release introduces hot backup, atomic set and get operations, significant insert performance improvements, nested document syntax, and native binaries for CentOS / RHEL. Read the release announcement for more details.
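A minimal sketch of the atomic set-and-get writes and the nested document syntax highlighted above, assuming the Python driver of that era and a hypothetical `users` table with `visits`, `name` and `address` fields (hot backup itself is done from the command line with `rethinkdb dump` and `rethinkdb restore`):

```python
import rethinkdb as r  # classic RethinkDB Python driver

conn = r.connect(host='localhost', port=28015)
users = r.db('test').table('users')

# Atomic set-and-get: return_vals returns the old and new row in the same
# round trip as the write (single-document writes only in this release).
res = users.get('ann').update({'visits': r.row['visits'] + 1},
                              return_vals=True).run(conn)
print(res['old_val'], '->', res['new_val'])

# Nested document syntax: pluck specific sub-fields instead of whole objects.
profiles = list(users.pluck({'name': True, 'address': {'city': True}}).run(conn))
```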
ReQL Added `r.json` for parsing JSON strings server-side (#887) Added syntax to `pluck` to access nested documents (#889) `get_all` now takes a variable number of arguments (#915) Added atomic set and get" }, { "data": "(#976) `update`, `insert`, `delete` and `replace` now take an optional `return_vals` argument that returns the values that have been atomically modified Renamed `getattr` to `get_field` and make it polymorphic on arrays (#993) Drivers now use faster protobuf libraries when possible (#1027, #1026, #1025) Drivers now use `r.json` to improve the performance of inserts (#1085) Improved the behaviour of `pluck` (#1095) A field with a non-pluckable value is considered absent Web UI It is now possible to resolve `auth_key` conflicts via the web UI (#1028) The web UI now uses relative paths (#1053) Server Flushes to disk less often to improve performance without affecting durability (#520) CLI Import and export JSON and CSV files (#193) Four new subcommands: `rethinkdb import`, `rethinkdb export`, `rethinkdb dump` and `rethinkdb restore` `rethinkdb admin` no longer requires the `--join` option (#1052) It now connects to localhost:29015 by default Documentation Documented durability settings correctly (#1008) Improved instructions for migration (#1013) Documented the allowed character names for tables (#1039) Packaging RPMs for CentOS (#268) ReQL Fixed the behaviour of `between` with `null` and secondary indexes (#1001) Web UI Moved to using a patched Bootstrap, Handlebars.js 1.0.0 and LESS 1.4.0 (#954) Fixed bug causing nested arrays to be shown as `{...}` in the Data Explorer (#1038) Inline comments are now parsed correctly in the Data Explorer (#1060) Properly remove event listeners from the dashboard view (#1044) Server Fixed a crash caused by shutting down the server during secondary index creation (#1056) Fixed a potential btree corruption bug (#986) Tests ReQL tests no longer leave zombie processes (#1055) Released on 2013-06-19 Fixed a buffer overflow in the networking code Fixed a possible timing attack on the API key In python, `r.table_list()` no longer throws an error (#1005) Fixed compilation failures with clang 3.2 (#1006) Fixed compilation failures with gcc 4.4 (#1011) Fixed problems with resolving a conflicted auth_key (#1024) Released on 2013-06-13 This release introduces basic access control, regular expression matching, new array operations, random sampling, better error handling, and many bug fixes. Read the for more details. 
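A minimal sketch of the regular expression matching, random sampling and array operations introduced in this release, assuming the Python driver of that era; the `users` table and its `email`, `tags`, `nickname` and `name` fields are hypothetical:

```python
import rethinkdb as r  # classic RethinkDB Python driver

conn = r.connect(host='localhost', port=28015)
users = r.db('test').table('users')

# Regular expression matching with `match` (returns null when there is no match,
# which `filter` treats as false).
gmail_users = list(users.filter(lambda u: u['email'].match(r'@gmail\.com$')).run(conn))

# Random sampling of a sequence.
three_random = list(users.sample(3).run(conn))

# Array operations on a `tags` field.
users.get('ann').update({'tags': r.row['tags'].append('admin')}).run(conn)
users.get('ann').update({'tags': r.row['tags'].difference(['spam'])}).run(conn)

# `default` substitutes a value when a field is missing.
names = list(users.map(lambda u: u['nickname'].default(u['name'])).run(conn))
```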
ReQL Added access control with a single, common API key (#266) Improved handshake to the client driver protocol (#978) Added `sample` command for random sampling of sequences (#861, #182) Secondary indexes can be queried with boolean values (#854) Added `onFinished` callback to `each` in JavaScript to improve cursors (#443) Added per-command durability settings (#890) Changed `hard_durability=True` to `durability = 'soft' | 'hard'` Added a `durability` option to `run` Added `with_fields` command, and `pluck` no longer throws on missing attributes (#886) Renamed `contains` to `has_fields` (#885) `has_fields` runs on tables, objects, and arrays, and doesn't throw on missing attributes `contains` now checks if an element is in an array Added `default` command: replaces missing fields with a default value (#884) Added new array operations (#868, #198, #341) `prepend`: prepends an element to an array `append`: appends an element to an array `insert_at`: inserts an element at the specified index `splice_at`: splices a list into another list at the specified index `delete_at`: deletes the element at the specified index `change_at`: changes the element at the specified index to the specified value `+` operator: adds two arrays -- returns the ordered union `` operator: repeats an array `n` times `difference`: removes all instances of specified elements from an array `count`: returns the number of elements in an array `indexes_of`: returns positions of elements that match the specified value in an array `is_empty`: check if an array or table is empty `set_insert`: adds an element to a set `set_intersection`: finds the intersection of two sets `set_union`: returns the union of two sets `set_difference`: returns the difference of two sets Added `match` command for regular expression" }, { "data": "(#867) Added `keys` command that returns the fields of an object (#181) Web UI Document fields are now sorted in alphabetical order in the table view (#832) CLI Added `-v` flag as an alias for `--version` (#839) Added `--io-threads` flag to allow limiting the amount of concurrent I/O operations (#928) Build system Allow building the documentation with Python 3 (#826, #815) Have `make` build the Ruby and Python drivers (#923) Server Fixed a crash caused by resharding during secondary index creation (#852) Fixed style problems: code hygiene (#805) Server code cleanup (#920, #924) Properly check permissions for writing the pid file (#916) ReQL Use the correct function name in backtraces (#995) Fixed callback issues in the JavaScript driver (#846) In JavaScript, call the callback when the connection is closed (#758) In JavaScript, GETATTR now checks the argument count (#992) Tweaked the CoffeeScript source to work with older versions of coffee and node (#963) Fixed a typo in the error handling of the JavaScript driver (#961) Fixed performance regression for `pluck` and `without` (#947) Make sure callbacks get cleared in the Javascript driver (#942) Improved errors when `return` is omitted in Javascript (#914) Web UI Correctly distinguish case sensitive table names (#822) No longer display some results as `{ ... 
}` in the table view (#937) Ensured tables are available before listing their indexes (#926) Fixed the autocompletion for opening square brackets (#917) Tests Modified the polyglot test framework to catch more possible errors (#724) Polyglot tests can now run in debug mode (#614) Build Look for `libprotobuf` in the correct path (#860, #853) Always fetch and use the same version of Handlebars.js (#821, #819, #958) Check for versions of LESS that are known to work (#956) Removed bitrotted MOCKCACHECHECK option (#804) Released on 2013-05-24 Bug fix update. Fix #844: Compilation error when using `./configure --fetch protobuf` resolved Fix #840: Using `.run` in the data explorer gives a more helpful error message Fix #817: Fix a crash caused by adding a secondary index while under load Fix #831: Some invalid queries no longer cause a crash Released on 2013-05-16 Bug fix update. Fix a build error Fix #816: Table view in the data explorer is no longer broken Released on 2013-05-16 This release introduces secondary indexes, stability and performance enhancements and many bug fixes. Read the for more details. Server #69: Bad JavaScript snippets can no longer hang the server `r.js` now takes an optional `timeout` argument #88: Added secondary and compound indexes #165: Backtraces on OS X #207: Added soft durability option #311: Added daemon mode (via --daemon) #423: Allow logging to a specific file (via --log-file) #457: Significant performance improvement for batched inserts #496: Links against libc++ on OS X #672: Replaced environment checkpoints with shared pointers in the ReQL layer, reducing the memory footprint #715: Adds support for noreply writes CLI #566: In the admin CLI, `ls tables --long` now includes a `durability` column Web UI #331: Improved data explorer indentation and brace pairing #355: Checks for new versions of RethinkDB from the admin UI #395: Auto-completes databases and tables names in the data explorer #514: Rewrote and improved the documentation generator #569: Added toggle for Data Explorer brace pairing #572: Changed button styles in the Data Explorer #707: Two step confirmation added to the delete database button ReQL #469: Added an `info` command that provides information on any ReQL value (useful for retreiving the primary key) #604: Allow iterating over result arrays as if they were cursors in the JavaScript client #670: Added a soft durability option to `table_create` Testing #480:" }, { "data": "#517: Improved the integration tests #587: Validate API documentation examples #669: Added color to `make test` output Server Fix #403: Serialization of `perfmonresultt` is no longer 32-bit/64-bit dependent Fix #505: Reduced memory consumption to avoid failure on read query \"tcmalloc: allocation failed\" Fix #510: Make bind to `0.0.0.0` be equivalent to `all` Fix #639: Fixed bug where 100% cpu utilization was reported on OSX Fix #645: Fixed server crash when updating replicas / acks Fix #665: Fixed queries across shards that would trigger `Assertion failed: [!br.pointreplaces.empty()] in src/rdbprotocol/protocol.cc` Fix #750: Resolved intermittent RPCSemilatticeTest.MetadataExchange failure Fix #757: Fixed a crashing bug in the environment checkpointing code Fix #796: Crash when running batched writes to a sharded table in debug mode Documentation Fix #387: Documented the command line option --web-static-directory (custom location for web assets) Fix #503: Updated outdated useOutdated documentation Fix #563: Fixed documentation for `typeof()` Fix #610: Documented optional 
arguments for `tablecreate`: `datacenter` and `cachesize` Fix #631: Fixed missing parameters in documentation for `reduce` Fix #651: Updated comments in protobuf file to reflect current spec Fix #679: Fixed incorrect examples in protobuf spec Fix #691: Fixed faulty example for `zip` in API documentation Web UI Fix #529: Suggestion arrow no longer goes out of bounds Fix #583: Improved Data Explorer suggestions for `table` Fix #584: Improved Data Explorer suggestion style and content Fix #623: Improved margins on modals Fix #629: Updated buttons and button styles on the Tables View (and throughout the UI) Fix #662: Command key no longer deletes text selection in Data Explorer Fix #725: Improved documentation in the Data Explorer Fix #790: False values in a document no longer show up as blank in the tree view ReQL Fix #46: Added top-level `tableCreate`, `tableDrop` and `tableList` to drivers Fix #125: Grouped reductions now return a stream, not an array Fix #370: `map` after `dbList` and `tableList` no longer errors Fix #421: Return values from mutation operations now include all attributes, even if zero (`inserted`,`deleted`,`skipped`,`replaced`,`unchanged`,`errors`) Fix #525: Improved error for `groupBy` Fix #527: Numbers that can be represented as integers are returned as native integers Fix #603: Check the number of arguments correctly in the JavaScript client Fix #632: Keys of objects are no longer magically converted to strings in the drivers Fix #640: Clients handle system interrupts better Fix #642: Ruby client no longer has a bad error message when queries are run on a closed connection Fix #650: Python driver successfully sends `STOP` query when the connection is closed Fix #663: Undefined values are handled correctly in JavaScript objects Fix #678: `limit` no longer automatically converts streams to arrays Fix #689: Python driver no longer hardcodes the default database Fix #704: `typeOf` is no longer broken in JavaScript driver Fix #730: Ruby driver no longer accepts spurious methods Fix #733: `groupBy` can now handle both a `MAKEOBJ` protobuf and a literal `ROBJECT` datum protobuf Fix #767: Server now detects `NaN` in all cases Fix #777: Ruby driver no longer occasionally truncates messages Fix #779: Server now rejects strings that contain a null byte Fix #789: Style improvements in Ruby driver Fix #799: Python driver handles `use_outdated` when specified as a global option argument Testing Fix #653: `make test` now works on OS X Build Fix #475: Added a workaround to avoid `make -j2` segfaulting when building on older versions of `make` Fix #541: `make clean` is more aggressive Fix #630: Removed cruft from the RethinkDB source tree Fix #635: Building fails quickly with troublesome versions of `coffee-script`" }, { "data": "#694: Improved warnings when building with `clang` and `ccache` Fix #717: Fixed `./configure --fetch-protobuf` Migration Fix #782: Improved speed in migration script #508: Removed the JavaScript mock server Released on 2013-05-03. Bug fix update: Increase the TCP accept backlog (#369). Released on 2013-04-19. 
Bug fix update: Improved the documentation Made the output of `rethinkdb help` and `--help` consistent (#643) Clarify details about the client protocol (#649) Cap the size of data returned from rgets not just the number of values (#597) The limit changed from 4000 rows to 1MiB Partial bug fix: Rethinkdb server crashing (#621) Fixed bug: Can't insert objects that use 'self' as a key with Python driver (#619) Fixed bug: [Web UI] Statistics graphs are not plotted correctly in background tabs (#373) Fixed bug: [Web UI] Large JSON Causes Data Explorer to Hang (#536) Fixed bug: Import command doesn't import last row if doesn't have end-of-line (#637) Fixed bug: [Web UI] Cubism doesn't unbind some listeners (#622) Fixed crash: Made global optargs actually propagate to the shards properly (#683) Released on 2013-04-10. Bug fix update: Improve the networking code in the Python driver Fix a crash triggered by a type error when using concatMap (#568) Fix a crash when running `rethinkdb proxy --help` (#565) Fix a bug in the Python driver that caused it to occasionally return `None` (#564) Released on 2013-03-30. Bug fix update: Replace `~` with `About` in the web UI (#485) Add framing documentation to the protobuf spec (#500) Fix crashes triggered by .orderBy().skip() and .reduce(r.js()) (#522, #545) Replace MB with GB in an error message (#526) Remove some semicolons from the protobuf spec (#530) Fix the `rethinkdb import` command (#535) Improve handling of very large queries in the data explorer (#536) Fix variable shadowing in the javascript driver (#546) Released on 2013-03-22. Bug fix update: Python driver fix for TCP streams (#495) Web UI fix that reduces the number of AJAX requests (#481) JS driver: added useOutdated to r.table() (#502) RDB protocol performance fix in release mode. Performance fix when using filter with object shortcut syntax. Do not abort when the `runuser` or `rungroup` options are present (#512) Relased on 2013-03-18. Improved ReQL wire protocol and client drivers New build system Data explorer query history Released on 2013-01-15. Fixed security bug in http server. Released on 2012-12-20. Fixed OS X crash on ReQL exceptions. Released on 2012-12-20. Native OS X support. 32-bit support. Support for legacy systems (e.g. Ubuntu 10.04) Released on 2012-12-15. Updating data explorer suggestions to account for recent `r.row` changes. Released on 2012-12-14. Lots and lots of bug fixes Released on 2012-11-13. Fixed the version string Fixed 'crashing after crashed' Fixed json docs Released on 2012-11-11. Checking for a null ifaattrs Released on 2012-11-11. Local interface lookup is now more robust. Released on 2012-11-09. Fixes a bug in the query engine that causes large range queries to return incorrect results. Released on 2012-11-09. This is the first release of the product. It includes: JSON data model and immediate consistency support Distributed joins, subqueries, aggregation, atomic updates Hadoop-style map/reduce Friendly web and command-line administration tools Takes care of machine failures and network interrupts Multi-datacenter replication and failover Sharding and replication to multiple nodes Queries are automatically parallelized and distributed Lock-free operation via MVCC concurrency There are a number of technical limitations that aren't baked into the architecture, but were not resolved in this release due to time pressure. They will be resolved in subsequent releases. Write requests have minimal batching in memory and are flushed to disk on every write. 
This significantly limits write performance, especially on rotational disks. Range commands" } ]
{ "category": "App Definition and Development", "file_name": "v21.9.1.8000-prestable.md", "project_name": "ClickHouse", "subcategory": "Database" }
[ { "data": "sidebar_position: 1 sidebar_label: 2022 Fix the issue that in case of some sophisticated query with column aliases identical to the names of expressions, bad cast may happen. This fixes . This fixes . This fix may introduce backward incompatibility: if there are different expressions with identical names, exception will be thrown. It may break some rare cases when `enableoptimizepredicate_expression` is set. (). Under clickhouse-local, always treat local addresses with a port as remote. (). Do not allow to apply parametric aggregate function with `-Merge` combinator to aggregate function state if state was produced by aggregate function with different parameters. For example, state of `fooState(42)(x)` cannot be finalized with `fooMerge(s)` or `fooMerge(123)(s)`, parameters must be specified explicitly like `fooMerge(42)(s)` and must be equal. It does not affect some special aggregate functions like `quantile` and `sequence` that use parameters for finalization only. (). Do not output trailing zeros in text representation of `Decimal` types. Example: `1.23` will be printed instead of `1.230000` for decimal with scale 6. This closes . It may introduce slight incompatibility if your applications somehow relied on the trailing zeros. Serialization in output formats can be controlled with the setting `outputformatdecimaltrailingzeros`. Implementation of `toString` and casting to String is changed unconditionally. (). Implement window function `nth_value(expr, N)` that returns the value of the Nth row of the window frame. (). - Add `REPLACE GRANT` feature. (). Functions that return (initial)queryid of the current query. This closes . (). Introduce syntax for here documents. Example `SELECT $doc$VALUE$doc$`. (). New functions `currentProfiles()`, `enabledProfiles()`, `defaultProfiles()`. (). Add new functions `currentRoles()`, `enabledRoles()`, `defaultRoles()`. (). Supported cluster macros inside table functions 'cluster' and 'clusterAllReplicas'. (). Added support for custom query for MySQL, PostgreSQL, ClickHouse, JDBC, Cassandra dictionary source. Closes . (). add column default_database to system.users. (). Added `bitmapSubsetOffsetLimit(bitmap, offset, cardinalitylimit)` function. It creates a subset of bitmap limit the results to `cardinalitylimit` with offset of `offset`. (). Add support for `bzip2` compression method for import/export. Closes . (). - Add replicated storage of user, roles, row policies, quotas and settings profiles through ZooKeeper (experimental). (). Add \"tupleToNameValuePairs\", a function that turns a named tuple into an array of pairs. (). Enable using constants from with and select in aggregate function parameters. Close . (). Added ComplexKeyRangeHashed dictionary. Closes . (). Compile aggregate functions `groupBitOr`, `groupBitAnd`, `groupBitXor`. (). Compile columns with `Enum` types. (). - Vectorize the SUM of Nullable integer types with native representation (, ). (). Don't build sets for indices when analyzing a query. (). Improve latency of short queries, that require reading from tables with large number of columns. (). Share file descriptors in concurrent reads of the same files. There is no noticeable performance difference on Linux. But the number of opened files will be significantly (10..100 times) lower on typical servers and it makes operations easier. See . (). Specialize date time related comparison to achieve better performance. This fixes . (). 
Improve the performance of fast queries when `maxexecutiontime=0` by reducing the number of `clock_gettime` system calls. (). Less number of `clock_gettime` syscalls that may lead to performance improvement for some types of fast queries. (). Add error id (like `BAD_ARGUMENTS`) to exception messages. This closes . (). Remove GLOBAL keyword for IN when scalar function is passed. In previous versions, if user specified `GLOBAL IN f(x)` exception was thrown." }, { "data": "Apply aggressive IN index analysis for projections so that better projection candidate can be selected. (). convert timestamp and timestamptz data types to DateTime64 in postgres engine. (). Check for non-deterministic functions in keys, including constant expressions like `now()`, `today()`. This closes . This closes . (). Don't throw exception when querying `system.detached_parts` table if there is custom disk configuration and `detached` directory does not exist on some disks. This closes . (). Add information about column sizes in `system.columns` table for `Log` and `TinyLog` tables. This closes . (). Added `outputformatavrostringcolumn_pattern` setting to put specified String columns to Avro as string instead of default bytes. Implements . (). - Add `system.warnings` table to collect warnings about server configuration. (). Check hash function at table creation, not at sampling. Add settings in MergeTreeSettings, if someone create a table with incorrect sampling column but sampling never be used, disable this settings for starting the server without exception. (). Make `toTimeZone` monotonicity when timeZone is a constant value to support partition puring when use sql like:. (). - When client connect to server, he receives information about all warnings that are already were collected by server. (It can be disabled by using option `--no-warnings`). (). Add a setting `functionrangemaxelementsin_block` to tune the safety threshold for data volume generated by function `range`. This closes . (). Control the execution period of clear old temporary directories by parameter with default value. . (). Allow to reuse connections of shards among different clusters. It also avoids creating new connections when using `cluster` table function. (). Add events to profile calls to sleep / sleepEachRow. (). Save server address in history URLs in web UI if it differs from the origin of web UI. This closes . (). Add ability to set Distributed directory monitor settings via CREATE TABLE (i.e. `CREATE TABLE dist (key Int) Engine=Distributed(cluster, db, table) SETTINGS monitorbatchinserts=1` and similar). (). Fix behaviour with non-existing host in user allowed host list. (). Added comments for the code written in https://github.com/ClickHouse/ClickHouse/pull/24206; the code has been improved in several places. (). Enable `usehedgedrequests` setting that allows to mitigate tail latencies on large clusters. (). Updated protobuf to 3.17.3. Changelogs are available on https://github.com/protocolbuffers/protobuf/releases. (). After https://github.com/ClickHouse/ClickHouse/pull/26377. Encryption algorithm now should be specified explicitly if it's not default (`aes128ctr`):. (). Apply `LIMIT` on the shards for queries like `SELECT FROM dist ORDER BY key LIMIT 10` w/ `distributedpushdownlimit=1`. Avoid running `Distinct`/`LIMIT BY` steps for queries like `SELECT DISTINCT shadingkey FROM dist ORDER BY key`. Now `distributedpushdownlimit` is respected by `optimizedistributedgroupbyshardingkey` optimization. (). 
- Set client query kind for mysql and postgresql handler. (). Executable dictionaries (ExecutableDictionarySource, ExecutablePoolDictionarySource) enable creation with DDL query using clickhouse-local. Closes . (). Add round-robin support for clickhouse-benchmark (it does not differ from the regular multi host/port run except for statistics report). (). Improve the high performance machine to use the kafka engine. and it can recuce the query node work load. (). Avoid hanging clickhouse-benchmark if connection fails (i.e. on EMFILE). (). Fix excessive (x2) connect attempts with skipunavailableshards. (). - `mapPopulatesSeries` function supports `Map` type. (). Improve handling of KILL QUERY requests. (). SET PROFILE now applies constraints too if they're set for a passed profile. (). Support multiple keys for encrypted disk. Display error message if the key is probably wrong. (see https://github.com/ClickHouse/ClickHouse/pull/26465#issuecomment-882015970). (). remove uncessary exception thrown. (). Watchdog is disabled in docker by default. Fix for not handling ctrl+c. (). Changing default roles affects new sessions only." }, { "data": "Less verbose internal RocksDB logs. This closes . (). Expose rocksdb statistics via system.rocksdb table. Read rocksdb options from ClickHouse config (`rocksdb`/`rocksdb_TABLE` keys). (). Updated extractAllGroupsHorizontal - upper limit on the number of matches per row can be set via optional third argument. ... (). Now functions can be shard-level constants, which means if it's executed in the context of some distributed table, it generates a normal column, otherwise it produces a constant value. Notable functions are: `hostName()`, `tcpPort()`, `version()`, `buildId()`, `uptime()`, etc. (). Merge join correctly handles empty set in the right. (). Improve compatibility with non-whole-minute timezone offsets. (). Enable distributedpushdown_limit by default. (). Improved the existence condition judgment and empty string node judgment when clickhouse-keeper creates znode. (). Add compression for `INTO OUTFILE` that automatically choose compression algorithm. Closes . (). add a new metric called MaxPushedDDLEntryID which is the maximum ddl entry id that current node push to zookeeper. (). Allow to pass query settings via server URI in Web UI. (). Added columns `replicaisactive` that maps replica name to is replica active status to table `system.replicas`. Closes . (). Try recording `query_kind` even when query fails to start. (). Mark window functions as ready for general use. Remove the `allowexperimentalwindow_functions` setting. (). Memory client in client. (). Support schema for postgres database engine. Closes . (). Split global mutex into individual regexp construction. This helps avoid huge regexp construction blocking other related threads. Not sure how to proper test the improvement. (). Add 10 seconds cache for S3 proxy resolver. (). Add new index data skipping minmax index format for proper Nullable support. (). Memory consumed by bitmap aggregate functions now is taken into account for memory limits. This closes . (). Add two settings `maxhyperscanregexplength` and `maxhyperscanregexptotal_length` to prevent huge regexp being used in hyperscan related functions, such as `multiMatchAny`. (). Add setting `logformattedqueries` to log additional formatted query into `system.query_log`. 
It's useful for normalized query analysis because functions like `normalizeQuery` and `normalizeQueryKeepNames` don't parse/format queries in order to achieve better performance. (). Add Cast function for internal usage, which will not preserve type nullability, but non-internal cast will preserve according to setting castkeepnullable. Closes . (). Send response with error message if HTTP port is not set and user tries to send HTTP request to TCP port. (). Use bytes instead of strings for binary data in the GRPC protocol. (). Log client IP address if authentication fails. (). Disable arrayJoin on partition expressions. (). - Add `FROM INFILE` command. (). Enables query parameters to be passed in the body of http requests. (). Remove duplicate index analysis and avoid possible invalid limit checks during projection analysis. (). Fix potential crash if more than one `untuple` expression is used. (). Remove excessive newline in `threadname` column in `system.stacktrace` table. This fixes . (). Fix logical error on join with totals, close . (). Fix zstd decompression in case there are escape sequences at the end of internal buffer. Closes . (). Fixed rare bug in lost replica recovery that may cause replicas to diverge. (). Fix `optimizedistributedgroupbyshardingkey` for multiple columns (leads to incorrect result w/ `optimizeskipunusedshards=1`/`allownondeterministicoptimizeskipunused_shards=1` and multiple columns in sharding key expression). (). Fix possible crash when login as dropped user. This PR fixes . (). Fix infinite non joined block stream in `partialmergejoin` close . (). Now, scalar subquery always returns `Nullable` result if it's type can be" }, { "data": "It is needed because in case of empty subquery it's result should be `Null`. Previously, it was possible to get error about incompatible types (type deduction does not execute scalar subquery, and it could use not-nullable type). Scalar subquery with empty result which can't be converted to `Nullable` (like `Array` or `Tuple`) now throws error. Fixes . (). Fix some fuzzed msan crash. Fixes . (). Fix broken name resolution after rewriting column aliases. This fixes . (). Fix issues with `CREATE DICTIONARY` query if dictionary name or database name was quoted. Closes . (). Fix crash in rabbitmq shutdown in case rabbitmq setup was not started. Closes . (). Update `chown` cmd check in clickhouse-server docker entrypoint. It fixes the bug that cluster pod restart failed (or timeout) on kubernetes. (). Fix incorrect function names of groupBitmapAnd/Or/Xor. This fixes. (). Fix history file conversion if file is empty. (). Fix potential nullptr dereference in window functions. This fixes . (). ParallelFormattingOutputFormat: Use mutex to handle the join to the collector_thread (https://github.com/ClickHouse/ClickHouse/issues/26694). (). Sometimes SET ROLE could work incorrectly, this PR fixes that. (). Do not remove data on ReplicatedMergeTree table shutdown to avoid creating data to metadata inconsistency. (). Add `eventtimemicroseconds` value for `REMOVEPART` in `system.partlog`. In previous versions is was not set. (). Aggregate function parameters might be lost when applying some combinators causing exceptions like `Conversion from AggregateFunction(topKArray, Array(String)) to AggregateFunction(topKArray(10), Array(String)) is not supported`. It's fixed. Fixes and . (). Fix library-bridge ids load. (). Fix error `Missing columns: 'xxx'` when `DEFAULT` column references other non materialized column without `DEFAULT` expression. 
Fixes . (). Fix reading of custom TLDs (stops processing with lower buffer or bigger file). (). Fix \"Unknown column name\" error with multiple JOINs in some cases, close . (). Now partition ID in queries like `ALTER TABLE ... PARTITION ID xxx` validates for correctness. Fixes . (). (). Fixed `cache`, `complexkeycache`, `ssdcache`, `complexkeyssdcache` configuration parsing. Options `allowreadexpiredkeys`, `maxupdatequeuesize`, `updatequeuepushtimeoutmilliseconds`, `querywaittimeout_milliseconds` were not parsed for dictionaries with non `cache` type. (). Fix synchronization in GRPCServer This PR fixes . (). Fix uninitialized memory in functions `multiSearch` with empty array, close . (). In rare cases `system.detached_parts` table might contain incorrect information for some parts, it's fixed. Fixes . (). Fix on-disk format breakage for secondary indices over Nullable column (no stable release had been affected). (). Fix column structure in merge join, close . (). In case of ambiguity, lambda functions prefer its arguments to other aliases or identifiers. (). Fix mutation stuck on invalid partitions in non-replicated MergeTree. (). Fix `distributedgroupbynomerge=2`+`distributedpushdownlimit=1` or `optimizedistributedgroupbyshardingkey=1` with `LIMIT BY` and `LIMIT OFFSET`. (). Fix errors like `Expected ColumnLowCardinality, gotUInt8` or `Bad cast from type DB::ColumnVector<char8_t> to DB::ColumnLowCardinality` for some queries with `LowCardinality` in `PREWHERE`. Fixes . (). Fix `Cannot find column` error for queries with sampling. Was introduced in . Fixes . (). Fix Mysql protocol when using parallel formats (CSV / TSV). (). Fixed incorrect validation of partition id for MergeTree tables that created with old syntax. (). Fix incorrect result for query with row-level security, prewhere and LowCardinality filter. Fixes . (). /proc/info contains metrics like. (). Fix distributed queries with zero shards and aggregation. (). fix metric BackgroundMessageBrokerSchedulePoolTask, maybe mistyped. (). Fix crash during projection materialization when some parts contain missing columns. This fixes . (). Fixed underflow of the time value when constructing it from components. Closes ." }, { "data": "After setting `max_memory_usage` to non-zero value it was not possible to reset it back to 0 (unlimited). It's fixed. (). - Fix bug with aliased column in `Distributed` table. (). Fixed another case of `Unexpected merged part ... intersecting drop range ...` error. (). Fix postgresql table function resulting in non-closing connections. Closes . (). Fix bad type cast when functions like `arrayHas` are applied to arrays of LowCardinality of Nullable of different non-numeric types like `DateTime` and `DateTime64`. In previous versions bad cast occurs. In new version it will lead to exception. This closes . (). Fix column filtering with union distinct in subquery. Closes . (). After https://github.com/ClickHouse/ClickHouse/pull/26384. To execute `GRANT WITH REPLACE OPTION` now the current user should have `GRANT OPTION` for access rights it's going to grant AND for access rights it's going to revoke. (). After https://github.com/ClickHouse/ClickHouse/pull/25687. Add backquotes for the default database shown in CREATE USER. (). Remove duplicated source files in CMakeLists.txt in arrow-cmake. (). Fix possible crash when asynchronous connection draining is enabled and hedged connection is disabled. (). Prevent crashes for some formats when NULL (tombstone) message was coming from Kafka. Closes . (). 
Fix a rare bug in `DROP PART` which can lead to the error `Unexpected merged part intersects drop range`. (). Fix a couple of bugs that may cause replicas to diverge. (). Update RocksDB to 2021-07-16 master. (). `clickhouse-test` supports SQL tests with templates. (). Fix /clickhouse/window functions/tests/non distributed/errors/error window function in join. (). Enabling RBAC TestFlows tests and crossing out new fails. (). Tests: Fix CLICKHOUSECLIENTSECURE with the default config. (). Fix linking of auxiliar programs when using dynamic libraries. (). Add CMake options to build with or without specific CPU instruction set. This is for and . (). Add support for build with `clang-13`. This closes . (). Improve support for build with `clang-13`. (). Rename `MaterializeMySQL` to `MaterializedMySQL`. (). NO CL ENTRY: 'Modify code comments'. (). NO CL ENTRY: 'Revert \"Datatype Date32, support range 1925 to 2283\"'. (). NO CL ENTRY: 'Fix CURRDATABASE empty for 01034movepartitionfromtablezookeeper.sh'. (). NO CL ENTRY: 'DOCSUP-12413: macros support in functions cluster and clusterAllReplicas'. (). NO CL ENTRY: 'Revert \"less sys calls #2: make vdso work again\"'. (). NO CL ENTRY: 'Revert \"Do not miss exceptions from the ThreadPool\"'. (). Fix prometheus metric name (). Add comments for the implementations of the pad functions (). Change color in client for double colon (). Remove misleading stderr output (). Merging . (). Fix bad code (default function argument) (). Support for `pread` in `ReadBufferFromFile` (). Fix ReadBufferFromS3 (). Fix output of TSV in integration tests (). Enum type additional support for compilation (). Bump poco (now poco fork has CI via github actions) (). Add check for sqlite database path (). Fix arcadia (). yandex/clickhouse-test-base Dockerfile fix (mysql connector moved) (). Improve readfileto_stringcolumn test compatibility (). Make socket poll() 7x faster (by replacing epoll() with poll()) (). Modifications to an obscure Yandex TSKV format (). Fixing RBAC sample by tests in TestFlows. (). Fix error in stress test script (). Fix flaky integration test about \"replicated max parallel fetches\". (). Continuation of (). Enabling all TestFlows modules except LDAP after Kerberos merge. (). Fix flaky test 01293clientinteractiveverticalmultiline_long (). Small bugfix in Block (). More integration tests improvements. (). Fix arcadia (). Add separate step to read from remote. (). Fix calculating of intersection of access rights. (). Less logging in AsynchronousMetrics" }, { "data": "stress tests report improvements (). Fix failed assertion in RocksDB in case of bad_alloc exception during batch write (). Add integrity for loaded scripts in play.html (). Relax condition in flaky test (). Split FunctionsCoding into several files (). Remove MySQLWireContext (). Fix \"While sending batch\" (on Distributed async send) (). Tests fixes v21.9.1.7477 (). Fix flaky testreplicatedmutations (due to lack of threads in pool) (). Fix undefined-behavior in DirectoryMonitor (for exponential back off) (). Remove some code (). Rewrite distributed DDL to Processors (). Fix flaky test `distributedddloutput_mode` (). Update link to dpkg-deb in dockerfiles (). SELECT String from ClickHouse as Avro string - PartialMatch (). Fix arcadia (). Fix build under AppleClang 12 (). fix lagInFrame for nullable types (). Handle empty testset in 'Functional stateless tests flaky check' (). Remove some streams. (). Fix flaky test 01622defaultsforurlengine (). 
Fix flaky test 01509checkmanyparallelquorum_inserts (). Fix one possible cause of tests flakiness (). Remove some code, more C++ way (). Fix mysqlkillsyncthreadrestore_test (). Minor bugfix (). copypaste error (). Fix flaky test `mutationstuckafterreplacepartition` (). Log errors on integration test listing error in ci-runner (). more debug checks for window functions (). record server exit code in fuzzer (). Remove more streams. (). 01946testwronghostname_access: Clear DNS in the end (). Setting mincounttocompileaggregate_expression fix (). Compile aggregate functions profile events fix (). Fix use after free in AsyncDrain connection from S3Cluster (). Fixed wrong error message in `S3Common` (). Lock mutex before access to std::cerr in clickhouse-benchmark (). Remove some output streams (). Merging (Bolonini/readfromfile) (). Heredoc updated tests (). Fix double unlock in RocksDB (). Fix sqlite engine attach (). Remove unneeded mutex during connection draining (). Make sure table is readonly when restarting fails. (). Check stdout for messages to retry in clickhouse-test (). Fix killing unstopped containers in integration tests. (). Do not start new hedged connection if query was already canceled. (). Try fix rabbitmq tests (). Wait for self datasource to be initialized in testjdbcbridge (). Flush LazyOutputFormat on query cancel. (). Try fix timeout in functional tests with pytest (). Compile aggregate functions without key (). Introduce sessions (). Fix rabbitmq sink (). One more library bridge fix (). Fix keeper bench compilation (). Better integration tests (). Maybe fix extremely rare `intersecting parts`. (). Try increase diff upper bound (). Enable Arrow format in Arcadia (). Improve test compatibility (00646urlengine and 01854HTTPdict_decompression) (). Update NuRaft (). 01921datatypedate32: Adapt it to work under Pacific/Fiji (). Remove unused files that confuse developers (). Set allowremotefszerocopy_replication to true by default (). Hold context in HedgedConnections to prevent use-after-free on settings. (). Fix client options in stress test (). fix window function partition boundary search (). Print trace from std::terminate exception line-by-line to make it grep easier (). Fix system.zookeeper_log initialization (). Benchmark script fix (). Fix 01600quotabyforwardedip (). 01674executabledictionaryimplicitkey: executable_dictionary: Use printf (). Remove testkeeperserver usage (for {operation/session}timeoutms) (). Set insertquorumtimeout to 1 minute for tests. (). Adjust 00537_quarters to be timezone independent (). Help with (). Attempt to fix flaky 00705dropcreatemergetree (). Improved `runner` to use `pytest` keyword expressions (). Improve 01006simpodemptypartsinglecolumnwrite (). Improved logging of `hwmon` sensor errors in `AsynchronousMetrics` (). Fix assertions in Replicated database (). Make test 01852castoperator independent of timezone (). Moving to TestFlows 1.7.20 that has native support for parallel tests. (). library bridge fixes (). Fix synchronization while updating from the config of an encrypted disk." }, { "data": "Fix excessive logging in NuRaft on server shutdown (). Fix testmergetrees3failover with debug build (). Stateless tests: Keep an DNS error free log (). Normalize hostname in stateless tests (). Try update AMQP-CPP (). Try update arrow (). GlobalSubqueriesVisitor external storage check fix (). Better code around decompression (). Add test for parsing maps with integer keys (). Fix arcadia src/Access gtest (). 
Implement `legacycolumnnameoftuple_literal` in a less intrusive way (). Updated readIntTextUnsafe (). Safer `ReadBufferFromS3` for merges and backports (). properly check the settings in perf test (). Save information about used functions/tables/... into query_log on error (). Fix polling of /sys/block (). Allow parallel execution of .sql tests with ReplicatedMergeTree (by using {database} macro) (). Fix NLP performance test (). Update changelog/README.md (). more careful handling of reconnects in fuzzer (). ADDINCL proper fast_float directory (). DatabaseReplicatedWorker logstokeep race fix (). Revert . User-level settings will affect queries from view. (). Using formatted string literals in clickhouse-test, extracted sort key functions and stacktraces printer (). Fix polling of /sys/block in case of block devices reopened on error (). Remove streams from dicts (). 01099operatorsdateandtimestamp: Use dates that work with all available timezones (). Improve kafka integration test error messages (). Improve 00738lockforinnertable stability (). Add and check system.projection_parts for database filter (). Update PVS checksum (). Fix 01300clientsavehistorywhenterminatedlong (). Try update contrib/zlib-ng (). Fix flacky test (). Add and check system.mutations for database filter (). Correct the key data type used in mapContains (). Fix tests for WithMergeableStateAfterAggregationAndLimit (). Do not miss exceptions from the ThreadPool (). Accept error code by error name in client test hints (). Added reserve method to Block (). make it possible to cancel window functions on ctrl+c (). Fix Nullable const columns in JOIN (). Fix Logical error: 'Table UUID is not specified in DDL log' (). Fix 01236graphitemt for random timezones (). Add timeout for integration tests runner (). Fix polling of /sys/class on errors (). fix recalculate QueryMemoryLimitExceeded event error (). Fix 01961roaringmemory_tracking for split builds (). Improve the experience of running stateless tests locally (). Fix integration tests (). Dictionaries refactor (). Less Stopwatch.h (). Aggregation temporary disable compilation without key (). Removed some data streams (). Remove streams from lv (). fix ProfileEvents::CompileFunction (). Refactor mysql format check (). enable part_log by default (). Remove the remains of ANTLR in the tests (). MV: Improve text logs when doing parallel processing (). Fix testsqliteodbchasheddictionary (). Add a test for (). Disable memory tracking for roaring bitmaps on Mac OS (). Use only SSE2 in \"unbundled\" build (). Remove trash (). Fix stress test in `~CompressedWriteBuffer` (). Mark tests for `DatabaseReplicated` as green (). Removed DenseHashMap, DenseHashSet (). Map data type parsing tests (). Refactor arrayJoin check on partition expressions (). Fix test 01014lazydatabaseconcurrentrecreatereattachandshowtables (). Disable jemalloc under OSX (). Fix jemalloc under osx (zone_register() had been optimized out again) (). Fix intersect/except with limit (). Add HTTP string parsing test (). Fix some tests (). Set function divide as suitable for short-circuit in case of Nullable(Decimal) (). Remove unnecessary files (). Revert \"Mark tests for `DatabaseReplicated` as green\" (). Remove hardening for watches in DDLWorker (). Stateless test: Cleanup leftovers (). Dictionaries key types refactoring (). Update 01822shortcircuit.reference (after merging ) (). Proper shutdown global context (). 01766todatetime64notimezonearg: Use a date without timezone changes (). 
Fix setting name \"allowexperimentaldatabasematerializedpostgresql\" in the error message (). Fix bug in short-circuit found by fuzzer ()." } ]
{ "category": "App Definition and Development", "file_name": "Patterns.md", "project_name": "Tremor", "subcategory": "Streaming & Messaging" }
[ { "data": "```rust self.pipelines.retain(|item| item != to_delete) ``` the loop: ```rust for (idx, in_port) in outgoing { self.stack.push((*idx, in_port.clone(), event.clone())) } ``` for (things that deref to) slices becomes: ```rust if let Some((lastoutgoing, otheroutgoing)) = outgoing.split_last() { // Iterate over all but the last `outgoing`, cloning the `event` for (idx, inport) in otheroutgoing { self.stack.push((*idx, in_port.clone(), event.clone())) } // Handle the last `outgoing`, consuming the `event` let (idx, inport) = lastoutgoing; self.stack.push((*idx, in_port, event)) } else { // `outgoing` was empty } ``` or for iterators over non-slices becomes: ```rust if outgoing.is_empty() { // `outgoing` was empty } else { let len = outgoing.len(); // Iterate over all but the last `outgoing`, cloning the `event` for (idx, in_port) in outgoing.iter().take(len - 1) { self.stack.push((*idx, in_port.clone(), event.clone())) } // Handle the last `outgoing`, consuming the `event` let (idx, in_port) = &outgoing[len - 1]; self.stack.push((*idx, in_port.clone(), event)) } ``` ```rust self.metrics[idx].outputs.entry(inport.clone()).orinsert(0) += 1; ``` becomes where the clone is limited to only happen when the element doesn't exist. ```rust if let Some(count) = self.metrics[*idx].outputs.getmut(inport) { *count += 1; } else { self.metrics[*idx].outputs.insert(in_port.clone(), 1); } ``` https://docs.rs/downcast-rs/1.1.0/downcast_rs/ allows downcasting on traits. An example can be found in `TremorAggrFn`. ```rust pub trait TremorAggrFn: DowncastSync + Sync + Send { //... fn merge(&mut self, src: &dyn TremorAggrFn) -> FResult<()>; //... } impl_downcast!(sync TremorAggrFn); ``` With `impl_downcast` we can can pass src as `dyn TremorAggrFn` instead of `dyn Any` locking down" } ]
{ "category": "App Definition and Development", "file_name": "01-dynamic-log-level.md", "project_name": "Hazelcast IMDG", "subcategory": "Database" }
[ { "data": "Support dynamic log level changing without restarting a member. Occasionally, users want to make the logging more verbose for a short period of time without restarting members. For instance, that is useful while diagnosing some issue. Provide an internal API for dynamic log level adjustment. Make the API accessible through REST. Make the API accessible through JMX. Providing public API. Per logger granularity of the log level adjustment. Providing support for MC. Providing support for CLI. User wants to obtain a more detailed log on a certain member. User makes the log level more verbose on the member. User reproduces an issue in question. User resets the log level back. User collects the log. User inspects the log or sends it to support. Hazelcast supports multiple logger frameworks out of the box. The difference between logging backends is abstracted away by `LoggerFactory` and `ILogger` interfaces which belong to the public API. Since we don't want to provide any public API for the log level adjustment, internal counterparts should be introduced: `InternalLoggerFactory` and `InternalLogger`: ```java public interface InternalLoggerFactory { / Sets the levels of all the loggers known to this logger factory to the given level. If a certain logger was already preconfigured with a more verbose level than the given level, it will be kept at that more verbose level. * @param level the level to set. */ void setLevel(@Nonnull Level level); / Resets the levels of all the loggers known to this logger factory back to the default preconfigured values. Basically, undoes all the changes done by the previous calls to {@link #setLevel}, if there were any. */ void resetLevel(); } ``` ```java public interface InternalLogger { / Sets the level of this logger to the given level. * @param level the level to set, can be {@code null} if the underlying logging framework gives some special meaning to it (like inheriting the log level from some parent object). */ void setLevel(@Nullable Level level); } ``` For each logging backend Hazelcast provides a `LoggerFactory` implementation which in turn provides an `ILogger` implementation. To support dynamic log level changing the implementations should additionally implement the internal counterpart (`InternalLoggerFactory` and `InternalLogger`). The following backends support the dynamic log level changing currently: JDK/JUL (default) log4j log4j2 The only unsupported backend is slf4j since it doesn't provide any public or internal API for level changing. For each logging backend supported by slf4j a separate specialized code should be written to extract the loggers from internal slf4j" }, { "data": "The problem is that the wrappers are internal and specific for every logging backend, so they are not necessarily stable across versions of slf4j. In theory, we could support some set of popular logging backends for slf4j, but that would require inspecting at least recent versions of slf4j and wrappers to understand how stable the implementation of each wrapper is and how we could access it, regularly or through reflection. For that reasons slf4j support was postponed. `LoggingServiceImpl` should provide methods for getting, setting and resetting the log level of the `loggerFactory` managed by it: ```java / @return the log level of this logging service previously set by {@link #setLevel}, or {@code null} if no level was set or it was reset by {@link #resetLevel}. 
*/ public @Nullable Level getLevel(); / Sets the levels of all the loggers known to this logger service to the given level. If a certain logger was already preconfigured with a more verbose level than the given level, it will be kept at that more verbose level. <p> WARNING: Keep in mind that verbose log levels like {@link Level#FINEST} may severely affect the performance. * @param level the level to set. @throws HazelcastException if the underlying {@link LoggerFactory} doesn't implement {@link InternalLoggerFactory} required for dynamic log level changing. */ public void setLevel(@Nonnull Level level); / Parses the given string level into {@link Level} and then sets the level using {@link #setLevel(Level)}. * @param level the level to parse, see {@link Level#getName()} for available level names. @throws IllegalArgumentException if the passed string can't be parsed into a known {@link Level}. */ public void setLevel(@Nonnull String level); / Resets the levels of all the loggers known to this logging service back to the default reconfigured values. Basically, undoes all the changes done by the previous calls to {@link #setLevel}, if there were any. */ public void resetLevel(); ``` Log level changes performed by `LoggingServiceImpl` should be reported to audit log. REST endpoint at `/hazelcast/rest/log-level` should be exposed to users; users should be able to: GET it to learn the current log level set, POST to it to set the log level to the value they want to. REST endpoint at `/hazelcast/rest/log-level/reset` should be exposed to users; users should be able to POST to it to reset the log level back. The endpoints should have a proper `RestEndpointGroup` assigned to control the access, mutating operations should be password protected. Logging service should expose its JMX MBean (`LoggingServiceMBean`) to users. User should be able to get level, set the level and reset the level. Cover every logging backend with a separate test to make sure the applied log level is actually taking effect on the logging backend. Provide functional tests for REST and JMX." } ]
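To make the remember-and-restore behavior of `setLevel`/`resetLevel` concrete for the default JUL backend, here is a small standalone sketch. It is not Hazelcast code: the `JulLevelAdjuster` class, its bookkeeping map, and the example logger name are assumptions made only to illustrate the contract that the `InternalLoggerFactory` implementations are expected to follow.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.logging.Level;
import java.util.logging.Logger;

public class JulLevelAdjuster {

    // Remembers each logger's preconfigured level (null means the level is inherited)
    // so that resetLevel() can undo what setLevel() changed.
    private final Map<Logger, Level> originalLevels = new HashMap<>();

    /** Applies the requested level, keeping loggers that are already configured more verbosely. */
    public synchronized void setLevel(Level level, Logger... loggers) {
        for (Logger logger : loggers) {
            Level current = logger.getLevel();
            if (!originalLevels.containsKey(logger)) {
                originalLevels.put(logger, current); // may store null (inherited level)
            }
            // Apply the level unless the logger is already explicitly set to something more verbose.
            if (current == null || current.intValue() > level.intValue()) {
                logger.setLevel(level);
            }
        }
    }

    /** Undoes every change made by previous setLevel() calls. */
    public synchronized void resetLevel() {
        for (Map.Entry<Logger, Level> entry : originalLevels.entrySet()) {
            entry.getKey().setLevel(entry.getValue()); // restoring null re-enables inheritance
        }
        originalLevels.clear();
    }

    public static void main(String[] args) {
        Logger logger = Logger.getLogger("com.example.diagnostics");
        JulLevelAdjuster adjuster = new JulLevelAdjuster();
        adjuster.setLevel(Level.FINEST, logger); // temporarily make logging more verbose
        adjuster.resetLevel();                   // restore the preconfigured level
    }
}
```

The design point mirrored here is that a level change is always reversible without a member restart, because the original (possibly inherited) level is recorded before it is overridden.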
{ "category": "App Definition and Development", "file_name": "yugabyte-jdbc-reference.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: YugabyteDB JDBC Smart Driver headerTitle: JDBC Drivers linkTitle: JDBC Drivers description: YugabyteDB JDBC Smart Driver for YSQL reference headcontent: JDBC Drivers for YSQL menu: v2.18: name: JDBC Drivers identifier: ref-yugabyte-jdbc-driver parent: drivers weight: 500 type: docs <ul class=\"nav nav-tabs-alt nav-tabs-yb\"> <li > <a href=\"../yugabyte-jdbc-reference/\" class=\"nav-link active\"> <i class=\"fa-brands fa-java\" aria-hidden=\"true\"></i> YugabyteDB JDBC Smart Driver </a> </li> <li > <a href=\"../postgres-jdbc-reference/\" class=\"nav-link\"> <i class=\"icon-postgres\" aria-hidden=\"true\"></i> PostgreSQL JDBC Driver </a> </li> </ul> YugabyteDB JDBC smart driver is a JDBC driver for built on the , with additional connection load balancing features. For more information on the YugabyteDB Java smart driver, see the following: <!-- place holder for adding link to YugabyteDB University course for Java Developers --> YugabyteDB JDBC Driver is available as maven dependency. Download the driver by adding the following dependency entries in the java project. To get the driver and HikariPool from Maven, add the following dependencies to the Maven project: ```xml <dependency> <groupId>com.yugabyte</groupId> <artifactId>jdbc-yugabytedb</artifactId> <version>42.3.0</version> </dependency> <!-- https://mvnrepository.com/artifact/com.zaxxer/HikariCP --> <dependency> <groupId>com.zaxxer</groupId> <artifactId>HikariCP</artifactId> <version>4.0.3</version> </dependency> ``` To get the driver and HikariPool, add the following dependencies to the Gradle project: ```java // https://mvnrepository.com/artifact/org.postgresql/postgresql implementation 'com.yugabyte:jdbc-yugabytedb:42.3.0' implementation 'com.zaxxer:HikariCP:4.0.3' ``` Learn how to perform common tasks required for Java application development using the YugabyteDB JDBC driver. {{< note title=\"Note\">}} The driver requires YugabyteDB version 2.7.2.0 or higher, and Java 8 or above. {{< /note >}} The following connection properties need to be added to enable load balancing: `load-balance` - enable cluster-aware load balancing by setting this property to `true`; disabled by default. `topology-keys` - provide comma-separated geo-location values to enable topology-aware load balancing. Geo-locations can be provided as `cloud.region.zone`. Specify all zones in a region as `cloud.region.*`. To designate fallback locations for when the primary location is unreachable, specify a priority in the form `:n`, where `n` is the order of precedence. For example, `cloud1.datacenter1.rack1:1,cloud1.datacenter1.rack2:2`. By default, the driver refreshes the list of nodes every 300 seconds (5 minutes ). You can change this value by including the `yb-servers-refresh-interval` parameter. The YugabyteDB JDBC driver's driver class is `com.yugabyte.Driver`. The driver package includes a `YBClusterAwareDataSource` class that uses one initial contact point for the YugabyteDB cluster as a means of discovering all the nodes and, if required, refreshing the list of live endpoints with every new connection attempt. The refresh is triggered if stale information (by default, older than 5 minutes) is discovered. 
To use the driver, do the following: Pass new connection properties for load balancing in the connection URL or properties" }, { "data": "To enable uniform load balancing across all servers, you set the `load-balance` property to `true` in the URL, as per the following example: ```java String yburl = \"jdbc:yugabytedb://127.0.0.1:5433/yugabyte?user=yugabyte&password=yugabyte&load-balance=true\"; DriverManager.getConnection(yburl); ``` To provide alternate hosts during the initial connection in case the first address fails, specify multiple hosts in the connection string, as follows: ```java String yburl = \"jdbc:yugabytedb://127.0.0.1:5433,127.0.0.2:5433,127.0.0.3:5433/yugabyte?user=yugabyte&password=yugabyte&load-balance=true\"; DriverManager.getConnection(yburl); ``` After the driver establishes the initial connection, it fetches the list of available servers from the universe, and performs load balancing of subsequent connection requests across these servers. To specify topology keys, you set the `topology-keys` property to comma separated values, as per the following example: ```java String yburl = \"jdbc:yugabytedb://127.0.0.1:5433/yugabyte?user=yugabyte&password=yugabyte&load-balance=true&topology-keys=cloud1.region1.zone1,cloud1.region1.zone2\"; DriverManager.getConnection(yburl); ``` Configure `YBClusterAwareDataSource` for uniform load balancing and then use it to create a connection, as per the following example: ```java String jdbcUrl = \"jdbc:yugabytedb://127.0.0.1:5433/yugabyte\"; YBClusterAwareDataSource ds = new YBClusterAwareDataSource(); ds.setUrl(jdbcUrl); // Set topology keys to enable topology-aware distribution ds.setTopologyKeys(\"cloud1.region1.zone1,cloud1.region2.zone2\"); // Provide more end points to prevent first connection failure // if an initial contact point is not available ds.setAdditionalEndpoints(\"127.0.0.2:5433,127.0.0.3:5433\"); Connection conn = ds.getConnection(); ``` Configure `YBClusterAwareDataSource` with a pooling solution such as Hikari and then use it to create a connection, as per the following example: ```java Properties poolProperties = new Properties(); poolProperties.setProperty(\"dataSourceClassName\", \"com.yugabyte.ysql.YBClusterAwareDataSource\"); poolProperties.setProperty(\"maximumPoolSize\", 10); poolProperties.setProperty(\"dataSource.serverName\", \"127.0.0.1\"); poolProperties.setProperty(\"dataSource.portNumber\", \"5433\"); poolProperties.setProperty(\"dataSource.databaseName\", \"yugabyte\"); poolProperties.setProperty(\"dataSource.user\", \"yugabyte\"); poolProperties.setProperty(\"dataSource.password\", \"yugabyte\"); // If you want to provide additional end points String additionalEndpoints = \"127.0.0.2:5433,127.0.0.3:5433,127.0.0.4:5433,127.0.0.5:5433\"; poolProperties.setProperty(\"dataSource.additionalEndpoints\", additionalEndpoints); // If you want to load balance between specific geo locations using topology keys String geoLocations = \"cloud1.region1.zone1,cloud1.region2.zone2\"; poolProperties.setProperty(\"dataSource.topologyKeys\", geoLocations); poolProperties.setProperty(\"poolName\", name); HikariConfig config = new HikariConfig(poolProperties); config.validate(); HikariDataSource ds = new HikariDataSource(config); Connection conn = ds.getConnection(); ``` This tutorial shows how to use the YugabyteDB JDBC Driver with YugabyteDB. It starts by creating a three-node cluster with a replication factor of 3. This tutorial uses the utility. 
Next, you use to demonstrate the driver's load balancing features and create a Maven project to learn how to use the driver in an application. {{< note title=\"Note\">}} The driver requires YugabyteDB version 2.7.2.0 or higher, and Java 8 or above. {{< /note>}} Create a universe with a 3-node RF-3 cluster with some fictitious geo-locations assigned. The placement values used are just tokens and have nothing to do with actual AWS cloud regions and zones. ```sh $ cd <path-to-yugabytedb-installation> ./bin/yb-ctl create --rf 3 --placement_info \"aws.us-west.us-west-2a,aws.us-west.us-west-2a,aws.us-west.us-west-2b\" ``` Download the yb-sample-apps JAR file. ```sh wget https://github.com/yugabyte/yb-sample-apps/releases/download/v1.4.0/yb-sample-apps.jar ``` Run the SqlInserts workload application, which creates multiple threads that perform read and write operations on a sample table created by the" }, { "data": "Uniform load balancing is enabled by default in all Sql* workloads of the yb-sample-apps, including SqlInserts. ```sh java -jar yb-sample-apps.jar \\ --workload SqlInserts \\ --numthreadsread 15 --numthreadswrite 15 \\ --nodes 127.0.0.1:5433,127.0.0.2:5433,127.0.0.3:5433 ``` The application creates 30 connections, 1 for each reader and writer threads. To verify the behavior, wait for the app to create connections and then visit `http://<host>:13000/rpcz` from your browser for each node to see that the connections are equally distributed among the nodes. This URL presents a list of connections where each element of the list has some information about the connection as shown in the following screenshot. You can count the number of connections from that list, or search for the occurrence count of the `host` keyword on that webpage. Each node should have 10 connections. For topology-aware load balancing, run the SqlInserts workload application with the `topology-keys1` property set to `aws.us-west.us-west-2a`; only two nodes are used in this case. ```sh java -jar yb-sample-apps.jar \\ --workload SqlInserts \\ --nodes 127.0.0.1:5433,127.0.0.2:5433,127.0.0.3:5433 \\ --numthreadsread 15 --numthreadswrite 15 \\ --topology_keys aws.us-west.us-west-2a ``` To verify the behavior, wait for the app to create connections and then navigate to `http://<host>:13000/rpcz`. The first two nodes should have 15 connections each, and the third node should have zero connections. When you're done experimenting, run the following command to destroy the local cluster: ```sh ./bin/yb-ctl destroy ``` To access sample applications that use the YugabyteDB JDBC driver, visit . To use the samples, complete the following steps: Install YugabyteDB by following the instructions in . Build the examples by running `mvn package`. Run the `run.sh` script, as per the following guideline: ```sh ./run.sh [-v] [-i] -D -<pathtoyugabyte_installation> ``` In the preceding command, replace: [-v] [-i] with `-v` if you want to run the script in `VERBOSE` mode. [-v] [-i] with `-i` if you want to run the script in `INTERACTIVE` mode. [-v] [-i] with `-v -i` if you want to run the script in both `VERBOSE` and `INTERACTIVE` mode at the same time. <path_to_yugabyte_installation> with the path to the directory where you installed YugabyteDB. The following is an example of a shell command that runs the script: ```sh ./run.sh -v -i -D ~/yugabyte-2.7.2.0/ ``` {{< note title=\"Note\">}} The driver requires YugabyteDB version 2.7.2.0 or higher. 
{{< /note>}} The `run` script starts a YugabyteDB cluster, demonstrates load balancing through Java applications, and then destroys the cluster. When started, the script displays a menu with two options: `UniformLoadBalance` and `TopologyAwareLoadBalance`. Choose one of these options to run the corresponding script with its Java application in the background." } ]